| entry_id (string, length 33) | published (string, length 14) | title (string, length 10–200) | authors (list) | primary_category (string, length 5–18) | categories (list) | text (string, length 2–817k) |
|---|---|---|---|---|---|---|
http://arxiv.org/abs/2306.11264v1
|
20230620033322
|
GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks
|
[
"Wentao Zhao",
"Qitian Wu",
"Chenxiao Yang",
"Junchi Yan"
] |
cs.LG
|
[
"cs.LG"
] |
[email protected]
Shanghai Jiao Tong University
Shanghai
China
[email protected]
Shanghai Jiao Tong University
Shanghai
China
[email protected]
Shanghai Jiao Tong University
Shanghai
China
Corresponding author.
[email protected]
Shanghai Jiao Tong University
Shanghai
China
Graph structure learning is a well-established problem that aims at optimizing graph structures adaptive to specific graph datasets to help message-passing neural networks (i.e., GNNs) yield effective and robust node embeddings. However, the common limitation of existing models lies in the underlying closed-world assumption: the testing graph is the same as the training graph. This premise requires independently training the structure learning model from scratch for each graph dataset, which leads to prohibitive computation costs and a serious risk of over-fitting. To mitigate these issues, this paper explores a new direction that moves forward to learn a universal structure learning model that can generalize across graph datasets in an open world. We first introduce the mathematical definition of this novel problem setting, and describe the model formulation from a probabilistic data-generative aspect. Then we devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs to capture the generalizable patterns of optimal message-passing topology across datasets. The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning. Across diverse datasets and various challenging cross-graph generalization protocols, our experiments show that even without training on target graphs, the proposed model i) significantly outperforms expressive GNNs trained on input (non-optimized) topology, and ii) surprisingly performs on par with state-of-the-art models that independently optimize adaptive structures for specific target graphs, with a notable orders-of-magnitude acceleration in training on the target graph.
<ccs2012>
<concept>
<concept_id>10010147.10010257.10010321</concept_id>
<concept_desc>Computing methodologies~Machine learning algorithms</concept_desc>
<concept_significance>300</concept_significance>
</concept>
</ccs2012>
[300]Computing methodologies~Machine learning algorithms
GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks
Junchi Yan
July 31, 2023
===================================================================================
§ INTRODUCTION
Graph neural networks (GNNs) <cit.>, as a de facto model class based on the message passing principle, have shown promising efficacy for learning node representations for graph-structured data, with extensive applications to, e.g., physics simulation <cit.>, traffic prediction <cit.>, and drug recommendation <cit.>. However, due to inevitably error-prone data collection <cit.>, the input graph may contain spurious and unobserved edges that lead to sub-optimal results of GNNs and degrade downstream performance.
Graph structure learning <cit.> serves as a plausible remedy for this issue by optimizing graph structures and GNN classifiers at the same time. To this end, recent endeavors explore different technical aspects, e.g., parameterizing each potential edge between any pair of nodes <cit.> or estimating potential links through a parameterized network <cit.>.
However, existing models limit their applicability to a closed-world hypothesis: the training and testing of structure learning models, which optimize the graph structures, are performed on the same graph. The issue is that, since structure learning is often heavyweight and requires sophisticated optimization, it can be prohibitively resource-consuming to train structure learning models from scratch for each graph dataset. Moreover, due to limited labels in common graph-based predictive tasks, structure learning models are prone to over-fitting, given that they cannot utilize the common knowledge shared across different graph datasets.
To resolve the above dilemma, this paper explores a novel problem setting termed Open-World Graph Structure Learning. Specifically, we target learning a generalizable graph structure learning model that is trained on multiple source graphs and can be directly adapted for inference (without re-training or fine-tuning) on new unseen target graphs. We formulate the problem as a bi-level optimization target that jointly learns a single dataset-shared structure learner and multiple dataset-specific GNNs tailored for particular graph datasets, as shown in Fig. <ref>. Under such a framework, the well-trained structure learner can leverage the common transferrable knowledge across datasets to enhance generalization and, more critically, can be readily utilized to yield adaptive message-passing topology for arbitrarily given target graphs.
Guided by this general goal, we propose GraphGLOW (short for A Graph Structure Learning Model for Open-World Generalization), which aims at learning the generalizable patterns of optimal message-passing topology across source graphs. Specifically, we first take a bottom-up perspective and formulate the generative process for the observed data in a probabilistic manner. On top of this, we derive a tractable and feasible learning objective through the lens of variational inference. The structure learner is specified as a multi-head weighted similarity function so as to guarantee enough expressivity for accommodating diverse structural information, and we further harness an approximation scheme to reduce the quadratic complexity overhead of learning potential edges between arbitrary node pairs.
To reasonably and comprehensively evaluate the model, we devise experiments with a diverse set of protocols that measure generalization ability at different difficulty levels (according to the intensity of distribution shifts between source graphs and target graphs).
Concretely, we consider: 1) in-domain generalization, in which we generalize from some citation (social) networks to other citation (social) networks; and 2) cross-domain generalization between citation and social networks.
The results, which are consistent across various combinations of source and target graph datasets, demonstrate that when evaluated on the target graphs, our approach i) consistently outperforms directly training the GNN counterpart on the original non-optimized graph structures of the target datasets and ii) performs on par with state-of-the-art structure learning methods <cit.> trained on target graphs from scratch, with up to 25× less training time consumed.
Our code is available at https://github.com/WtaoZhao/GraphGLOW.
§ PRELIMINARY AND PROBLEM DEFINITION
Node-Level Predictive Tasks. Denote a graph with N nodes as 𝒢 = (A, X, Y), where A = {a_uv}_{N×N} is an adjacency matrix (a_uv = 1 means the edge between nodes u and v exists and 0 otherwise), X = {x_u}_{N×D} is a feature matrix with x_u a D-dimensional feature vector of node u, and Y = {y_u}_{N×C}, with y_u the label vector of node u and C the number of classes. The node labels are partially observed as training data, based on which node-level prediction aims to predict the unobserved labels of testing nodes in the graph using node features and graph structures. The latter is often achieved via a GNN model, denoted as h_w, that yields predicted node labels Ŷ = h_w(A, X) and is optimized with the classification loss w^* = arg min_w ℒ(Ŷ, Y) using the observed labels of training nodes.
Closed-World Graph Structure Learning (GLCW).
The standard graph structure learning for node-level predictive tasks trains a graph structure learner g_θ to refine the given structure, i.e., Â = g_θ(A, X), over which the GNN classifier h_w conducts message passing for producing node representations and predictions. The learner g_θ is expected to produce optimal graph structures that give rise to satisfactory downstream classification performance of the GNN classifier. Formally, the goal of training g_θ along with h_w can be expressed as a nested optimization problem:
θ^* = arg min_θ min_w ℒ(h_w(g_θ(A, X), X), Y).
The above formulation of graph structure learning under closed-world assumptions constrains the training and testing nodes to the same graph, which requires g_θ to be trained from scratch on each graph dataset. Since g_θ is often much more complicated (e.g., with orders-of-magnitude more trainable parameters) and harder to optimize (due to the bi-level optimization (<ref>)) than the GNN h_w, GLCW leads to undesirable inefficiency and vulnerability to serious over-fitting (due to limited labeled information).
Open-World Graph Structure Learning (GLOW).
In this work, we turn to a new learning paradigm that generalizes graph structure learning to open-world assumptions, borrowing, more broadly, the concepts of domain generalization <cit.> and out-of-distribution generalization <cit.>. Specifically, assume that we are given multiple source graphs, denoted as {𝒢^s_m}_{m=1}^M = {(A^s_m, X^s_m, Y^s_m)}_{m=1}^M, and a target graph 𝒢^t = (A^t, X^t, Y^t), whose distribution is often different from that of any source graph. The goal is to train a universal structure learner g_θ on the source graphs that can be directly used for inference on the target graph without any re-training or fine-tuning. The trained structure learner is expected to produce desirable graph structures that bring better downstream classification for a GNN classifier optimized on the target graph.
More specifically, we consider a one-to-many framework that coordinates a shared graph structure learner g_θ and multiple dataset-specific GNNs {h_{w_m}}_{m=1}^M, where h_{w_m} with independent parameterization w_m is optimized for a given source graph 𝒢_m^s. With the aim of learning a universal g_θ that can generalize to new unseen target graphs, our training goal can be formulated as the following bi-level optimization problem:
θ^* = arg min_θ min_{w_1, …, w_M} ∑_{m=1}^M ℒ(h_{w_m}(g_θ(A^s_m, X^s_m), X^s_m), Y^s_m),
where the inner optimization is a multi-task learning objective. Generally, (<ref>) aims at finding an optimal g_θ that jointly minimizes the classification losses induced by the M GNN models, each trained for a particular source graph.
After training, we can directly adapt g_{θ^*} to the target graph for testing purposes, and only need to train a GNN h_w on the target graph:
w^* = arg min_w ℒ(h_w(g_{θ^*}(A^t, X^t), X^t), Y^t).
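To make the alternating optimization above concrete, the following is a minimal PyTorch-style sketch of one training loop for the bi-level objective; the module names (structure_learner, gnns), their call signatures, and the data layout are illustrative assumptions rather than the released implementation.

```python
import torch

def train_glow(structure_learner, gnns, graphs, epochs=100, lr=1e-2):
    """Sketch: one shared structure learner g_theta, one GNN h_{w_m} per
    source graph, all updated on the summed classification loss."""
    opt_theta = torch.optim.Adam(structure_learner.parameters(), lr=lr)
    opt_w = [torch.optim.Adam(g.parameters(), lr=lr) for g in gnns]
    for _ in range(epochs):
        total_loss = 0.0
        for m, (A, X, y, train_mask) in enumerate(graphs):
            A_hat = structure_learner(A, X)        # shared across datasets
            logits = gnns[m](A_hat, A, X)          # dataset-specific GNN
            total_loss = total_loss + torch.nn.functional.cross_entropy(
                logits[train_mask], y[train_mask])
        opt_theta.zero_grad()
        for o in opt_w:
            o.zero_grad()
        total_loss.backward()
        opt_theta.step()
        for o in opt_w:
            o.step()
```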
§ PROPOSED MODEL
To handle the above problem, we present an end-to-end learning framework that guides the central graph structure learner to learn adaptive message-passing structures exploited by multiple GNNs. The overview of GraphGLOW is shown in Fig. <ref>.
The fundamental challenge of GLOW lies in how to model and capture the generalizable patterns among adaptive structures of different graphs. To this end, we first take a data-generative perspective that treats the inputs and intermediate results as random variables and investigate their dependencies, based on which we present the high-level model formulation in a probabilistic form (Sec. <ref>). Then we proceed to instantiate the model components (Sec. <ref>). Finally, we discuss differentiable training approaches for optimization (Sec. <ref>).
§.§ Model Formulation
To commence, we characterize the data generation process by a latent variable model, based on which we derive the formulation of our method. We treat the latent graph Â (given by g_θ) as a latent variable whose prior distribution is given by p(Â | A, X). The prior distribution reflects how one presumes on the latent structures before the observed labels arrive. Then, the prediction is given by a predictive distribution p(Y | Â, A, X).
The learning objective aims at maximizing the log-likelihood of observed labels, which can be written as:
log p(Y | A, X) = log ∫_Â p(Y | A, X, Â) p(Â | A, X) dÂ.
To estimate latent graphs that could enhance message passing for downstream tasks, one plausible way is to sample from the posterior, i.e., p(Â | A, X, Y), conditioned on the labels from downstream tasks. Using Bayes' rule, we have
p(Â | A, X, Y) = p(Y | A, X, Â) p(Â | A, X) / ∫_Â p(Y | A, X, Â) p(Â | A, X) dÂ.
However, the integration over Â in the denominator is intractable to compute due to the exponentially large space of Â.
To circumvent this difficulty, we can introduce a variational distribution q(Â | A, X) over Â as an approximation to p(Â | A, X, Y). We can sample latent graphs from q(Â | A, X), i.e., instantiate it as the structure learner g_θ, and once q(Â | A, X) = p(Â | A, X, Y), we would have samples from the posterior that ideally generates the optimal graph structures for downstream prediction. Following this principle, we can start with minimizing the Kullback-Leibler divergence between q and p and derive the learning objective as follows:
D_KL(q(Â | A, X) ∥ p(Â | A, X, Y))
= - E_{Â ∼ q(Â | A, X)} [ log ( p(Y | A, X, Â) p(Â | A, X) / q(Â | A, X) ) ] + log p(Y | A, X).
Based on this equation, we further have the inequality which bridges the relationship between the Evidence Lower Bound (ELBO) and the observed-data log-likelihood:
log p(Y | A, X)
≥ E_{Â ∼ q(Â | A, X)} [ log ( p(Y | A, X, Â) p(Â | A, X) / q(Â | A, X) ) ].
The equality holds if and only if D_KL(q(Â | A, X) ∥ p(Â | A, X, Y)) = 0. The above fact suggests that we can optimize the ELBO as a surrogate for log p(Y | A, X), which involves the intractable integration.
More importantly, when the ELBO is optimized w.r.t. the q distribution, the variational bound is lifted to the original log-likelihood and one has q(Â | A, X) = p(Â | A, X, Y), i.e., the variational distribution equals the true posterior, which is what we expect.
Pushing further and incorporating source graphs 𝒢_m (we omit the superscript for simplicity), we arrive at the following objective:
E_{𝒢_m ∼ p(𝒢)} [
E_{Â ∼ q_θ(Â | A = A_m, X = X_m)} [ log p_{w_m}(Y | A = A_m, X = X_m, Â)
+ log p_0(Â | A = A_m, X = X_m) - log q_θ(Â | A = A_m, X = X_m) ]
].
Here we instantiate q(Â | A, X) as the shared structure learner g_θ, p(Â | A, X) as a (shared) non-parametric prior distribution p_0 over latent structures, and p(Y | A, X, Â) as the dataset-specific GNN model h_{w_m}, to suit the framework for our formulated problem in Section <ref>. The formulation of (<ref>) shares the spirit of Bayesian meta learning <cit.>. We can treat the GNN training as a dataset-specific learning task and the latent graph as a certain `learning algorithm' or `hyper-parameter', so (<ref>) essentially aims at learning a structure learner that can yield a desirable `learning algorithm' for each specific learning task on graphs. Furthermore, the three terms in (<ref>) have distinct effects: i) the predictive term log p_{w_m} acts as a supervised classification loss; ii) the prior term log p_0 serves as regularization on the generated structures; iii) the third term, which is essentially the entropy of q_θ, penalizes high confidence on certain structures.
To sum up, we can optimize (<ref>) with joint learning of the structure learner g_θ and the GNN models {h_{w_m}}_{m=1}^M on source graphs {𝒢_m}_{m=1}^M for training the structure learner. After that, we can generalize the well-trained g_{θ^*} to estimate latent graph structures for a new target graph 𝒢^t = (A^t, X^t) and only need to train the GNN model h_w w.r.t. the predictive objective with fixed θ^*:
E_{Â ∼ q_{θ^*}(Â | A = A^t, X = X^t)} [ log p_w(Y | A = A^t, X = X^t, Â) ].
We next discuss how to specify g_θ, h_{w_m} and p_0 with special focus on their expressiveness and efficiency in Section <ref>. Later, we present the details of loss computation and model training based on the above formulation in Section <ref>.
§.§ Model Instantiations
§.§.§ Instantiation for q_θ(Â | A, X)
The variational distribution aims at learning the conditional distribution that generates suitable latent structures for message passing based on input observations. A natural means is to treat each edge of the latent graph as a Bernoulli random variable, with the distribution q a product of N×N independent Bernoulli variables <cit.>.
The graph structure learner g_θ can be used to predict the Bernoulli parameter matrix. To accommodate the information from node features and graph structure, we use the node representation, denoted as z_u ∈ ℝ^d, where d is the embedding dimension, to compute the edge probability α_uv for edge (u, v) as
α_uv = δ( (1/H) ∑_{h=1}^H s(w_h^1 ⊙ z_u, w_h^2 ⊙ z_v) ),
where s(·,·) is a similarity function for two vectors, ⊙ denotes the Hadamard product, δ is a function that converts the input into values within [0,1], and w_h^1, w_h^2 ∈ ℝ^d are two weight vectors of the h-th head. Common choices for s(·,·) include the simple dot-product, cosine distance <cit.>, RBF kernel <cit.>, etc. Here we introduce H heads and aggregate their results to enhance the model's expressiveness for capturing the underlying influence between nodes from multifaceted causes, following the spirit of multi-head attention <cit.>. Besides, the weight vectors in (<ref>) learn to element-wise scale the input vectors, i.e., node representations, and adaptively attend to dominant features. Apart from this, the two independently parameterized weight vectors w_h^1, w_h^2 can have the same or distinct directions, which makes the model capable of connecting similar or dissimilar nodes and expressive enough to handle both homophilous and non-homophilous graphs.
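As an illustration, a minimal sketch of the multi-head weighted similarity is given below, assuming cosine similarity for s(·,·) and a clamp-plus-threshold for δ (both reasonable but not the only possible choices); tensor shapes and the threshold name tau are assumptions for this sketch.

```python
import torch

def edge_probabilities(Z, w1, w2, tau=0.0):
    """Z: [N, d] node embeddings; w1, w2: [H, d] per-head weight vectors.
    Returns alpha: [N, N] edge probabilities in [0, 1]."""
    # element-wise scaling per head, then cosine similarity between all pairs
    Zu = torch.nn.functional.normalize(Z.unsqueeze(0) * w1.unsqueeze(1), dim=-1)  # [H, N, d]
    Zv = torch.nn.functional.normalize(Z.unsqueeze(0) * w2.unsqueeze(1), dim=-1)  # [H, N, d]
    sim = torch.einsum('hnd,hmd->hnm', Zu, Zv).mean(dim=0)   # average over H heads
    alpha = torch.clamp(sim, min=0.0)                        # delta: truncate into [0, 1]
    alpha = torch.where(alpha < tau, torch.zeros_like(alpha), alpha)  # optional threshold
    return alpha
```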
To obtain a discrete latent graph Â = {â_uv}_{N×N}, one can sample each latent edge from â_uv ∼ Bernoulli(α_uv).
However, such an approach induces quadratic O(N^2) algorithmic complexity for computing and storing an estimated structure that entails potential links between any node pair, which can be prohibitive for large graphs. To reduce space and time complexity, we adopt a pivot-based structure learning method, as shown in Fig. <ref>. Concretely, we randomly choose P nodes in the graph as pivots, where
P is a hyperparameter much smaller than N (e.g., P ≈ N/10). We then leverage pivot nodes as intermediates and convert the N×N graph Â, which can be prohibitively large with dense edges, into a cascade of one N×P node-pivot bipartite graph Â_1 and one P×N pivot-node bipartite graph Â_2 = Â_1^⊤, which effectively controls the computational cost with a proper P. In this way, we can compute a node-pivot similarity matrix Λ = {α_up}_{N×P} based on (<ref>) to parameterize the distribution over latent graph structures. This only requires O(NP) time and space complexity, and one can sample from each Bernoulli(α_up) to obtain Â_1 and Â_2. Meanwhile, the original N×N adjacency matrix can be retrieved as Â = Â_1 Â_2, which suggests that one can execute message passing on Â_1 and Â_2 to approximate that on Â (see more details in Section <ref>). In terms of accelerating structure learning, other strategies, such as the all-pair message passing schemes with linear complexity explored by <cit.>, can also serve this purpose.
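A sketch of the pivot-based variant follows; it reuses the multi-head similarity idea above, and the random pivot selection shown in the comment is an assumption consistent with the description (P pivots chosen uniformly at random).

```python
import torch

def pivot_edge_probabilities(Z, pivot_idx, w1, w2):
    """Node-pivot similarity matrix Lambda = {alpha_up} of shape [N, P].
    The full latent graph A_hat = Lambda Lambda^T is never materialised,
    giving O(NP) instead of O(N^2) memory."""
    Zp = Z[pivot_idx]                                                               # [P, d]
    Zu = torch.nn.functional.normalize(Z.unsqueeze(0) * w1.unsqueeze(1), dim=-1)   # [H, N, d]
    Zv = torch.nn.functional.normalize(Zp.unsqueeze(0) * w2.unsqueeze(1), dim=-1)  # [H, P, d]
    Lam = torch.clamp(torch.einsum('hnd,hpd->hnp', Zu, Zv).mean(0), min=0.0)       # [N, P]
    return Lam

# Pivots chosen uniformly at random, with P much smaller than N:
# pivot_idx = torch.randperm(N)[:P]
```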
§.§.§ Instantiation for p_{w_m}(Y | A, X, Â)
The predictive distribution, parameterized by the GNN network h_{w_m}, aims at recursively propagating features along the latent graph to update node representations and producing the prediction for each node. We now present the details of the GNN's message passing on the latent graph with regard to expressiveness, stability and efficiency.
To begin with, we review the message-passing rule of common GNN models, like GCN <cit.>, operating on the original graph A:
Z^(l+1) = σ( MP_1(Z^(l), A) W^(l) ) = σ( D^{-1/2} A D^{-1/2} Z^(l) W^(l) ),
where W^(l) ∈ ℝ^{d×d} is a weight matrix, σ is a non-linear activation, D denotes the diagonal degree matrix of the input graph A, and Z^(l) = {z_u^(l)}_{N×d} is the stack of node representations at the l-th layer.
With the estimated latent graph Â = Â_1 Â_2, we perform message passing MP_2(·) in a two-step fashion to update node representations:
Z^(l+1/2) = RowNorm(Λ^⊤) Z^(l),
Z^(l+1) = RowNorm(Λ) Z^(l+1/2),
where Z^(l+1/2) is an intermediate node representation and Λ = {α_up}_{N×P} is the node-pivot similarity matrix calculated by (<ref>). Such a two-step procedure can be efficiently conducted within O(NP) time and space complexity.
Although feature propagation on the estimated latent structure could presumably yield better node representations, the original input graph structure also contains useful information, such as effective inductive bias <cit.>. Therefore, we integrate the two message-passing functions to compute the layer-wise update of node representations:
Z^(l+1) = σ( λ MP_1(Z^(l), A) W^(l) + (1 - λ) MP_2(Z^(l), Â) W^(l) ),
where λ is a trade-off hyper-parameter that controls the concentration weight on input structures. This design also improves training stability by reducing the impact of large variations of the latent structures during training.
With L GNN layers, one can obtain the prediction Ŷ by setting Ŷ = Z^(L), with W^(L-1) ∈ ℝ^{d×C} where C is the number of classes. Alg. <ref> shows the feed-forward computation of message passing.
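For concreteness, a single combined propagation layer might look as follows; this is a minimal sketch (not the released code) assuming a pre-normalised adjacency D^{-1/2} A D^{-1/2} is passed in and that the latent graph is represented by the node-pivot matrix Λ.

```python
import torch

def glow_layer(Z, A_norm, Lam, W, lam=0.5):
    """One layer combining MP_1 on the input graph and the two-step MP_2
    through pivots. Z: [N, d], A_norm: normalised (sparse) adjacency,
    Lam: [N, P] node-pivot similarities, W: [d, d'] layer weights."""
    def row_norm(M):
        return M / M.sum(dim=1, keepdim=True).clamp(min=1e-12)
    Z_half = row_norm(Lam.t()) @ Z            # step 1: nodes -> pivots  [P, d]
    Z_latent = row_norm(Lam) @ Z_half         # step 2: pivots -> nodes, approximates A_hat Z
    Z_input = torch.sparse.mm(A_norm, Z) if A_norm.is_sparse else A_norm @ Z
    return torch.relu(lam * Z_input @ W + (1.0 - lam) * Z_latent @ W)
```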
§.§.§ Instantiation for p_0(Â | A, X)
The prior distribution reflects how we presume on the latent graph structures without the information of observed labels. In other words, it characterizes how likely a given graph structure is to provide enough potential for feature propagation by GNNs. The prior can be leveraged as regularization on the estimated latent graph Â. With this consideration, we choose the prior as an energy function that quantifies the smoothness of the graph:
p_0(Â | A, X) ∝ exp( -α ∑_{u,v} Â_uv ‖x_u - x_v‖_2^2 - ρ ‖Â‖_F^2 ),
where ‖·‖_F is the Frobenius norm. The first term in (<ref>) measures the smoothness of the latent graph <cit.>, with the hypothesis that graphs with smoother features have lower energy (i.e., higher probability).
The second term helps avoid overly large node degrees <cit.>. The hyperparameters α and ρ control the strength of the regularization effects.
While we can retrieve the latent graph via Â = Â_1 Â_2, the computation of (<ref>) still requires O(N^2) cost. To reduce this overhead, we apply the regularization to the P×P pivot-pivot adjacency matrix B̂ = Â_2 Â_1 as a proxy:
r(Â) = log p_0(Â | A, X) ≈ -α ∑_{p,q} B̂_pq ‖x'_p - x'_q‖_2^2 - ρ ‖B̂‖_F^2,
where x'_p denotes the input feature of the p-th pivot node.
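A short sketch of this proxy reward on the pivot-pivot graph follows; the expected node-pivot matrix Λ is used in place of a sampled Â_1, and the hyper-parameter names are illustrative.

```python
import torch

def structure_reward(Lam, X_pivot, alpha_reg=0.1, rho_reg=0.1):
    """log p_0 surrogate on the P x P pivot-pivot graph B = Lam^T Lam:
    feature-smoothness term plus a Frobenius-norm penalty."""
    B = Lam.t() @ Lam                                  # [P, P]
    sq = torch.cdist(X_pivot, X_pivot, p=2) ** 2       # pairwise squared distances [P, P]
    smoothness = (B * sq).sum()
    return -alpha_reg * smoothness - rho_reg * (B ** 2).sum()
```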
§.§ Model Training
For optimization of (<ref>), we proceed to derive the loss functions and updating gradients for θ and w_m based on the three terms E_{q_θ}[log p_{w_m}], E_{q_θ}[log p_0] and E_{q_θ}[log q_θ].
§.§.§ Optimization for E_{q_θ}[log p_{w_m}]
The optimization difficulty stems from the expectation over q_θ, where the sampling process is non-differentiable and hinders back-propagation.
Common strategies for approximating sampling of discrete random variables include the Gumbel-Softmax trick <cit.> and the REINFORCE trick <cit.>. However, both strategies yield a sparse graph structure at each sampling, which could lead to high variance in the prediction result log p_{w_m}(Y | A, X, Â) produced by message passing over a sampled graph. To mitigate this issue, we instead adopt the Normalized Weighted Geometric Mean (NWGM) <cit.> to move the outer expectation to the feature level. Specifically, we have (see Appendix <ref> for detailed derivations)
∇_θ E_{q_θ(Â | A, X)} [log p_{w_m}(Y | A, X, Â)]
≈ ∇_θ log p_{w_m}(Y | A, X, Â = E_{q_θ(Â | A, X)}[Â]).
We denote the opposite of the above term as ∇_θ L_s(θ). The gradient w.r.t. w_m can be derived similarly. The above form is a biased estimate of the original objective, yet it reduces the variance from sampling and also improves training efficiency (without the need for message passing over multiple sampled graphs). (<ref>) induces the supervised cross-entropy loss.
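In code, the NWGM approximation simply means propagating through the expected structure (the dense probabilities Λ) rather than through sampled graphs; the sketch below is illustrative and assumes a single output projection W_out.

```python
import torch

def supervised_loss(Lam, Z, y, train_mask, W_out):
    """Supervised term L_s: forward pass through E[A_hat] instead of samples."""
    def row_norm(M):
        return M / M.sum(dim=1, keepdim=True).clamp(min=1e-12)
    Z_exp = row_norm(Lam) @ (row_norm(Lam.t()) @ Z)   # expected two-step propagation
    logits = Z_exp @ W_out
    return torch.nn.functional.cross_entropy(logits[train_mask], y[train_mask])
```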
§.§.§ Optimization for E_{q_θ}[log p_0]
For the second term in (<ref>), we adopt the REINFORCE trick, i.e., the policy gradient, to tackle the non-differentiability of sampling from q_θ. Specifically, in each feed-forward computation, we sample from the Bernoulli distribution of each edge given by the estimated node-pivot similarity matrix, i.e., Bernoulli(α_up), obtain the sampled latent bipartite graph Â_1, and subsequently set Â = Â_1 Â_2 = Â_1 Â_1^⊤. The probability of the latent structure can be computed as
π_θ(Â) = ∏_{u,p} ( Â_{1,up} α_up + (1 - Â_{1,up}) · (1 - α_up) ).
Denoting Â_k as the sampled result at the k-th time, we can independently sample K times and obtain {Â_k}_{k=1}^K and {π_θ(Â_k)}_{k=1}^K. Recall that the regularization reward from log p_0 is given by (<ref>).
The policy gradient <cit.> yields the gradient of the loss w.r.t. θ as
∇_θ L_r(θ) = -∇_θ E_{Â ∼ q(Â | A, X)} [log p_0(Â | A, X)]
≈ -∇_θ (1/K) ∑_{k=1}^K log π_θ(Â_k) ( r(Â_k) - r̄ ),
where r̄ acts as a baseline by averaging the regularization rewards r(Â_k) within one feed-forward computation, which helps reduce the variance during policy gradient training <cit.>.
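The REINFORCE estimate with a mean-reward baseline can be sketched as follows (assumptions: K samples per forward pass, rewards detached from the graph so only log π_θ carries gradient, hyper-parameter names illustrative).

```python
import torch

def regularization_loss(Lam, X_pivot, K=4, alpha_reg=0.1, rho_reg=0.1):
    """Policy-gradient surrogate for -E_q[log p_0] with a baseline."""
    log_probs, rewards = [], []
    for _ in range(K):
        A1 = torch.bernoulli(Lam)                                  # sample node-pivot edges
        log_pi = (A1 * torch.log(Lam + 1e-12)
                  + (1 - A1) * torch.log(1 - Lam + 1e-12)).sum()
        B = A1.t() @ A1                                            # pivot-pivot proxy graph
        sq = torch.cdist(X_pivot, X_pivot, p=2) ** 2
        r = -alpha_reg * (B * sq).sum() - rho_reg * (B ** 2).sum()
        log_probs.append(log_pi)
        rewards.append(r.detach())                                 # reward treated as a constant
    rewards = torch.stack(rewards)
    baseline = rewards.mean()
    return -torch.stack([lp * (r - baseline)
                         for lp, r in zip(log_probs, rewards)]).mean()
```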
§.§.§ Optimization with E_{q_θ}[log q_θ]
The last entropy term for q_θ can be directly computed as
L_e(θ) = E_{Â ∼ q(Â | A, X)} [log q(Â | A, X)]
≈ (1/(NP)) ∑_{u=1}^N ∑_{p=1}^P [ α_up log α_up + (1 - α_up) log(1 - α_up) ],
where we again adopt the node-pivot similarity matrix as a proxy for the estimated latent graph.
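This term is a mean Bernoulli negative entropy over the entries of Λ; a two-line sketch:

```python
import torch

def entropy_loss(Lam, eps=1e-12):
    """L_e(theta): average of alpha*log(alpha) + (1-alpha)*log(1-alpha) over Lam."""
    return (Lam * torch.log(Lam + eps)
            + (1 - Lam) * torch.log(1 - Lam + eps)).mean()
```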
§.§.§ Iterative Structure Learning for Acceleration
A straightforward way is to perform one round of structure inference and one round of the GNN's message passing for prediction in each feed-forward computation. To let structure learning and GNN learning mutually reinforce each other <cit.>, we instead consider multiple iterative updates of graph structures and node representations before each back-propagation. More specifically, in each epoch, we repeatedly update the node representations Z^t (where the superscript t denotes the t-th iteration) and the latent graph Â^t until a given maximum budget is reached. To accelerate training,
we aggregate the losses ℒ^t of all iteration steps for parameter updating.
As different graphs have different feature spaces, we use the first layer of the GNN as an encoder at the very beginning and then feed the encoded representations to the structure learner.
The training algorithm for the structure learner g_θ on source graphs is described in Alg. <ref> (in the appendix), where we train the structure learner for multiple episodes and, in each episode, train g_θ on each source graph for several epochs.
At test time, the well-trained g_θ is fixed and we train a GNN h_w on the target graph with latent structures inferred by g_θ, as described in Alg. <ref>.
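A per-epoch sketch of this iterative scheme is given below; encoder, structure_learner and gnn are placeholders for the components sketched earlier, and T follows the iteration budget described above.

```python
import torch

def train_epoch(encoder, structure_learner, gnn, A_norm, X, y, train_mask,
                pivot_idx, T=10, lam=0.5):
    """T rounds of structure inference and representation updates; the losses
    are accumulated and returned for a single backward pass."""
    Z = encoder(A_norm, X)                    # first GNN layer as shared-dim encoder
    losses = []
    for _ in range(T):
        Lam = structure_learner(Z, pivot_idx) # infer latent node-pivot structure
        Z, logits = gnn(Z, A_norm, Lam, lam)  # message passing on A and A_hat
        losses.append(
            torch.nn.functional.cross_entropy(logits[train_mask], y[train_mask]))
    return torch.stack(losses).mean()         # aggregated loss for one backprop
```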
§ RELATED WORKS
Graph Neural Networks.
Graph neural networks (GNNs) <cit.> have achieved impressive performance in modeling graph-structured data. Nonetheless, there is increasing evidence of GNNs' deficiency on graph structures that are inconsistent with the principle of message passing.
One typical situation lies in non-homophilous graphs <cit.>, where adjacent nodes tend to have dissimilar features/labels. Recent studies devise adaptive feature propagation/aggregation to tackle such heterophily <cit.>. Another situation stems from graphs with noisy or spurious links, for which several works propose to purify the observed structures for more robust node representations <cit.>. Our work is related to these works in searching for adaptive graph structures that are suitable for GNNs' message passing. Yet, the key difference is that our method targets learning a new graph beyond the scope of the input one, while the above works focus on message passing within the input graph.
Graph Structure Learning. To effectively address the limitations of GNNs' feature propagation within observed structures, many recent works attempt to jointly learn graph structures and the GNN model. For instance, <cit.> models each edge as a Bernoulli random variable and optimizes graph structures along with the GCN. To exploit information from the observed structure for structure learning, <cit.> proposes a metric learning approach based on the RBF kernel to compute edge probabilities from node representations, while <cit.> adopts an attention mechanism to achieve a similar goal. Furthermore,
<cit.> considers an iterative method that enables mutual reinforcement between learning graph structures and node embeddings. Also,
<cit.> presents a probabilistic
framework that views the input graph as a random sample from a collection modeled by a parametric random graph model.
<cit.> harnesses variational inference to estimate a posterior of graph structures and GNN parameters. While learning graph structures often requires O(N^2) complexity, a recent work <cit.> proposes an efficient Transformer that achieves latent structure learning in each layer with O(N) complexity. However, though these methods have shown promising results, they assume that training and testing nodes come from the same graph and consider only one graph.
By contrast, we consider graph structure learning under the cross-graph setting and propose a general framework to learn a shared structure learner which can generalize to target graphs without any re-training.
Out-of-Distribution Generalization on Graphs. Due to the demand for handling testing data in the wild, improving the capability of neural networks to perform satisfactorily on out-of-distribution data has received increasing attention <cit.>. Recent studies, e.g., <cit.>, explore effective treatments for tackling general distribution shifts on graphs, and there are also works focusing on particular categories of distribution shifts such as size generalization <cit.>, molecular scaffold generalization <cit.>, feature/attribute shifts <cit.>, topological shifts <cit.>, etc. To the best of our knowledge, there is no prior work considering OOD generalization in the context of graph structure learning. In our case, the target graph, for which the structure learner is expected to yield adaptive structures, can have a distribution disparate from that of the source graphs. The distribution shifts could stem from the feature/label space, graph sizes or domains (e.g., from social networks to citation networks). As the first attempt along this path, our work fills this research gap and enables graph structure learning models to deal with new unseen graphs in an open world.
§ EXPERIMENTS
We apply GraphGLOW to real-world datasets for node classification to test the efficacy of the proposed structure learner in boosting the performance of GNN learning on target graphs with distribution shifts from source graphs. We specify the backbone GNN network of GraphGLOW as a two-layer GCN <cit.>. We focus on the following research questions:
1) How does GraphGLOW perform compared with directly training GNN models on the input structures of target graphs?
2) How does GraphGLOW perform compared with state-of-the-art structure learning models that are directly trained on target datasets, in terms of both accuracy and training time?
3) Are the proposed components of GraphGLOW effective and necessary for the achieved performance?
4) What is the impact of hyper-parameters on performance, and what is the impact of attacks on observed edges?
5) What are the properties of the inferred latent graphs, and what generalizable patterns does the structure learner capture?
§.§ Experimental Protocols
Datasets. Our experiments are conducted on several public graph datasets. First we consider three commonly used citation networks: Cora, CiteSeer and PubMed, using the same splits as in <cit.>. These three datasets have high homophily ratios (i.e., adjacent nodes tend to have similar labels) <cit.>. Apart from these, we also consider four social networks from Facebook-100 <cit.>, which have low homophily ratios.
Readers may refer to Appendix <ref> for more dataset information such as splitting ratios.
Competitors. We mainly compare with GCN <cit.>, the GNN counterpart trained on the input structure, to test the efficacy of the latent graphs produced by GraphGLOW. As a further investigation, we also compare with other advanced GNN models: GraphSAGE <cit.>, GAT <cit.>, APPNP <cit.>, H_2GCN <cit.> and GPRGNN <cit.>. Here APPNP, H_2GCN and GPRGNN are all strong GNN models equipped with adaptive feature propagation and high-order aggregation. For these pure GNN models, training and testing are performed on (the same) target graphs.
Furthermore, we compete with state-of-the-art graph structure learning models: LDS <cit.>, IDGL <cit.> and VGCN <cit.>. Since these models are all designed to be trained on one dataset from scratch, we directly train them on the target graph, so in principle they could yield better performance than GraphGLOW.
We also consider variants of GraphGLOW as baselines. We replace the similarity function s with an attention-based structure learner, which follows the same training scheme as GraphGLOW. Besides, we consider some non-parametric similarity functions, namely dot-product, KNN and cosine distance (denoted with subscripts dp, knn and cos, respectively). For these variants, we only need to train the GNN network on target graphs, with the non-parametric structure learners yielding latent structures. In addition, we introduce a variant that shares the same architecture as GraphGLOW but is directly trained on target graphs; in principle this variant could also produce superior results to GraphGLOW. We report the test accuracy given by the model that produces the highest validation accuracy within 500 training epochs.
§.§ In-domain Generalization
We first consider transferring within social networks or within citation networks. The results are reported in Table <ref>, where for each social network (resp. citation network) as the target, we use the other social networks (resp. citation networks) as the source datasets.
GraphGLOW performs consistently better than GCN, i.e., the counterpart using the observed graph for message passing, which shows that GraphGLOW can capture generalizable patterns of desirable message-passing structures for unseen datasets that indeed boost the GCN backbone's performance on downstream tasks. In particular, the improvement over GCN is over 5% on Cornell5 and Reed98, two datasets with low homophily ratios (as shown in Table <ref>). The reason is that for non-homophilous graphs, where message passing may propagate inconsistent signals (as mentioned in Section <ref>), GNN learning benefits more from structure learning than on homophilous graphs. Furthermore, compared to other strong GNN models, GraphGLOW still achieves a slight improvement over the best competitors even though the backbone GCN network is less expressive. One could expect further performance gains from GraphGLOW if the GNN backbone were specified as a more advanced architecture.
In contrast with the non-parametric structure learning models and the attention-based variant, GraphGLOW outperforms them by a large margin throughout all cases, which verifies the superiority of our design of a multi-head weighted similarity function that can accommodate multi-faceted, diverse structural information. Compared with the variant trained directly on target graphs, GraphGLOW performs on par with it and even exceeds it on Cornell5 and Amherst41. The possible reasons are two-fold. First, there exist sufficient shared patterns among citation networks (resp. social networks), which paves the way for successful generalization of GraphGLOW. Second, the directly trained variant can sometimes overfit specific datasets, since the number of free parameters is often orders of magnitude larger than the number of labeled nodes in the dataset. The results also imply that our transfer learning approach can help mitigate over-fitting on one dataset.
Moreover, GraphGLOW can generalize the structure learner to unseen graphs that are nearly three times larger than the training graphs, i.e., Cornell5.
§.§ Cross-domain Generalization
We next consider a more difficult task: transferring between social networks and citation networks. The difficulty stems from two aspects: 1) social networks and citation networks are from distinct categories and thus have larger gaps between their underlying data-generating distributions; 2) they have varied homophily ratios, which indicates that the observed edges play different roles in the original graphs.
In Table <ref> we report the results.
Despite the task difficulty, GraphGLOW manages to achieve superior results to GCN and also outperforms the other non-parametric graph structure learning methods throughout all cases. This suggests GraphGLOW's ability to handle target graphs with distinct properties.
In Fig. <ref> we further compare GraphGLOW with three state-of-the-art graph structure learning models that are directly trained on target graphs, following the setting of Table <ref>. The results show that even though it is trained on source graphs different from the target one, GraphGLOW still performs on par with the competitors that are trained and tested on (the same) target graphs. Notably, GraphGLOW significantly reduces training time.
For instance, on Johns Hopkins55, GraphGLOW is 6x, 9x and 40x faster than IDGL, LDS and VGCN, respectively. This shows a clear advantage of GraphGLOW in terms of training efficiency and also verifies that our model indeed helps reduce the significant training cost of structure learning on target graphs.
§.§ Ablation Studies
We conduct ablation studies to test the effectiveness of the iterative learning scheme and the regularization on graph structures.
Effect of Iterative Learning.
We replace the iterative learning process with one-step prediction (i.e., one round of structure estimation and one update of node representations in one feed-forward computation) and compare its test accuracy with GraphGLOW. The results are shown in Fig. <ref>, where we follow the setting of Table <ref>.
The non-iterative version exhibits a considerable drop in accuracy (as large as 5.4% and 8.8% when tested on the target graphs Cornell5 and Amherst41, respectively).
Therefore, the iterative updates indeed help to learn better graph structures and node embeddings, contributing to higher accuracy on downstream prediction.
Effect of Regularization on Structures.
We remove the regularization on structures (i.e., setting α = ρ = 0) and compare with GraphGLOW. As shown in Fig. <ref>, there is more or less performance degradation. In fact, the regularization loss derived from the prior distribution over latent structures provides useful guidance for structure learning, especially when labeled information is limited.
§.§ Hyper-parameter Sensitivity
In Fig. <ref> (in the appendix), we study the variation of the model's performance w.r.t. λ (the weight on input graphs) and P (the number of pivots) on the target datasets Cora and CiteSeer.
Overall, the model is not sensitive to λ.
For Cora, larger λ contributes to higher accuracy, while for CiteSeer, smaller λ yields better performance. The possible reason is that the initial graph of Cora is more suitable for message passing (due to its higher homophily ratio).
As for the impact of the pivot number, as shown in Fig. <ref>(b), a moderate value of P provides decent downstream performance.
§.§ Robustness Analysis
In addition, we find that GraphGLOW is more immune to edge deletion attacks than GCN. We randomly remove 10-50% of the edges of the target graphs and then apply GraphGLOW and GCN.
We present the results on Johns Hopkins55 in Fig. <ref> and leave more results to Appendix <ref>. When the drop ratio increases, the performance gap between the two models becomes more significant.
This is due to our structure learner's ability to learn new graph structures from node embeddings, making it less reliant on the initial graph structures and more robust to attacks on input edges.
§.§ Case Study
We further probe into why our approach is effective for node classification by dissecting the learnt graph structures.
Specifically, we measure the homophily ratios of the learnt structures and the variance of the neighborhood distributions of nodes with the same labels.
As nodes receive messages from neighbors during message passing, the more similar the neighborhood patterns of nodes within one class are, the easier it is for GNNs to correctly classify them <cit.>.
We use the homophily metric proposed in <cit.> to measure homophily ratios.
For the variance of neighborhood distributions, we first compute the variance within each class and then take a weighted sum to obtain the final variance, where the weight is proportional to the number of nodes in the corresponding class.
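For reference, the simplest edge-homophily measure (the fraction of edges whose endpoints share a label) can be computed as below; the cited work may use a class-adjusted variant, so this sketch is only an approximation of the metric actually reported.

```python
import torch

def edge_homophily(edge_index, labels):
    """Fraction of edges joining same-label endpoints.
    edge_index: [2, E] tensor of (source, target) node indices."""
    src, dst = edge_index
    return (labels[src] == labels[dst]).float().mean().item()
```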
Homophily Ratio. We choose Amherst41, Johns Hopkins55 and Reed98 as target graphs, and record the homophily ratios of the inferred latent structures every five epochs during training. As shown in Fig. <ref>, the homophily ratios of the inferred latent graphs exhibit a clear increase as training proceeds, and the final ratio is considerably larger than that of the input graph. The results indicate that the trained structure learner inclines to output more homophilous latent structures, which are reckoned to be more suitable for message passing.
Neighborhood Distribution Variance. As shown in Fig. <ref>, the variance of the neighborhood distributions of nodes with the same label is significantly smaller in our learnt structures, making it easier to classify nodes through message passing.
The results also imply that a high homophily ratio and similar intra-class neighborhood patterns could be two of the underlying transferrable patterns of optimal message-passing structure identified by GraphGLOW.
§ CONCLUSION
This paper proposes Graph Structure Learning under Cross-Graph Distribution Shift, a new problem that requires the structure learner to transfer to new target graphs without re-training while handling distribution shifts.
We develop a transfer learning framework that guides the structure learner to discover knowledge shared across source datasets with respect to optimal message-passing structures for boosting downstream performance. We also carefully design the model components and training approach in terms of expressiveness, scalability and stability.
We devise experiments of various difficulties and demonstrate the efficacy and robustness of our approach.
Although our framework is fairly general, we believe there are other potential methods that can lead to equally competitive results, which we leave as future work.
The work was supported in part by National Key Research and Development Program of China (2020AAA0107600), NSFC (62222607), Science and Technology Commission of Shanghai Municipality (22511105100), and Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).
§ DERIVATIONS FOR NWGM
First, when taking the gradient, we have ∇_θ E_{q_θ}[log p_{w_m}(Y | A, X, Â)] ≈ c ∇_θ log E_{q_θ}[p_{w_m}(Y | A, X, Â)] with basic applications of the chain rule. We then adopt the Normalized Weighted Geometric Mean (NWGM) <cit.>:
∇_θ log E_{q_θ(Â|A,X)}[p_{w_m}(Y_{u,c} = 1 | A, X, Â)]
= ∇_θ log ∑_Â [ exp(s_1(Â)) / (exp(s_1(Â)) + exp(s_2(Â))) ] q_θ(Â|A,X)
= ∇_θ log ∑_Â Softmax(s_1(Â)) q_θ(Â|A,X)
≈ ∇_θ log NWGM(Softmax(s_1(Â)))
= ∇_θ log [ ∏_Â [exp(s_1(Â))]^{q_θ(Â|A,X)} / ( ∏_Â [exp(s_1(Â))]^{q_θ(Â|A,X)} + ∏_Â [exp(s_2(Â))]^{q_θ(Â|A,X)} ) ]
= ∇_θ log [ exp(∑_Â s_1(Â) q_θ(Â|A,X)) / ( exp(∑_Â s_1(Â) q_θ(Â|A,X)) + exp(∑_Â s_2(Â) q_θ(Â|A,X)) ) ]
= ∇_θ log [ exp(E_{q_θ(Â|A,X)}[s_1(Â)]) / ( exp(E_{q_θ(Â|A,X)}[s_1(Â)]) + exp(E_{q_θ(Â|A,X)}[s_2(Â)]) ) ]
= ∇_θ log p_{w_m}(Y_{u,c} = 1 | A, X, Â = E_{q_θ(Â|A,X)}[Â]),
where s_1 denotes the positive predicted score for the class c which is indeed associated with node u, and s_2(Â) = 0 in our case. We thus conclude the proof of (<ref>).
§ DATASETS AND EXPERIMENTAL DETAILS
The statistical information of the datasets is displayed in Table <ref>. For the splits of Cora, CiteSeer and PubMed, we follow <cit.> and randomly select 20 instances per
class for training, and 500/1000 instances for validation/testing in each dataset. For the remaining datasets,
we employ random train/valid/test splits of 50%/25%/25%.
The backbone GNN network is specified as a two-layer GCN model.
We set the similarity function s in (<ref>) as cosine similarity and δ as a threshold-based truncation.
Besides, since the dimensions of input node features differ across datasets, we adopt a transformation network that converts input features into d-dimensional node representations before the structure learning module, as shown in Alg. <ref> (Z^0 = MLP(X; w_m) or Z^0 = GCN(A, X; w_m)). We can specify the transformation as a one-layer MLP or a one-layer GCN network (what we adopt).
Most of the experiments were conducted on an NVIDIA GeForce RTX 2080 Ti with 11GB memory. For experiments involving two larger datasets, PubMed and Cornell5, we utilized an NVIDIA GeForce RTX 3090 with 24 GB memory.
§ HYPERPARAMETERS
We use grid search on the validation set to tune the hyperparameters.
The learning rate is searched in {0.001, 0.005, 0.01, 0.05}; dropout is searched in {0, 0.2, 0.3, 0.5, 0.6}; the number of hidden channels is searched in {16, 32, 64, 96}.
Other hyperparameters for specific models are stated below.
For GCN, GraphSAGE and H^2GCN, we use 2 layers. For GAT, we search the number of attention heads in {2, 4} and use 2 layers. For APPNP and GPRGNN, we search α in {0.1, 0.2, 0.5} and set K to 10. We list the search space of the structure learning methods below.
* GraphGLOW and its variants:
pivot number P ∈ {800, 1000, 1200, 1400}, embedding size d ∈ {16, 32, 64, 96},
λ ∈ [0.1, 0.9],
α ∈ {0, 0.1, 0.15, 0.2, 0.25, 0.3}, ρ ∈ {0, 0.1, 0.15, 0.2, 0.25, 0.3}, threshold ∈ {4e-5, 8.5e-5}, H ∈ {4, 6}, T = 10, E ∈ {1, 2, 3}.
* LDS: the sampling time S = 16, the patience window size ρ ∈ {10, 20}, the hidden size ∈ {8, 16, 32, 64},
the inner learning rate γ ∈ {1e-4, 1e-3, 1e-2, 1e-1},
and the number of updates used to compute the truncated hypergradient τ ∈ {5, 10, 15}.
* IDGL: ε = 0.01, hidden size ∈ {16, 64, 96, 128}, λ ∈ {0.5, 0.6, 0.7, 0.8}, η ∈ {0, 0.1, 0.2}, α ∈ {0, 0.1, 0.2}, β ∈ {0, 0.1}, γ ∈ {0.1, 0.2}, m ∈ {6, 9, 12}.
* VGCN: ρ̄_1 ∈ {0.25, 0.5, 0.75, 0.99}, ρ̄_0 = 10^-5, τ_0 ∈ {0.1, 0.5}, τ ∈ {0.1, 0.5}, β ∈ {10^-4, 10^-3, 10^-2, 1}. The sampling time is 3. The maximum number of training epochs is 5000.
§ MORE EXPERIMENTAL RESULTS
We compare GraphGLOW using an MLP and a GCN, respectively, as the transformation network before the structure learning module, and report the results in Table <ref>.
In summary, the two choices are equally competent, which suggests that GraphGLOW is not sensitive to the transformation network used for converting node features of various dimensions into embeddings with a shared dimension. This also implies that simple neural architectures, e.g., an MLP or a GCN, provide enough capacity for extracting the information in the input observations, which is leveraged by the shared graph structure learner to discover generalizable patterns in optimal message-passing structures.
We also provide more results of the edge deletion experiments in Fig. <ref>. We randomly remove 10-50% of the edges of the target graphs and then apply GraphGLOW and GCN. The results demonstrate that GraphGLOW is more immune to edge deletion. This is due to our structure learner's ability to learn new structures, making it less reliant on the initial graph structures and more robust to attacks on input edges.
|
http://arxiv.org/abs/2306.06706v1
|
20230611153917
|
Ascending chain condition in generic groups
|
[
"Ilya Kapovich"
] |
math.GR
|
[
"math.GR",
"math.GT",
"20F69"
] |
Ascending chain condition in generic groups
Ilya Kapovich
Department of Mathematics and Statistics, Hunter College of CUNY, 695 Park Ave, New York, NY 10065
[email protected]
The author was supported by the individual NSF grant DMS-1905641.
2020 Mathematics Subject Classification. Primary 20F69; Secondary 20E07, 20E15.
We prove that for any fixed integers m ≥ 2, t ≥ 1, k ≥ 2 a generic m-generator t-relator group satisfies the Ascending Chain Condition for k-generated subgroups.
July 31, 2023
=================
§ INTRODUCTION
For an integer k ≥ 1 we say that a group G is k-generated if there exist elements g_1, …, g_k ∈ G such that G = ⟨g_1, …, g_k⟩. Thus G being k-generated is equivalent to having rank(G) ≤ k, where for a group G the rank of G, denoted rank(G), is defined as rank(G) = min{|S| : S ⊆ G, ⟨S⟩ = G}.
For k ≥ 1 we say that a group G satisfies the ascending chain condition ACC_k for k-generated subgroups if every strictly ascending chain of k-generated subgroups of G
H_1 ⊊ H_2 ⊊ …
terminates in finitely many steps. For example, if every nontrivial element of G is contained in a unique maximal cyclic subgroup, then G satisfies ACC_1. For this reason, every torsion-free word-hyperbolic group satisfies ACC_1. A finite group G obviously satisfies ACC_k for every k ≥ 1. It is also fairly easy to see that a finitely generated abelian group satisfies ACC_k for all k ≥ 1. However, in most situations establishing condition ACC_k requires substantial work.
A classic result of Higman <cit.> and Takahasi shows that a free group F satisfies ACC_k for every k ≥ 1. Kapovich and Myasnikov <cit.> later gave a proof of this result using Stallings foldings <cit.>. Shusterman <cit.> used pro-finite techniques to show that limit groups (including, in particular, closed orientable surface groups) satisfy ACC_k for all k ≥ 1. Note that since closed non-orientable surface groups are commensurable with closed orientable surface groups, it follows that ACC_k holds for all closed surface groups.
Recently Bering and Lazarovich <cit.> adapted the Kapovich-Myasnikov argument to give another proof of the ACC_k condition for surface groups; they also proved that closed 3-manifold groups satisfy the ascending chain condition for k-generated free subgroups, for an arbitrary fixed k ≥ 1.
In this paper we establish condition ACC_k, for any fixed k ≥ 2, for generic finite group presentations. We refer the reader to <cit.> for a more detailed discussion of various models of random and generic groups. We only briefly recall a few relevant definitions here. Let m ≥ 2, A = {a_1, …, a_m} and F_m = F(A). For an integer t ≥ 1 let 𝒞_{m,t} be the set of all group presentations ⟨a_1, …, a_m | r_1, …, r_t⟩ where the r_i ∈ F(A) are nontrivial cyclically reduced words. For a subset 𝒫 ⊆ 𝒞_{m,t} and n ≥ 1 denote by 𝒩(𝒫, n, t) the number of presentations ⟨a_1, …, a_m | r_1, …, r_t⟩ in 𝒫 with |r_i| ≤ n for i = 1, …, t. Similarly, we denote by 𝒮(𝒫, n, t) the number of presentations ⟨a_1, …, a_m | r_1, …, r_t⟩ in 𝒫 with |r_i| = n for i = 1, …, t.
We say that 𝒫 is generic in 𝒞_{m,t} if
lim_{n→∞} 𝒩(𝒫, n, t) / 𝒩(𝒞_{m,t}, n, t) = 1.
If, in addition, the convergence in the above limit is exponentially fast, we say that 𝒫 is exponentially generic in 𝒞_{m,t}. Note that both 𝒩(𝒞_{m,1}, n, 1) and 𝒮(𝒞_{m,1}, n, 1) grow as Const·(2m-1)^n, and both 𝒩(𝒞_{m,t}, n, t) and 𝒮(𝒞_{m,t}, n, t) grow as Const·(2m-1)^{tn}. For this reason replacing |r_i| ≤ n by |r_i| = n leads to a notion of genericity with similar properties.
Our main result is:
Let m ≥ 2, t ≥ 1, k ≥ 1 be integers. There exists an exponentially generic class Q_{m,t,k} of group presentations with m generators and t defining relators such that every group G = ⟨a_1, …, a_m | r_1, …, r_t⟩ from Q_{m,t,k} satisfies the Ascending Chain Condition ACC_k for k-generated subgroups.
Theorem <ref> deals with the "basic" or "few relators" model of random groups, where the number t ≥ 1 of defining relators is fixed and n = max_i |r_i| tends to infinity. The "density" model of random groups, for a fixed density parameter d ∈ (0, 1), considers group presentations G = ⟨a_1, …, a_m | r_1, …, r_t⟩ where all |r_i| = n and where t = t_n grows with n as t_n = (2m-1)^{dn}. We denote by 𝒞_m the set of all finite group presentations ⟨a_1, …, a_m | R⟩ where R is a finite collection of nontrivial cyclically reduced words of equal length. Let 𝒫 ⊆ 𝒞_m. We say that a presentation from 𝒞_m belongs to 𝒫 with overwhelming probability at density 0 < d < 1 if
lim_{n→∞} 𝒮(𝒫, n, t_n) / 𝒮(𝒞_m, n, t_n) = 1,
where t_n = ⌊(2m-1)^{dn}⌋.
Let m ≥ 2, k ≥ 1 be integers.
There exists 0 < d_0 = d_0(m, k) < 1 such that for every 0 < d ≤ d_0, with overwhelming probability a group G given by a density-d presentation on m generators satisfies the Ascending Chain Condition ACC_k for k-generated subgroups.
All of the ACC_k groups produced by the proofs of Theorem <ref> and Theorem <ref> satisfy the C'(1/6) small cancellation condition and are therefore word-hyperbolic. Note that there do exist torsion-free word-hyperbolic groups where already ACC_2 fails. Consider, for example, the group
G = ⟨a, b, t | t^{-1}at = ab^2a, t^{-1}bt = ba^2b⟩.
This group arises as the mapping torus of the injective non-surjective endomorphism φ: F(a,b) → F(a,b), φ(a) = ab^2a, φ(b) = ba^2b. The subgroup φ(F(a,b)) ⊊ F(a,b) is malnormal in F(a,b) and φ is an expanding immersion. It then follows from the Bestvina-Feighn Combination Theorem <cit.> that G is word-hyperbolic (see <cit.> for details). Put H_n = t^n F(a,b) t^{-n} for n = 0, 1, 2, ….
Then
H_0 ⊊ H_1 ⊊ H_2 ⊊ …
is an infinite strictly ascending chain of subgroups of G where each H_i is free of rank 2. Thus ACC_2 fails for G.
§ REPRESENTING SUBGROUPS BY LABELED GRAPHS
For the remainder of this paper, unless specified otherwise, let A = {a_1, …, a_m} be a finite alphabet, where m ≥ 2, which will be the
marked set of generators of the group G under consideration. We denote F_m = F(A) = F(a_1, …, a_m), the free group on A.
Following the approach of Stallings <cit.>, we use labeled graphs
to study finitely generated subgroups of quotients of F_m (see also <cit.>).
By a graph we mean a 1-dimensional CW-complex Γ. We refer to the 0-cells of Γ as vertices and to the open 1-cells of Γ as topological edges. An oriented edge of Γ is a topological edge with a choice of an orientation on it. For an oriented edge e we denote by e^{-1} the same edge with the opposite orientation. We denote the set of all vertices of Γ by VΓ and the set of all oriented edges of Γ by EΓ. For an oriented edge e ∈ EΓ the attaching maps define its initial vertex o(e) ∈ VΓ and its terminal vertex t(e) ∈ VΓ. For a vertex v ∈ VΓ the degree deg_Γ(v) of v is the number of all e ∈ EΓ with o(e) = v. An edge-path in Γ is a sequence γ = e_1, …, e_k (where k ≥ 0) of oriented edges of Γ such that t(e_i) = o(e_{i+1}) for 1 ≤ i < k. We put o(γ) = o(e_1), t(γ) = t(e_k), |γ| = k and γ^{-1} = e_k^{-1}, …, e_1^{-1}. For k = 0 we view γ = v ∈ VΓ as an edge-path with o(γ) = t(γ) = v, |γ| = 0 and γ^{-1} = γ = v. We say that an edge-path γ is reduced if it has no subpaths of the form e, e^{-1}, where e ∈ EΓ. An arc in a graph Γ is a simple edge-path of positive length, possibly closed, where every intermediate vertex of the path has degree 2 in
Γ. Thus in a finite graph Γ, every arc is contained in a unique maximal arc, whose end-vertices have degree ≥ 3 or degree 1.
An A-graph Γ consists of an underlying oriented
graph where every (oriented) edge e is labeled by an element
θ(e) ∈ A^{±1} in such a way that θ(e^{-1}) = θ(e)^{-1} for every edge
e of Γ. We allow multiple edges between vertices as well as
edges which are loops.
An A-graph Γ is said to be non-folded if there
exists a vertex x and two distinct edges e_1, e_2 with origin
x such that θ(e_1) = θ(e_2). Otherwise
Γ is said to be folded.
If γ = e_1, …, e_k is an edge-path in an A-graph Γ, we put θ(γ) = θ(e_1)⋯θ(e_k). Thus θ(γ) is a word in A^{±1}. For a path γ with |γ| = 0 we put θ(γ) to be the empty word ε.
For a finite graph Γ we denote by b(Γ) the first Betti number of Γ. Thus if Γ is connected then b(Γ) = rank(π_1(Γ)).
For an edge-path p in Γ we denote by o(p) the initial vertex of p and by t(p) the terminal vertex of p in Γ.
Every edge-path p in an A-graph Γ has a label θ(p), which is a word over A^{±1}. The number of edges in p
will be called the length of p and denoted |p|. For F_m = F(A),
a path p in an A-graph Γ is said to be reduced
if it does not contain subpaths of the form e, e^{-1} and if
p does not contain subpaths of the form e, e, where e is an
edge of Γ.
The definition implies that an A-graph Γ is folded if and only if the label of every reduced edge-path in Γ is a reduced word.
Let G = ⟨A | R⟩ = ⟨a_1, …, a_m | r_1, r_2, …⟩ be a group presentation on the generators a_1, …, a_m, where each r_i is a cyclically reduced word in F(A).
Let Γ be a connected A-graph with a base-vertex x_0. There is a natural labeling homomorphism from π_1(Γ, x_0) to G, sending [γ] to θ(γ) ∈ G for every closed edge-path γ from x_0 to x_0.
We say that the subgroup H ≤ G given by the image of π_1(Γ, x_0) under this homomorphism is represented by (Γ, x_0).
Note that if H is represented by (Γ, x_0), where Γ is a finite connected A-graph, then rank(H) ≤ b(Γ).
In addition to foldings, we need the following transformation of
labeled graphs used by Ol'shanskii and Arzhantseva <cit.>.
Let ฮ be a connected A-graph and let e_1,e_2 be two distinct oriented edges in ฮ with o(e_1)=o(e_2) and ฮธ(e_1)=ฮธ(e_2)=aโ A^ยฑ 1.
A Stallings fold on e_1,e_2 consists in producing a new A-graph ฮ' obtained from ฮ by identifying e_1,e_2 into a single oriented edge e with label ฮธ(e)=a (and identifying t(e_1) and t(e_2) into a single vertex if t(e_1)≠ t(e_2) in ฮ). This fold is called singular if t(e_1)=t(e_2) and non-singular if t(e_1)≠ t(e_2).
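Foldings are algorithmic, and it may help to see one step written out concretely. The following is a minimal illustrative sketch (ours, not taken from the sources cited in this section); the data layout, the function names fold_once and fold, and the toy example are our own choices. An A-graph is stored as a list of tuples (edge_id, origin, label, terminus) with a positive label such as 'a'; the inverse orientation carries the label 'A' implicitly.

def oriented(edges):
    # yield both orientations of every stored edge
    for eid, o, lab, t in edges:
        yield eid, o, lab, t
        yield eid, t, lab.swapcase(), o

def fold_once(edges):
    # perform a single Stallings fold if one is available; return (edges, changed)
    seen = {}
    for eid, o, lab, t in oriented(edges):
        key = (o, lab)
        if key in seen and seen[key][0] != eid:
            other_id, other_t = seen[key]
            keep, drop = other_t, t            # identify t(e_2) with t(e_1)
            rename = lambda v: keep if v == drop else v
            new_edges = [(i, rename(a), l, rename(b))
                         for (i, a, l, b) in edges if i != eid]
            return new_edges, True
        seen[key] = (eid, t)
    return edges, False

def fold(edges):
    # fold repeatedly until the A-graph is folded
    changed = True
    while changed:
        edges, changed = fold_once(edges)
    return edges

# toy example: two loops at vertex 0 spelling ab and ab^{-1}; folding leaves a
# rank-2 graph on two vertices, as expected for the subgroup generated by ab, ab^{-1}
graph = [(1, 0, 'a', 1), (2, 1, 'b', 0), (3, 0, 'a', 2), (4, 0, 'b', 2)]
folded_graph = fold(graph)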
Let p=p_1 p' p_2 be a reduced edge-path in a finite connected
A-graph ฮ such that p' is an arc of ฮ and the
paths p_1, p_2 do not overlap p'. Let p have initial vertex
x, terminal vertex y, and label ฮธ(p)=v. Let z be a
reduced word such that v =_G z in G.
We now modify ฮ by adding a new arc q from x to y with
label z and removing all the edges and interior vertices of p' from ฮ.
We will say that the resulting A-graph ฮ' is obtained from
ฮ by an AO-move on the arc p'.
Note that we allow the case where z=1 is the trivial word, that is, where v=_G 1. In this case the AO move removes the interior of p' from ฮ and, if x≠ y in ฮ, identifies x and y into a single vertex. If we already had x=y in ฮ then in this case the AO move just removes the interior of p' from ฮ; we refer to the AO moves of this latter type (where z=1 and x=y in ฮ) as singular, and otherwise call an AO-move non-singular.
We record here some important properties of moves on A-graphs that follow directly from the definitions (see alsoย <cit.>).
Let ฮ be a finite connected A-graph with a base-vertex x_0 such that (ฮ,x_0) represents a subgroup Hโค G.
* Let ฮ' be obtained from ฮ by a fold and let x_0' be the image of x_0 in ฮ'. Then (ฮ',x_0') also represents Hโค G, and b(ฮ')โค b(ฮ). Moreover, b(ฮ')=b(ฮ) if the fold is non-singular, and b(ฮ')=b(ฮ)-1 if the fold is singular.
* Let ฮ' be obtained from ฮ by an AO-move on an arc p' such that x_0 is not an interior vertex of p'. [If the AO-move was on an arc p' with label v=_G 1 with endpoints x≠ y and if x_0โ{x,y}, we still denote the image of x_0 in ฮ' by x_0.]
Then (ฮ',x_0) also represents Hโค G, and b(ฮ')โค b(ฮ). Moreover, b(ฮ')= b(ฮ) if the AO-move is nonsingular, and b(ฮ')= b(ฮ)-1 if the AO-move is singular.
ยง EQUALITY DIAGRAMS IN SMALL CANCELLATION GROUPS
Let G be a group with a finite generating set A. As usual, for a word w over A^ยฑ 1 we denote again by w the element of G that it represents. We denote by d_A the word metric on G corresponding to A.
Recall that for Cโฅ 1, a word w over A^ยฑ 1 is called a (C,0)-quasigeodesic in G if for every subword v of w we have |v|โค C d_A(1, v). We refer the reader to <cit.> for background information on small cancellation groups.
Let
G=โจ a_1,โฆ, a_m | Rโฉ
be a C'(ฮป)-presentation, where 0<ฮปโค 1/6.
A word w over A^ยฑ 1 is called ฮป-reduced if w is freely reduced and if whenever w contains a subword v such that v is also a subword of some cyclic permutation of r or r^-1 for some rโ R then |v|โค (1-3ฮป)|r|.
Note that if ฮป=1/6 then being ฮป-reduced is the same as being Dehn-reduced. In general, for 0<ฮปโค 1/6 being Dehn-reduced implies being ฮป-reduced.
Greendlinger's lemma (<cit.>) implies:
Let
G=โจ a_1,โฆ, a_m | Rโฉ
be a C'(ฮป)-presentation, where 0<ฮปโค 1/6.
Let w be a nontrivial freely reduced word over A^ยฑ 1 such that w=_G 1. Then w is not ฮป-reduced.
Thus for a
C'(ฮป)-presentation with ฮปโค 1/6, a nontrivial ฮป-reduced word in F(a_1,โฆ, a_m) represents a
nontrivial element of G.
The following proposition follows from the basic results of small cancellation
theory, established in Ch. V, Sections 3-5 of <cit.>. The proof of this proposition is essentially identical to the proof of Lemmaย 2.11 in <cit.>
(and is similar to the proof of Propositionย 39 in <cit.>). For these reasons we omit the details.
[Equality diagrams in
C'(ฮป)-groups]
Let
G=โจ a_1,โฆ, a_m |
Rโฉโ
be a C'(ฮป)-presentation, where
ฮปโค 1/6.
Let w_1,w_2โ F(a_1,โฆ, a_m) be freely reduced and
ฮป-reduced words such that w_1=_G w_2.
Then any reduced van Kampen diagram D over (โ), realizing the equality w_1=_G
w_2, has the form as shown in Figureย <ref>. Specifically, any
region Q of D labeled by rโ R intersects both the upper
boundary of D (labeled by w_1) and the lower boundary of D
(labeled by w_2) in simple segments ฮฑ_1, ฮฑ_2
respectively, satisfying
ฮป|r|โค |ฮฑ_j|โค
(1-3ฮป)|r|.
Moreover, if two regions Q,Q' in D,
labeled by r,r'โ R, have a common edge, then they intersect in
a closed simple segment ฮณ joining a point of the upper boundary
of D with a point of the lower boundary of D and labeled by a
piece with respect to R. In particular |ฮณ|<ฮป |r| and
|ฮณ|< ฮป|r'|.
Propositionย <ref> immediately implies:
Let
G=โจ a_1,โฆ, a_m |
Rโฉ
be a C'(ฮป)-presentation, where
0<ฮปโค 1/6. Put C=(1-3ฮป)/ฮป. [Note that 0<ฮปโค 1/6 implies Cโฅ 1.]
Then every ฮป-reduced word wโ F(A) is a (C,0)-quasigeodesic in the Cayley graph of G with respect to A.
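For instance (our numerical illustration, not from the source): ฮป=1/6 gives C=(1-1/2)/(1/6)=3, while ฮป=1/10 gives C=(1-3/10)/(1/10)=7; smaller values of ฮป thus yield a larger quasigeodesicity constant.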
ยง GENERICITY CONDITIONS
We will need some genericity conditions introduced by Arzhantseva and Ol'shanskii <cit.> and further explored in <cit.>.
Recall that mโฅ 2, A={a_1,โฆ, a_m} and F_m=F(A) are fixed.
The following genericity condition was defined by Arzhantseva in <cit.> and generalizes an earlier version of this condition defined by Arzhantseva and Ol'shanskii in <cit.>.
<cit.>
Let 0< ฮผ< 1 be a real number and let kโฅ 2 be an integer. A
nontrivial freely reduced word w in F(A) is called
(ฮผ,k)-readable if there exists a finite connected folded
A-graph ฮ such that:
* The number of edges in ฮ is at most ฮผ |w|.
* We have b(ฮ)โค k.
* There is a path in ฮ with label w.
* The graph ฮ has at least one vertex of degree < 2m.
Note that if kโค m-1 then any finite folded A-graph ฮ with b(ฮ)โค k has a vertex of degree <2m, so that the last condition (the existence of a vertex of degree < 2m) is redundant in this case.
Let 0< ฮผ<1, 0<ฮปโค 1/6 be real numbers, and let tโฅ 1 and kโฅ m be integers.
We will say that a tuple of nontrivial cyclically reduced words (r_1,โฆ, r_t) in F(A) satisfies the (ฮป, ฮผ,
k)-condition if:
* The words r_i are not proper powers in F(A).
* The symmetrized closure of {r_1,โฆ, r_t} satisfies the C'(ฮป) small
cancellation condition.
* If w is a subword of a cyclic permutation of some r_i^ยฑ 1 and
|w|โฅ |r_i|/2 then w is not (ฮผ,k)-readable.
For a fixed integer tโฅ 1 we denote by Q_m,t(ฮป,ฮผ,k) the set of all finite presentations โจ a_1,โฆ, a_m| r_1,โฆ, r_tโฉ satisfying the (ฮป, ฮผ, k)-condition.
We also denote Q_m(ฮป,ฮผ,k)=โช_t=1^โ Q_m,t(ฮป,ฮผ,k).
The results of Arzhantseva-Ol'shanskiiย <cit.> and Arzhantsevaย <cit.> imply:
Let mโฅ 2, tโฅ 1, kโฅ m be integers. Let 0<ฮป, ฮผ<1 be such that
ฮป<ฮผ/(15k+3) and ฮป<1/6.
Then Q_m,t(ฮป,ฮผ,k) is exponentially generic in the set of all presentations โจ a_1,โฆ, a_m| r_1,โฆ, r_tโฉ where r_i are cyclically reduced words in F(A).
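As a concrete instance (our illustration, using the reading of the constraints given above): for m=k=2 and ฮผ=1/2 we have ฮผ/(15k+3)=1/66 and ฮผ/(15k+5)=1/70, and the additional requirement ฮป/(1-3ฮป)<1/70 imposed in the next section amounts to ฮป<1/73; so, for example, ฮป=1/100 satisfies all of these constraints, and Q_2,t(1/100,1/2,2) is exponentially generic for every tโฅ 1.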
ยง PROPERTIES OF SUBGROUPS OF GENERIC GROUPS
Let mโฅ 2, tโฅ 1, kโฅ m be integers and let 0<ฮป, ฮผ<1 be real numbers satisfying
ฮป<ฮผ/(15k+3), ฮป<1/6, and
0<ฮป/(1-3ฮป)< ฮผ/(15k+5).
Let
G=โจ a_1,โฆ, a_m| r_1,โฆ, r_tโฉ (โ )
be a presentation satisfying the Q_m,t(ฮป,ฮผ,k) condition.
Let Hโค G be a k-generated subgroup with [G:H]=โ. Let (ฮ,x_0) be an A-graph with b(ฮ)โค k representing Hโค G with the smallest number of edges among all such graphs.
Then ฮ is folded and the label of every reduced edge-path in ฮ is ฮป-reduced with respect to presentation (โ ).
We may assume that H≠ 1 since otherwise the result holds vacuously. By the minimal choice of ฮ, the A-graph ฮ is folded and has no degree-1 vertices except possibly x_0. Since Hโค G has infinite index, there exists a vertex of degree <2m in ฮ.
Suppose that the conclusion of the proposition fails for ฮ. Then there exists a reduced edge-path p in ฮ with label v such that v is a subword of a cyclic permutation r of some defining relator r_j^ยฑ 1 with |v|>(1-3ฮป)|r|. The maximal subarcs of ฮ break p into a concatenation p=p_1โฆ p_s, where p_2,โฆ, p_s-1 are maximal arcs in ฮ and where p_1,p_s are contained in maximal arcs.
Recall that by definition every endpoint of a maximal arc in ฮ either has degree โฅ 3 or is the base-vertex x_0. Thus x_0 is not an interior vertex of any of p_1,โฆ, p_s.
There are two cases to consider.
Case 1. Suppose there exists some p_i with |p_i|>5ฮป|r|.
We claim that there exists a subpath p_i' of p_i with |p_i'|>3ฮป|r| such that p_i' does not overlap the rest of the path p.
Assume first that 1<i<s. Since |p_i|>5ฮป|r|, since the word r is not a proper power, and since (โ ) satisfies the C'(ฮป) small cancellation condition, we have p_i≠ p_j^ยฑ 1 for 1<j<s, j≠ i, and hence p_i does not overlap any such p_j. Similarly, the overlaps of p_i with p_1 and p_s have length < ฮป|r| each. Therefore p_i has a subpath of length >3ฮป|r| that does not overlap with the rest of p, as claimed.
Assume now that i=1 (the case i=s is similar). Thus |p_1|>5ฮป|r|. Then p_1 does not overlap any p_j with 1<j<s since otherwise we are in the previous situation. The overlap of p_1 and p_s has length <ฮป|r| since (โ ) satisfies C'(ฮป) and since r is not a proper power.
Thus the claim is verified.
Let ฮ' be obtained from ฮ by performing the AO-move on the arc p_i' corresponding to the relator r. Since |p_i'|>3ฮป|r|, the graph ฮ' has fewer edges than ฮ. Moreover, since x_0 was not an interior vertex of p_i', we have x_0โ Vฮ' and by Propositionย <ref> the graph ฮ' represents Hโค G. Moreover, b(ฮ')โค b(ฮ)โค k. This contradicts the minimal choice of ฮ.
Thus Caseย 1 is impossible.
Case 2. Suppose that |p_i|โค 5ฮป|r| for i=1,โฆ, s.
Let ฮโ be the subgraph of ฮ spanned by p=p_1โฆ p_s. Since b(ฮ)โค k and ฮ has at most one vertex of degree 1 (namely x_0), there are at most 3k+1 maximal arcs in ฮ. Hence the number of edges of ฮโ is at most
(3k+1)5ฮป|r|โคฮผ (1-3ฮป)|r|โคฮผ|v|,
where the inequality (3k+1)5ฮปโคฮผ(1-3ฮป) holds by our choice of ฮป, ฮผ, k.
Since ฮ has a vertex of degree <2m and all vertices in ฮ have degree โค 2m, it follows that ฮโ also has a vertex of degree <2m: if ฮโ=ฮ this is immediate, and if ฮโ is a proper subgraph of the connected graph ฮ then some vertex of ฮโ is incident to an edge of ฮ not contained in ฮโ and therefore has degree <2m in ฮโ. Finally, since ฮโ is a connected subgraph of ฮ and b(ฮ)โค k, it follows that b(ฮโ)โค k. Hence the word v is (ฮผ,k)-readable. Since v is a subword of r with |v|โฅ |r|/2, we get a contradiction with the assumption that (โ ) satisfies the Q_m,t(ฮป,ฮผ,k)-condition.
Therefore Caseย 2 is also impossible.
Hence the label of every reduced path in ฮ is ฮป-reduced, as required.
Propositionย <ref> and Corollaryย <ref> imply:
Let m,t,k,ฮป, ฮผ, G, H be as in Propositionย <ref>. Then the following hold:
* The labeling homomorphism ฯ: ฯ_1(ฮ, x_0)โ G is injective. In particular, the group H=ฯ(ฯ_1(ฮ, x_0) ) is free.
* The label of every reduced edge-path in ฮ is a (C,0)-quasigeodesic in G with C=(1-3ฮป)/ฮป.
* The subgroup Hโค G is quasiconvex in G.
ยง PROOFS OF THE MAIN RESULTS
Let kโฅ mโฅ 2, tโฅ 1 be integers and let 0<ฮป,ฮผ<1 be real numbers such that
ฮป<ฮผ/(15k+3), ฮป<1/6, and
0<ฮป/(1-3ฮป)< ฮผ/(15k+5).
Let
G=โจ a_1,โฆ, a_m| r_1,โฆ, r_tโฉโ
be a presentation satisfying the Q_m,t(ฮป,ฮผ,k) condition.
Then ACC_k holds for G.
Let G=โจ a_1,โฆ, a_m| r_1,โฆ, r_tโฉ be a group presentation satisfying the (ฮป, ฮผ, k) condition.
Put C=(1-3ฮป)/ฮป.
We claim that G satisfies ACC_k. Indeed, suppose not.
Then there exists an infinite strictly ascending sequence
H_1โช H_2โช H_3 โชโฆ
of nontrivial k-generated subgroups of G. Thus every H_i has infinite index in G since otherwise a sequence as above cannot be infinite.
For every i=1, 2, 3,โฆ let (ฮ_i, โ_i) be a finite connected folded A-graph with b(ฮ_i)โค k representing H_i such that (ฮ_i, โ_i) has the smallest number of (topological) edges among all such A-graphs. Thus ฮ_i is folded and has no degree-1 vertices except possibly โ_i. Moreover, by Propositionย <ref> and Corollaryย <ref>, the label of every reduced edge-path in ฮ_i is ฮป-reduced and (C,0)-quasigeodesic in G.
Recall that according to our convention for A-graphs, oriented edges in ฮ_i labelled by elements of A are considered positive, and oriented edges in ฮ_i labelled by elements of A^-1 are considered negative. Recall also that vol(ฮ_i)=# E_+ฮ_i, the number of topological edges of ฮ_i. Since the above chain is an infinite sequence of distinct k-generated subgroups of G, there are infinitely many distinct basepointed A-graphs in the sequence (ฮ_i, โ_i)_i=1^โ. After passing to a subsequence, we will assume that vol(ฮ_i)<vol(ฮ_i+1) for all iโฅ 1.
For the graph (ฮ_1, โ_1) choose a maximal tree Tโฮ_1. Each positive edge in ฮ_1-T defines an element of ฯ_1(ฮ_1,โ_1) and the set of all such elements gives a free basis B_T of ฯ_1(ฮ_1,โ_1). Denote by S_T the set of labels of the paths in B_T. Thus S_T is a free basis of H_1. By construction, #B_T=#S_T=b(ฮ_1)โค k and for every wโ S_T we have |w|โค 2 vol(ฮ_1).
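For a finite connected A-graph this basis can be written down mechanically. The sketch below (ours, purely illustrative; the function names and the BFS-based choice of maximal tree are our own, and edges are given as (origin, label, terminus) triples with positive labels) returns, for each positive edge outside a spanning tree, the word read along the tree path from the base vertex, then the edge label, then the inverse of the tree path to the other endpoint. For a connected graph it returns b(ฮ) words, consistent with the bound |w|โค 2 vol(ฮ_1) used above.

from collections import deque

def tree_paths(edges, base):
    # BFS spanning tree: parent[v] = (u, label) means u --label--> v is a tree edge
    adj = {}
    for o, lab, t in edges:
        adj.setdefault(o, []).append((lab, t))
        adj.setdefault(t, []).append((lab.swapcase(), o))
    parent, queue = {base: None}, deque([base])
    while queue:
        u = queue.popleft()
        for lab, v in adj.get(u, []):
            if v not in parent:
                parent[v] = (u, lab)
                queue.append(v)
    return parent

def word_to(parent, v):
    # label of the tree path from the base vertex to v
    word = []
    while parent[v] is not None:
        u, lab = parent[v]
        word.append(lab)
        v = u
    return ''.join(reversed(word))

def inverse_word(w):
    # formal inverse of a word, with 'a' <-> 'A' encoding a <-> a^{-1}
    return w[::-1].swapcase()

def free_basis(edges, base):
    # one basis word per positive edge outside the spanning tree
    parent = tree_paths(edges, base)
    tree = set()
    for v, p in parent.items():
        if p is not None:
            u, lab = p
            tree.add((u, lab, v))
    basis = []
    for o, lab, t in edges:
        if (o, lab, t) not in tree and (t, lab.swapcase(), o) not in tree:
            basis.append(word_to(parent, o) + lab + inverse_word(word_to(parent, t)))
    return basis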
For every i=2, 3, 4, โฆ and for every wโ S_T choose a freely reduced word ลต(i)โ F(A) such that ลต(i)=_G w and such that ลต(i) labels a closed reduced path p(w,i) from โ_i to โ_i in ฮ_i. Such a word exists since H_1โค H_i. Since ลต(i) is a (C,0)-quasigeodesic in G, we have |ลต(i)|=|p(w,i)|โค C|w|โค 2C vol(ฮ_1). For all iโฅ 2, let ฮ_i,1 be the subgraph of ฮ_i spanned by the union of p(w,i) over all wโ S_T. Thus ฮ_i,1 is a finite connected graph containing โ_i with vol(ฮ_i,1)โค 2kC vol(ฮ_1). Thus there are only finitely many possibilities for the base-pointed A-graph (ฮ_i,1,โ_i). After passing to a further subsequence, we will assume that there is a finite connected base-pointed A-graph (ฮ_1,x_1) such that for all iโฅ 2 we have (ฮ_i,1,โ_i)=(ฮ_1,x_1). Note that, since H_1≠ 1, the graph ฮ_1 is non-contractible, so that b(ฮ_1)โฅ 1.
Since all the subgroups H_1, H_2,โฆ, are distinct, for infinitely many iโฅ 2 we have ฮ_i,1โฮ_i. After passing to a further subsequence, we may assume that ฮ_i,1โฮ_i for all iโฅ 2.
Since ฮ_2,1โฮ_2, we can choose a finite connected subgraph ฮ_2'โฮ_2 such that ฮ_2,1โฮ_2', that b(ฮ_2')=b(ฮ_2,1)+1=b(ฮ_2)+1 and that ฮ_2' has no degree-1 vertices except possibly for โ_2. Thus ฮ_2' is spanned by the union of ฮ_2,1 and a single closed reduced path ฮณ_2 at โ_2, not contained in ฮ_2,1. We have ฯ_1(ฮ_2',โ_2)=ฯ_1(ฮ_2,1,โ_2)โโจฮณ_2โฉ. Let u be the label of ฮณ_2.
Now for each i=3, 4, 5, โฆ choose a reduced word u_i labeling a closed path p(ฮณ_2,i) in ฮ_i from โ_i to โ_i such that u_i=_G u. Then |p(ฮณ_2,i)|=|u_i|โค C |u|=C|ฮณ_2|. For i=3, 4, 5, โฆ let ฮ_i,2 be the subgraph of ฮ_i given by the union of ฮ_i,1 and the path p(ฮณ_2,i). Since all the labeling homomorphisms for the graphs under consideration are injective and ฮณ_2 does not lie in ฯ_1(ฮ_2,1)=ฯ_1(ฮ_1), and since ฮ_i,1=ฮ_1 for all iโฅ 3, the path p(ฮณ_2,i) is not contained in ฮ_i,1 and thus ฮ_i,1โฮ_i,2 for all iโฅ 3. Therefore b(ฮ_i,2)โฅ b(ฮ_1)+1 for all iโฅ 3. Moreover, by construction vol(ฮ_i,2)โค vol(ฮ_1)+C|ฮณ_2|. Thus there are only finitely many choices for the base-pointed A-graph (ฮ_i,2,โ_i), where iโฅ 3. After passing to a further subsequence, we may assume that there is a finite connected base-pointed A-graph (ฮ_2,โ) such that for all iโฅ 3 we have (ฮ_i,2,โ_2)=(ฮ_2,โ).
Iterating this process, after passing to a subsequence of the original sequence, for j=1, 2, 3, โฆ we construct an infinite sequence of finite connected base-pointed A-graphs (ฮ_j, x_j) and subgraphs ฮ_i,jโฮ_i, where iโฅ j+1, with the following properties:
* Each ฮ_i,j contains โ_i and satisfies (ฮ_j, x_j)=(ฮ_i,j ,โ_i) where jโฅ 1 and iโฅ j+1.
* We have b(ฮ_j+1)โฅ b(ฮ_j)+1 for j=1,2,3,โฆ.
* We have (ฮ_j, x_j)โ (ฮ_j+1, x_j+1) for all jโฅ 1.
* We have b(ฮ_1)โฅ 1.
Then b(ฮ_k+1)โฅ k+1. Since (ฮ_k+2,k+1,โ_k+2)=(ฮ_k+1, x_k+1) is a connected subgraph of ฮ_k+2, it follows that b(ฮ_k+2)โฅ k+1, which contradicts b(ฮ_k+2)โค k.
We may assume that kโฅ m since ACC_k implies ACC_s for every integer 1โค s โค k.
Choose 0<ฮป<ฮผ<1/6 so that ฮป,ฮผ satisfy the assumptions of Theoremย <ref>. Put Q_m,t,k=Q_m,t(ฮป,ฮผ,k). We claim that Q_m,t,k satisfies the conclusions of Theoremย <ref>.
First, by Propositionย <ref> the set Q_m,t(ฮป,ฮผ,k) is exponentially generic in the set of all m-generator t-relator presentations โจ a_1,โฆ, a_m| r_1,โฆ, r_tโฉ with r_i being cyclically reduced in F(A). Theoremย <ref> shows that ACC_k holds for every group G given by a Q_m,t(ฮป,ฮผ,k) presentation. Therefore Q_m,t,k=Q_m,t(ฮป,ฮผ,k) satisfies the conclusions of Theoremย <ref>, as required.
As before, it is enough to prove Theoremย <ref> for all kโฅ m, and thus we assume that kโฅ m.
Choose 0<ฮป,ฮผ<1/6 as in Theoremย <ref>.
It is knownย <cit.> that for any 0<ฮปโค 1/6 the C'(ฮป) small cancellation condition is satisfied with overwhelming probability for d-random groups for sufficiently small density d>0. Also, by Propositionย <ref>, for any kโฅ mโฅ 2 condition Q_m,1(ฮป,ฮผ,k) for cyclically reduced words in F_m=F(a_1,โฆ, a_m) is exponentially generic, for ฮผ,ฮป as in Propositionย <ref>. Therefore, by Corollaryย 4.3 of <cit.>, for any kโฅ mโฅ 2 and ฮป, ฮผ as in Propositionย <ref>, there exists 0<d_0=d_0(m,k)<1 such that for any 0<d<d_0 condition Q_m(ฮป,ฮผ,k) holds with overwhelming probability for random finite presentations โจ a_1,โฆ, a_m|Rโฉ at density d. By Theoremย <ref>, ACC_k holds for every group given by a Q_m(ฮป,ฮผ,k) presentation. The conclusion of Theoremย <ref> now follows.
A1
G. Arzhantseva, On groups in which subgroups with a fixed number of generators are free, (Russian) Fundam. Prikl. Mat. 3 (1997), no. 3, 675โ683
A2
G. Arzhantseva, Generic properties of finitely presented groups and Howson's theorem. Comm. Algebra 26 (1998), no. 11, 3783โ3792
AO96
G. Arzhantseva, A. Ol'shanskii, Generality of the class of groups in which subgroups with a lesser number of generators are free, Mat. Zametki 59, no. 4 (1996), 489โ496; translation in: Math. Notes 59 (1996), no. 3-4, 350โ355
BF92
M. Bestvina and M. Feighn, The combination theorem for negatively curved groups, J. Diff. Geom. 35 (1992), 85โ101
BL21
E. A. Bering IV and N. Lazarovich, Ascending chains of free groups in 3-manifold groups, preprint, 2021, arXiv:2111.11777
H51
G. Higman, A finitely related group with an isomorphic proper factor group,
J. London Math. Soc. 26 (1951), 59โ61
K00
I. Kapovich, Mapping tori of endomorphisms of free groups. Comm. Algebra 28 (2000), no. 6, 2895โ2917
KM02
I. Kapovich, and A. Myasnikov, Stallings foldings and subgroups of free groups. J. Algebra 248 (2002), no. 2, 608โ668
KS04
I. Kapovich, and P. Schupp, Relative hyperbolicity and Artin groups. Geom. Dedicata 107 (2004), 153โ167
KS05
I. Kapovich, and P. Schupp, Genericity, the Arzhantseva-Ol'shanskii method and the isomorphism problem for one-relator groups. Math. Ann. 331 (2005), no. 1, 1โ19
KS08
I. Kapovich, and P. Schupp, On group-theoretic models of randomness and genericity. Groups Geom. Dyn. 2 (2008), no. 3, 383โ404
LS R.ย Lyndon and P.ย Schupp, Combinatorial Group
Theory, Springer-Verlag, 1977. Reprinted in the Classics in
Mathematics series, 2000.
Oll05
Y. Ollivier,
A January 2005 invitation to random groups.
Ensaios Matemáticos [Mathematical Surveys], 10. Sociedade Brasileira de Matemática, Rio de Janeiro, 2005. ISBN: 85-85818-30-1
Oll07
Y. Ollivier, Some small cancellation properties of random groups.
Internat. J. Algebra Comput. 17 (2007), no. 1, 37โ51
Shu
M. Shusterman, Ascending chains of finitely generated subgroups, J. Algebra 471 (2017), 240โ250
Sta J.ย R.ย Stallings, Topology of finite graphs,
Invent. Math. 71 (1983), no. 3, 551โ565
Stre R. Strebel, Small cancellation groups, in:
"Sur les groupes hyperboliques d'après Mikhael Gromov." Papers from the
Swiss Seminar on Hyperbolic Groups held in Bern, 1988. Edited by E. Ghys and P. de la Harpe. Appendix, 227โ273. Progress in
Mathematics, 83. Birkhauser, Boston, 1990.
T51
M. Takahasi, Note on chain conditions in free groups, Osaka Math. J. 3 (1951), 221โ225
|
http://arxiv.org/abs/2306.04420v1
|
20230607132533
|
The HARPS search for southern extra-solar planets. XLVII. Five Jupiter-mass planets in long-period orbits, one highly irradiated Neptune, one brown dwarf, and five stellar binaries
|
[
"Y. G. C. Frensch",
"G. Lo Curto",
"F. Bouchy",
"M. Mayor",
"G. Hรฉbrard",
"C. Lovis",
"C. Moutou",
"F. A. Pepe",
"D. Queloz",
"N. Santos",
"D. Segransan",
"S. Udry",
"N. Unger"
] |
astro-ph.EP
|
[
"astro-ph.EP",
"astro-ph.SR"
] |
XLVII. Five Jupiter-mass planets in long-period orbits, one highly irradiated Neptune, one brown dwarf, and five stellar binaries.
Based on observations made with the HARPS instrument on the ESO 3.6-m telescope at La Silla Observatory (Chile), under GTO programme ID 072.C-0488, and its continuation programmes ID 085.C-0019, 087.C-0831, 089.C-0732, 090.C-0421, 091.C-0034, 092.C-0721, 093.C-0409, 095.C-0551, 096.C-0460, 098.C-0366, 099.C-0458, 0100.C-0097, 0101.C-0379, 0102.C-0558, 0103.C-0432 106.21R4.001, 108.222V.001, 183.C-0972, 192.C-0852 and 196.C-1006.
European Southern Observatory, Karl-Schwarzschild-Strasse 3, 85748 Garching,
Germany
email: <[email protected]>
Observatoire de Genève, 51 Ch. des Maillettes, 1290 Sauverny, Switzerland
Institut d'astrophysique de Paris, UMR7095 CNRS, Université Pierre & Marie Curie, 98bis boulevard Arago, 75014 Paris, France
Observatoire de Haute-Provence, CNRS, Université d'Aix-Marseille, 04870 Saint-Michel-l'Observatoire, France
Université de Toulouse, UPS-OMP/CNRS, IRAP,
14 avenue E. Belin, Toulouse, F-31400, France
ETH Zurich, Department of Physics, Wolfgang-Pauli-Strasse 2, CH-8093 Zurich, Switzerland
Instituto de Astrofisica e Ciencias do Espaco, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal
Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal
The long-term ongoing HARPS radial velocity survey of extra-solar planets initiated in 2003 provides a unique data set with a 19 year baseline that allows the detection of long-period exoplanets, brown dwarfs, and low-mass binaries.
Our aim is to detect and characterise long-period companions around main sequence stars (spectral types late F to early M). Only 6% of the planets discovered so far have periods longer than 3 years; we are probing this still largely unknown population.
We use the radial velocity method to search for exoplanets around stars. The radial velocity variations are measured with HARPS at the ESO 3.6 metre telescope. Difficulties in characterising long-period exoplanets arise from the entanglement of the radial velocity with the stellar magnetic cycle. We thoroughly examined the stellar activity indicators to rule out magnetic cycles as the source of the observed variation. The true
mass and inclination of our heavier companions are provided by astrometry, for which we use proper motions from Hipparcos and Gaia.
Five Jupiter-mass exoplanets are reported to orbit HIP54597, BD-210397 (ร 2), HD74698, and HD94771 with 8.9 yr, 5.2 yr, 17.4 yr, 9.4 yr, and 5.9 yr orbits, and to have minimum masses of 2.01ยฑ 0.03,
0.7 ยฑ 0.1, 2.4^+1.5_-0.2, 0.40 ยฑ 0.06, and 0.53 ยฑ 0.03 M_J respectively. HD74698 also hosts a highly irradiated Neptune in a 15 day orbit with a minimum mass of 0.07ยฑ 0.01 M_J. The mass and inclination of the exoplanets cannot yet be well constrained by astrometric measurements. Only HIP54597 b, HD74698 c, and BD-210397 c have weak constraints. The mass of HIP54597 b can maximally increase by 10%-30%, the minimum mass of HD74698 c is likely equal to its true mass, and BD-210397 c has a mass of 2.66_-0.32^+0.63 M_J.
HD62364 hosts a brown dwarf with a true mass of 18.77_-0.63^+0.66 M_J in an orbit of 14 yr. The mass of HD62364 b is around the limit of the masses of brown dwarfs, but its orbit is highly eccentric (e = 0.607 ยฑ 0.005), which is more common among brown dwarfs than exoplanets.
HD56380B, HD221638B, and HD33473C have minimum masses within the brown dwarf limits, in orbits of 8.9 yr, 16.6 yr, and 50 yr respectively; however, astrometric measurements reveal them to be stellar binaries, with masses of 375.3_-8.4^+8.6, 110.0_-3.7^+3.9, and 271.0_-3.8^+3.9 M_J. The orbits of the stellar binaries HD11938 and HD61383 are incomplete.
The preliminary result for HD61383 is a 0.190 M_โ binary in a 39 yr orbit. The secondary of the binary system HD11938 has a mass of 0.33 M_โ โwhich is confirmed by a secondary peak in the cross-correlation function (CCF)โ and a preliminary period of 35 yr. The origin of the 3.0 yr radial velocity (RV) signal of HD3964 is uncertain as it shows entanglement with the magnetic cycle of the star. We finally report one more star, HD11608, with a magnetic cycle that mimics a planetary signal.
We present the discovery of six exoplanets, one uncertain exoplanet candidate, one brown dwarf, and five stellar binaries around main sequence stars. We also improve the orbital solution of the stellar binary HD33473C thanks to long-term monitoring.
Y.G.C. Frensch1,2, G. Lo Curto1, F. Bouchy2, M. Mayor2, G. Hébrard3,4, C. Lovis2,
C. Moutou 5, F. A. Pepe2, D. Queloz 6, N. Santos 7,8, D. Segransan2, S. Udry 2
N. Unger2
Received 21 February 2023; accepted 15 May 2023
ยง INTRODUCTION
Of the currently more than 5300 known exoplanets, only 6% have periods of longer than 3 years, and less than 3% have periods longer than 10 years[See The Extrasolar Planets Encyclopaedia, <http://exoplanet.eu>]. The radial velocity (RV) method, one of the leading methods in providing long-period exoplanet detections, was responsible for most of them. Long-term RV surveys using precise spectrographs made this accomplishment possible, including but not limited to: the over 12 year historical ELODIE program initiated in 1994 <cit.>, the SOPHIE large program started in 2006 <cit.>, the Anglo-Australian Planet Search launched in 1998 <cit.>, and the HARPS[High Accuracy Radial velocity Planet Searcher.] volume-limited survey, from which the observations in this paper originate. From these different RV surveys, Jupiter analogues and long-period companions were discovered and their properties scrutinised (e.g. , , , , , , , , , ). The HARPS volume-limited survey (up to 57.5pc) started as part of the HARPS Guaranteed Time Observations (GTO) in 2003 <cit.> and is targeting low-activity, solar-type dwarf stars with spectral types from late F to early M <cit.>. The baseline of the survey, which is over 19 years, allows the detection of long-period signals, including gas giants beyond the ice line. The detection of lighter planets and other long-period companions such as brown dwarfs is also possible, although these targets are not the direct aim of the program.
The detection of long-period planets, and in particular of `cold-Jupiters' (M >0.3 M_J, a>1 AU), strongly correlates with the presence of inner super-Earths; more specifically, <cit.> quote a conditional probability of super-Earths of 90%ยฑ 20%. <cit.> are probing a different class of outer planets with semi-major axis a in the range 0.3 to 3 AU and in their recent publication present a value of 32^+24_-16%; as they are looking at different samples, this is effectively not in contrast.
In any case, the host stars to our detected long-period planets are potential sources of yet-undetected super-Earths. Our survey is not designed to detect these super-Earths, and our precision and sampling are not optimal. Outer giant planets might stabilise planetary systems, as for example shown by the Nice model <cit.> in which the gas giants are believed to shield the inner planets from collisions; Jupiter and Saturn for instance could be responsible for the existence of Earth and the other inner planets in their current orbits.
Hot giant planets tend to orbit metal-rich host stars, a correlation known as the giant planetโmetallicity correlation <cit.>. As the metallicity of the host star is representative of the metallicity content in the protoplanetary disc, this result is an important piece of observational evidence in support of the core-accretion scenario <cit.>. In this formation scenario, a large metal core, which is expected to be more easily formed in a metal-rich disc (i.e. more planetesimals), efficiently accretes gas to form a gas giant. While hot Jupiters are more frequent around metal-rich stars, for longer-period giant planets this is not necessarily the case. <cit.> showed that planets (from โผ 0.03 M_J to โผ 4 M_J) formed in metal-poor systems generally have longer periods than those formed in metal-rich systems. These authors suggest a metal-poor disc may form the giant planets further out and/or formation may start later, which would mean the planets undergo less migration. Short- and long-period giant planets may have the same formation but different migration histories <cit.>. More data are needed to further constrain the formation and evolution of long-period giant planets. With this research, we are probing this still largely unknown population of exoplanets.
The RV method observes the stellar wobble induced by the gravitational pull of the companion orbiting its host star. Stellar activity can mimic this wobble and create false-positive detections. As our program focuses on finding Jupiter-mass long-period companions, it does not require high precision and is not hampered by the short-period (โผminutes) p-mode oscillations.
For long-period planets, the greatest difficulties stem from the entanglement of the RV with the stellar magnetic cycle and the RV-induced signal from binary stars (or multiple-star systems). Although the magnetic cycle can inconveniently dominate the RV variation, the signal induced by the magnetic cycle correlates well with the stellar activity indices. The correlation can therefore be used to determine whether or not there is entanglement with the magnetic cycle and then be fitted and removed if the long-term activity index evolves smoothly <cit.>.
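As a schematic illustration of this detrending step (our own sketch, not the pipeline actually used in this paper; the array names and the 180 day smoothing timescale are invented placeholders), one can smooth the activity index on the uneven time grid, fit the RVs linearly against it, and subtract the fitted trend before recomputing the periodogram:

import numpy as np

def detrend_rv_against_activity(time, rv, activity, smooth_days=180.0):
    # Gaussian low-pass filter of the activity index on the (uneven) time grid
    smoothed = np.empty_like(activity, dtype=float)
    for i, t0 in enumerate(time):
        w = np.exp(-0.5 * ((time - t0) / smooth_days) ** 2)
        smoothed[i] = np.sum(w * activity) / np.sum(w)
    # linear fit RV = a * smoothed_index + b, then subtract the fitted trend
    a, b = np.polyfit(smoothed, rv, 1)
    residual_rv = rv - (a * smoothed + b)
    return residual_rv, np.corrcoef(smoothed, rv)[0, 1]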
In search of gas giants, we discuss 12 stars in this paper. Each signal is thoroughly examined to ensure the variation is not caused by stellar activity. Section <ref> contains information on the observations. In Section <ref> we present the stellar parameters of the host stars. In Sect. <ref> we discuss one magnetic cycle and derive the orbital solutions for each star from the RV observations. In Sect. <ref> we present the combination of astrometric measurements from Hipparcos and Gaia with RVs in order to constrain the mass and inclination of the heaviest companions. Our results are discussed in Sect. <ref>.
ยง OBSERVATIONS
The observations used in the present study were carried out using the HARPS spectrograph at the ESO 3.6 meter telescope at La Silla Observatory (Chile) <cit.>. The presented signals are part of a long-term ongoing exoplanet survey that started in 2003 as part of the HARPS GTO program (Mayor 2003) and is still continuing today (Lo Curto 2010). In May 2015, the fibre link of HARPS was upgraded <cit.>. With the upgrade, the instrumental profile was changed significantly. As this might induce an offset, we considered the data before and after the fibre upgrade as coming from two independent instruments. The magnitude of the offset varies for different stars and affects the CCF-FWHM as well.
The RVs result from version 3.5 of the HARPS data reduction software (DRS), which cross-correlates each spectrum with a numerical stellar template, matching its spectral type. The pipeline derives several cross-correlation function parameters: the RVs, the full width at half maximum (FWHM), the bisector span, and the contrast. Other results are the Mount Wilson S-index, the chromospheric emission ratio logR'_HK, and the activity indices based on the Na, Ca, and Hฮฑ lines.
The number of measurements and the basic characteristics of the measurements are summarised in Table <ref>. Our program is targeting mainly Jupiter-mass planets and therefore requires only a moderate RV precision of โผ 3 m s^-1, which corresponds to a signal-to-noise ratio (S/N) of about 40. As this goal is not always obtained, we include observations with an S/N of larger than 25. This limit is used as additional quality control (QC); because the exposure times are defined for an S/N of 40, values below 25 indicate a poor observation (bad seeing, bad weather, etc.). The total number of measurements N_meas excludes the observations with a S/N below 25 and observations that did not pass the DRS QC. The DRS QC checks for the reliability and accuracy of the data obtained by verifying the instrumental stability, data-reduction process, and consistency with known calibration sources <cit.>. The public HIRES <cit.> RV data are added to the analysis of BD-210397, and CORALIE observations <cit.> are added to the study of HD61383.
ยง STELLAR PROPERTIES
All stars presented in this paper are main sequence stars, like our Sun. Table <ref> summarises the parameters of the stars analysed in the present paper. The spectral type, V, and B-V values originate from the Hipparcos catalogue <cit.>, where the magnitudes are dereddened by <cit.>. The parallax with the derived distance is taken from Gaia DR3 <cit.>. The spectroscopic analysis of <cit.> provides log g (corrected by ), T_eff, and [Fe/H] for all stars, except BD-210397 <cit.> and HIP54597 <cit.>, which are too cold for the parameter estimation by <cit.>. The parameters M_V, L, and R_โ are determined from the above values using the bolometric correction from <cit.>. The age and mass estimates are obtained by <cit.>, apart from HIP54597 and BD-210397, which are calculated via theoretical isochrones <cit.>.
The average FWHM is obtained from the pre-2015 HARPS data cross-correlated with G2 masks, as the v sin i approximation adapted from <cit.> is calibrated on that data period and spectral type. The FWHM uncertainty is the standard deviation and is therefore an indicator of activity in the RV signal. The average chromospheric emission ratio logR'_HK results from the <cit.> method. The rotational period is estimated from the activity-Rossby relations described by <cit.>, implementing the convective overturn time from <cit.>, and then averaged. These relations are defined for stars with logR'_HK > -5.0. Here, we use the same method for relatively quiet stars to get an approximation of the rotational period. Long-term companions have periods that do not coincide with the rotational period of the stars, and we therefore do not require a very precise value. We use the standard deviations of logR'_HK and P_rot as estimates of their uncertainties.
ยง RADIAL VELOCITY DATA AND ORBITAL SOLUTIONS
The RV variations are examined using tools provided by the Data and Analysis Center for Exoplanets (DACE)[<https://dace.unige.ch>]. We use the data obtained before the fibre upgrade in 2015 as the reference systemic velocity, which we denote as ฮณ_03 and introduce an offset constant for the data obtained after the fibre upgrade, which we denote as ฮณ_15. To ensure the observed RV variation is caused by a companion, its correlations with the stellar activity indicators and the bisector span are inspected. <cit.> describe the computation of activity cycles as part of a detrending process to remove systematic effects from RV data. A model is fitted to the systematic effects โincluding activity cyclesโ and is subtracted from the RV data to improve the sensitivity with which planets are detected. As our main focus is on long-period companions, inspection of the correlations is sufficient. For periods shorter than a few months, the RV variation is also compared to the rotational period of the star and the bisector span to infer whether the signal might originate from starspots or plages. After fitting away the activity signals detected in the periodograms, further periodic RV variations in the data are searched for using the false alarm probability (FAP) of the periodogram as an indicator of the statistical significance of the candidate signal. The Keplerian model is computed as described in <cit.>. The values of the FAP are calculated analytically following <cit.>. When fitting multiple Keplerian solutions, the stellar activity indicators are again examined before each new fit. An MCMC is used to specify the orbital solution and its uncertainties, the algorithm for which is defined in <cit.>.
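For reference, the single-Keplerian RV model being fitted can be sketched in a few lines (our own illustrative implementation, not the DACE code; the parameterisation by period P, semi-amplitude K, eccentricity e, argument of periastron ฯ, time of periastron T_p, and systemic velocity ฮณ is the usual one):

import numpy as np

def kepler_solve(M, e, n_iter=50, tol=1e-10):
    # eccentric anomaly E from mean anomaly M (rad) by Newton iteration
    E = np.asarray(M, dtype=float).copy()
    for _ in range(n_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E = E - dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def keplerian_rv(t, P, K, e, omega, T_p, gamma):
    # radial velocity of the star at times t induced by one companion
    M = 2.0 * np.pi * (((t - T_p) / P) % 1.0)                 # mean anomaly
    E = kepler_solve(M, e)                                     # eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))  # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))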
A magnetic cycle is discussed in subsection <ref>, as an example of how to distinguish stellar activity from long-period exoplanets. In subsection <ref> we present the uncertain solution of HD3964, where the magnetic cycle hinders the accurate determination of the RV amplitude. The orbital solutions of the exoplanets without strong entanglement with the star's magnetic cycle are presented in subsection <ref>, while those of the brown dwarf candidates and binaries are presented in subsections <ref> and <ref>. For all solutions presented, we fixed the HARPS instrumental error to 0.75 m s^-1. This value is close to some of the lowest residual RMSs we obtain from the literature; see e.g. <cit.>. The true mass and inclination derived from astrometry in section <ref> classify some of the brown dwarf companions as stellar binaries; we therefore introduce these as candidates.
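In practice this fixed instrumental error is combined in quadrature with the photon-noise uncertainty of each measurement and with the fitted stellar jitter, so that the effective uncertainty of point i is schematically ฯ_i,eff^2 = ฯ_i,phot^2 + ฯ_inst^2 + ฯ_jitter^2, with ฯ_inst = 0.75 m s^-1 for HARPS (this is our summary of the usual convention, not a formula quoted from the text).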
ยง.ยง Magnetic cycle of HD11608
HD11608 was part of our analysis because the RVs show a prominent peak above 0.1% FAP at โผ3950 days in the periodogram. However, after inspecting the stellar activity correlation values, we find that it is more likely activity induced.
The observations show high (โณ 0.5) correlations with many activity indicators, such as the S-index, Ca-index, and logR'_HK (see Fig. <ref>), which all have peaks in the periodogram above 0.1% FAP at โผ 3300 days. After detrending for logR'_HK with a Gaussian low-pass filter (timescale 0.5 yr) the peak reduces to below 10% FAP. When using a longer timescale of 1 yr, the peak remains above 1% FAP, but the residuals correlate with the CCF-Contrast, with a coefficient of -0.21. When subsequently detrending the CCF-Bissector, all correlation coefficients fall below 0.1 and the RV peak stays above 10% FAP, as visible in Figure <ref>. The corresponding Keplerian solution has either an unlikely offset between the two datasets (21 m s^-1) or, when fixing the offset to 11 m s^-1 โwhich is a more plausible offset for K1/K2V <cit.>โ does not converge and finds a highly eccentric orbit with a period of decades. As neither solution is convincing, we conclude that the RV variations of HD11608 are most likely caused by stellar activity. If there is a companion present, its RV signal is heavily entangled with its magnetic cycle.
ยง.ยง Uncertain orbital solution of HD3964
HD3964 is a relatively quiet star (logR'_HK = -4.87). The logR'_HK periodogram has a peak at 23.5 days, which corresponds to the stellar rotation period (see Table <ref>). The RV variation shows a significant correlation with Hฮฑ, as is visible in the correlation values in Figure <ref>. The Hฮฑ-index has a peak in the periodogram around 1150 days, which is similar to the detected long-period RV variation at 1086 days.
There is a clear magnetic cycle visible in the RV variation of HD3964, but we cannot exclude a distant companion as well. After detrending for the Hฮฑ-index, the RV variation peak in the periodogram remains above 0.1% FAP level at a period of 1086 days, as is visible in Figure <ref>, where the first periodogram is before and the second is after detrending the Hฮฑ-index. The two other strong peaks (both above 10% FAP before detrending) are the aliases of the 1086 day period. As the periods of the magnetic cycle and possible companion are similar, there is no way to accurately determine the RV amplitude. Therefore, we present the orbital solution of HD3964 b with caution.
If the magnetic cycle is not intertwined with the companion signal, the detected long-period variation at 1086 days can be attributed to the presence of a distant Jupiter-like planet of 0.58 M_J minimum mass. The period of the RV variation does not correspond to the rotational period of the star. Figure <ref> shows the RV variation of HD3964 with the proposed orbital solution versus time; the residuals (O-C) are included.
The residuals show a correlation with the CCF-Bissector with a correlation coefficient of โผ0.3. However, the CCF-Bissector does not show any peaks above 10% FAP in its periodogram. The average variation in RV signal changes after the 2015 fibre upgrade (6.2 m s^-1 before, 9.8 m s^-1 after), which is also visible in Figure <ref>. The stellar jitter of the model is equal to 3.1 m s^-1 and the ฯ(O-C) is 3.49 m s^-1.
A drift does not improve the model; therefore, if there is a second companion, it should be at a short period. More extensive RV follow-up observations could provide more insight into the origin of the remaining fluctuations. Complementary RV measurements in the near-infrared (NIR) may help to disentangle stellar activity using the chromaticity of stellar active regions.
ยง.ยง Planetary systems
ยง.ยง.ยง HD94771
The observations of the quiet, relatively evolved (1.9 R_โ) star HD94771 reveal no strong correlation with stellar activity indicators. However, there is a strong periodic RV variation above the 0.1% FAP level at 2164 days, plus its 1 day and 1 year aliases. The Keplerian solution corresponds to a companion with a minimum mass of 0.53 M_J. There is no strong suggestion of a second companion in the current data: adding a drift does not improve the model and the 2.2 m s^-1 stellar jitter is towards the lower limit for a 1.2 M_โ star with log(g) โผ4 cm s^-2 <cit.>. We conclude that neither stellar activity nor the rotational period of HD94771 induces the observed RV variation. The signal is caused by a giant exoplanet. The low stellar jitter might imply the rotation of the star is seen pole-on; if the whole system is misaligned, the mass is much higher. In combination with the eccentric orbit, this suggests HD94771 b is potentially a brown dwarf. However, it is also possible that the star is in a quiet part of its cycle. As mentioned in Sect. <ref>, there is no strong astrometric signal visible for HD94771 b, and so we favour the latter explanation. The observations cover three periods of the orbit of HD94771 b, as visible in Figure <ref>.
ยง.ยง.ยง HIP54597
HIP54597 is a relatively quiet star. Only one observation shows excessive activity in comparison to the others. Therefore, after the pre-selection of the S/N > 25 and the DRS QC, this observation was excluded from the analysis. Stellar activity indicators S-index, Hฮฑ-index, log R'_HK , and P_rot suggest a modulation above 10% FAP level at a slightly shorter period (โผ2825 days) than the RV variation (3250 days), and have correlation coefficients of approximately -0.25.
However, after detrending for the S-index and the CCF-FWHM, the RV signal stays above FAP 0.1% (see Fig. <ref>); moreover, the detrending does not significantly affect the periodogram, and there are only some minor changes in the model (ฮฯ^2_red = 0.01). The correlation is small enough and the difference in periods large enough to conclude that the observed RV variation is not caused by stellar activity or the rotational period of the star, but by a companion with a minimum mass of 2.01 M_J . The Keplerian solution is shown in Figure <ref>. There are no strong indications of an additional companion, a drift does not improve the model, and 2.4 m s^-1 stellar jitter is relatively low for a K5V star <cit.>. The correlations suggest that there may be a magnetic cycle at play.
ยง.ยง.ยง BD-210397
There are 83 HARPS observations of the late K star BD-210397 that pass the DRS QC. One data point was re-reduced with a K5 mask to match the rest of the observations. The HARPS observations show a negative correlation with the Hฮฑ-, Ca-, Na-, and S-index, but their periodicity (respectively โผ5800, โผ5500, โผ5400, and โผ5250 days) is not similar to the RV variation (1891 and 6360 days). After detrending for the S-index, the correlation coefficients are reduced to below 0.1 and the two RV variations remain visible in the periodogram (see Fig. <ref>). We conclude that the mentioned correlations are not the origin of the RV variation. We include 18 public HIRES <cit.> observations to the analysis for better convergence of the fit.
There are two long-period RV variations visible in the observations, which are attributed to a companion with a minimum mass of 0.7 M_J and a period of 1891 days (BD-210397 b) and another companion
(BD-210397 c) of 2.4 M_J minimum mass and a period of 6360 days. When fitting BD-210397 c first, the peak of BD-210397 b reduces from above 0.1% to below 10% FAP. However, the model including only BD-210397 c comes with an unrealistic offset (-14 m s^-1) between the HARPS data before and after the fibre upgrade <cit.> and a stellar jitter of 10 m s^-1. When including BD-210397 b, the stellar jitter reduces to 8.9 m s^-1, the offset is more plausible at 19 m s^-1, and the ฯ^2_red improves from 25 to 17 (without including stellar jitter). The โ1-periodogram <cit.>, also finds the presence of the 1891 day signal after fitting the 6360 day signal. The โ1-periodogram is a variant of the Lomb-Scargle periodogram, but instead of minimising the โ2 norm (sum of squares), it minimises the โ1 norm (sum of absolute values). This allows the โ1 periodogram to be less sensitive to outliers and non-Gaussian noise.
The model is presented in Table <ref>, and the solution is visible in Figure <ref>. The period of BD-210397 c is not well constrained; more observations are required. As the logR'_HK value of BD-210397 is undefined, we cannot directly conclude whether the high stellar jitter (8.9 m s^-1) is caused by activity; although heavily uncertain, the age (6.2 ยฑ 4.7 Gyr) suggests it is not. However, the standard deviation of the FWHM (ยฑ 0.022 km s^-1) is large in comparison to the other targets presented in this paper, which indicates that BD-210397 might be on the more active side. Combining the S-index amplitude with the stellar mass 0.679 M_โ and log(g) = 4.67 cm s^-2, the stellar jitter is expected to be within the range of 3-9 m s^-1 <cit.>. The stellar jitter found is high but within the expected range.
Another explanation for the stellar jitter could be a very short-period companion. The periodogram indeed shows a โผ 0.5 day signal below 10% FAP level, with its aliases also present. When adding this signal, the jitter is reduced from โผ8.9 m s^-1 to โผ7 m s^-1. A short-period companion helps to reduce the jitter, but there are not enough data points to be sure. To determine whether the high stellar jitter is caused by another companion, by sampling or by aliasing effects, follow-up measurements with short time intervals are required.
The HIRES and HARPS O-C values are visible in Figure <ref>. Where for HARPS the average (absolute) O-C is 8 m s^-1, HIRES on average varies from the model by 10 m s^-1. The HIRES data do agree with the model
found but cover a very limited time span.
ยง.ยง.ยง HD74698
The RVs of the quiet star HD74698 do not show a high correlation with stellar activity indicators apart from the CCF-FWHM (0.26), which has a period โผ8500 days, and disappears when changing the offset between the two datasets. This is even more evident when binning the data every 60 days. As it is dependent on offset, the CCF-FWHM is not detrended and the offset is fixed to 15 m s^-1. This value is expected for a G5V star <cit.>, reduces the correlation between the RV and CCF-FWHM to almost zero, and is the best-fit offset found by the binned data. There are two signals above 0.1% FAP level visible in the RV periodogram, P_b = 15.017 days (K_b = 5.8 m s^-1) and P_c = 3449 days (K_c = 5.3 m s^-1). After adding the first Keplerian model, the correlations with the stellar activity indicators remain low, indicating that both periods do not originate from activity. There is a third signal visible in the periodogram at โผ1000 days (K โผ6.5 m s^-1); it does not correspond to any activity indicator. Apart from DACE, we used two independent programs: KIMA <cit.> and the โ_1-periodogram <cit.>, to verify the model. Both programs also suggest the presence of the 1000 day signal. KIMA finds the three-planet solution to be the most likely, and โ_1-periodogram finds all three periods but does not suggest the 1000 day signal as a prominent one. As the 1000 day signal is below 10% FAP level in the DACE periodogram and strongly depends on the offset between the two datasets, it does not appear sufficiently robust to be included in the analysis.
โ_1-periodogram is also used to calculate the periodograms of the stellar activity indicators, none of the stellar activities correspond to P_b, P_c, or the possible 1000 day signal. Figure <ref> shows the best-fit model, Figure <ref> the phase-folded solution for HD74698 b, and Table <ref> the corresponding orbital parameters. Neither signal originates from stellar activity or the rotational period. We conclude that HD74698 b and c are exoplanetary companions. There are large variations still visible in the residuals, which are potentially caused by a 1000 day period companion. Additional observations can provide more insight into this compound system, for which there is substantial evidence of a third companion. As the minimum masses of the two found signals are in the range from Neptune to Saturn, this star is a very interesting target for follow-up observations.
ยง.ยง Brown dwarf candidates
The mass limits of brown dwarfs are subject to debate; the generally applied range is between โผ 13 M_J (the deuterium-fusion limit) and 80 M_J (the hydrogen-fusion limit) <cit.>. However, <cit.> suggest that planet masses can go up to 25 M_J. Here we apply the commonly used limit of 13 M_J in order to differentiate brown dwarfs from massive planets.
The eccentricities of brown dwarfs are usually larger than those of exoplanets. As they are faint, they are difficult to detect by direct imaging; here, long-term RV surveys like ours provide a means for their detection.
ยง.ยง.ยง HD62364
The observations of the low-metallicity star HD62364 reveal a strong, highly eccentric RV variation. With a 5138 day period, this signal corresponds to a companion with a minimum mass of 12.7 M_J. The minimum mass of HD62364 b is at the edge of the range of brown dwarfs, but since its mass is higher with the inclination found (see section <ref>) and high eccentricity is more common among brown dwarfs, we conclude that this companion is probably a brown dwarf. The HARPS spectra cover the entire phase of HD62364 b, as is visible in Figure <ref>. There is no strong correlation with any stellar activity indicator and the rotational period is too short to create the observed fluctuation. There is no drift present and the remaining 3.0 m s^-1 stellar jitter is within the expected range for a 1.2 M_โ star with log(g) โผ4 <cit.>.
ยง.ยง.ยง HD56380
The inactive star HD56380 exhibits a strong RV variation above 0.1% FAP level at 3254 days, corresponding to a 33.2 M_J minimum mass companion, without strong stellar activity indicator correlations. The phase is well covered, as is visible in Figure <ref>. We conclude that the RV variation is not caused by activity or the star's rotational period but by a companion. The 1.2 m s^-1 stellar jitter of the Keplerian model is low for a 0.8 M_โ star with log(g) โผ4.5 and a drift does not enhance the fit.
ยง.ยง.ยง HD221638
The relatively quiet star HD221638 shows an RV variation with a period of โผ2 days that correlates with the CCF-FWHM with a coefficient of 0.43 and with the S-index with a coefficient of 0.20. None of the periods come close to the RV variation at 6064 days, which stays above 0.1% FAP level even after detrending the CCF-FWHM. The long-period RV variation corresponds to a companion with a minimum mass of 53 M_J. Adding a linear drift to the orbital parameters improves the fit by ฮฯ^2_red = 0.08, implying the presence of a companion with an even longer period. After detrending the CCF-FWHM and adding a drift, there is no remaining stellar jitter. The phase is fully covered and shown in Figure <ref>.
ยง.ยง.ยง HD33473A
The companion of the evolved star[Though given as a dwarf in Table <ref>, the log(g) and R suggest it is evolved, and the spectral type is a Hipparcos value and does not take into account the more recent parameters from Gaia and/or others.] (2.3 R_โ) HD33473A was discovered by <cit.>. However, the Keplerian solution was incomplete as the observations covered a small fraction of the period and assumed an additional long-term drift.
The increased number of observations allows us to present a significantly improved orbital solution (ฮฯ^2_red = 2.25). As the phase is not fully covered and the model has difficulties converging when the offset is allowed to vary, we fix the offset between the two datasets to 14 m s^-1. This offset corresponds to an approximation for its spectral type G3V, derived from the values mentioned for G2V and G4V in <cit.>. When excluding one observation with excessive activity (ฮlogR'_HKโผ -0.4), the RV variation correlates with the CCF-Contrast with a coefficient of 0.44 (no period above 10% FAP) and with the CCF-FWHM with a coefficient of -0.49. The CCF-FWHM shows a long period below 10% FAP, which appears to be increasing outside of the range of the span of the observations.
After detrending the CCF-FWHM, the strong signal of 50 years remains visible in the periodogram. This period corresponds to a
companion with a minimum mass of 38.3 M_J, which is significantly heavier than initially thought (7.2 ยฑ 0.3 M_J). The rotational period of the star (32 days) cannot account for the signal, nor can HD33473B, the stellar companion at a separation of 10 arcsecs <cit.>. Though a stellar companion justifies the inclusion of a long-term drift in the Keplerian model, we decided not to include one, as a linear drift does not improve the model. The phase is still not entirely covered, as shown in Figure <ref>; more observations could further improve the orbital solution. We refer to HD33473C as the companion to HD33473A, which was previously designated as HD33473A b by <cit.>.
ยง.ยง Binaries
Long-term RV monitoring allows the detection of binaries not yet resolved by direct imaging because they are too faint and their separation is too small. However, as their periods are on the order of decades, the phases of the binaries are not always fully covered. To confirm the origin of the RV variation, we searched for a second component in the CCFs, and resolve binaries as SB2; see <cit.> for an in-depth explanation of the applied method. Briefly, the CCFs are recomputed with an M mask to optimise the detectability of the low-mass companion. The CCFs are shifted to the systemic velocity V_0 and their average is subtracted to remove the first component. The expected radial velocity of the second component is defined by the radial velocity of the main component V_1, the systemic velocity V_0, and the mass ratio q = M_pl / M_โ. The residual CCFs are shifted to this expected value and again averaged. Consequently, the secondary is expected to be at V_0. We inspected various combinations of V_0 and q; by searching for the deepest peak at the varying location of V_0, we found which combination best matches the CCFs. The range of V_0 is defined by what is feasible according to the RV model. In this section, we present the finding of one stellar binary (HD11938) confirmed using this method, and constrain its inclination i. The other presented stellar binary (HD61383) cannot be confirmed this way due to the blending of the primary and secondary peaks in the CCF.
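A schematic version of this shift-and-stack search (our own sketch, not the authors' code; the helper names and the use of linear interpolation for the velocity shifts are our choices, and q denotes the secondary-to-primary mass ratio M_2/M_1):

import numpy as np

def shift(v_grid, profile, dv):
    # displace a CCF by +dv km/s, resampled onto the fixed velocity grid
    return np.interp(v_grid, v_grid + dv, profile)

def secondary_stack(v_grid, ccfs, v1, v0, q):
    # ccfs: (n_obs, n_vel) CCFs computed with an M mask; v1: primary RVs (km/s)
    # v0: trial systemic velocity; q: trial mass ratio M2/M1
    # 1. align the primary peaks at v0 and build a mean primary profile
    aligned = np.array([shift(v_grid, c, v0 - v) for c, v in zip(ccfs, v1)])
    primary = aligned.mean(axis=0)
    # 2. subtract the primary from each observation, back in its own frame
    residuals = np.array([c - shift(v_grid, primary, v - v0)
                          for c, v in zip(ccfs, v1)])
    # 3. expected secondary velocity, then shift each residual so it lands at v0
    v2 = v0 - (v1 - v0) / q
    stacked = np.mean([shift(v_grid, r, v0 - v) for r, v in zip(residuals, v2)],
                      axis=0)
    return stacked

# scanning a grid of (v0, q) and keeping the combination that produces the deepest
# dip of the stacked residual at v0 identifies the secondary, as done for HD11938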
ยง.ยง.ยง HD11938
There are 25 observations of the active star HD11938 that pass the DRS quality control and have S/N > 25. One measurement was reduced with a G2 mask instead of a K5 mask and was therefore reprocessed. As there are only 25 observations that do not fully cover the phase of HD11938B we fixed the offset between the HARPS data before and after the fibre upgrade to 15 m s^-1 as suggested by <cit.> for K4/5V spectral types. There is a clear signal in the periodogram of 35 years. The stellar binary is also visible in the CCFs, where the best solution is at a V_0 of 40.7 ยฑ 0.2 km s^-1 (= ฮณ_03). The mass ratio q in that case is 0.44 ยฑ 0.03, corresponding to a secondary mass of 0.33 ยฑ 0.03 M_โ. Figure <ref> shows the average CCF residual for this V_0 and q combination in red.
HD11938 is an active star (logR'_HKโผ -4.5) with correlations with all stellar activity indicators, but none have periodical signals above 10% FAP level in the periodogram. This strong RV variation signal, with an amplitude of 2.41 km s^-1, is therefore not caused by activity. We fix ฮณ_03 to the found systemic velocity and its error margins V_0 = 40.7 ยฑ 0.2 km s^-1 in three iterations (i.e. 40.5, 40.7, and 40.9 km s^-1). The model presented in Table <ref> corresponds to 40.7 km s^-1, and the presented errors correspond to the models found for 40.5 and 40.9 km s^-1. This is done because the MCMC model drifts away from the found systemic velocity when using a prior. We note that the period is heavily dependent on the systemic velocity. The solution is visible in Figure <ref>. The found minimum mass m sin i = 0.21 M_โ corresponds to an inclination of 39.33^โยฑ 0.07 ^โ. The model does not include detrending or a drift, as both do not improve the model. The relatively large 9 m s^-1 stellar jitter agrees with an active star (without detrending). We conclude that the rotation period cannot account for the signal nor can the stellar activity. The RV variation belongs to a stellar binary companion, separated by 262 mas from its host star. The Gaia archive does not show a secondary within this distance range.
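As a consistency check (ours): sin i = (M_2 sin i)/M_2 = 0.21/0.33 โ 0.64 gives i โ 40^โ, in line with the quoted 39.33^โ; the small difference comes from rounding of the two masses.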
§.§.§ HD61383
There are 66 HARPS observations of the metal-poor quiet star HD61383 that pass the DRS QC. The average photon noise is 3.1 m s^-1. To increase phase coverage, the analysis also includes 20 observations from CORALIE <cit.> with an average photon noise of 6.3 m s^-1. The CORALIE instrumental error is fixed at 5.0 m s^-1. The phase of the data is not fully covered; full coverage would require at least 20 more years of observations, as is visible in Figure <ref>. Due to the incomplete phase coverage, the model has difficulties converging. We fix the offset between the HARPS data to 14 m s^-1, as HD61383 is a G3V star. There is a strong peak in the periodogram at 39.1 years, corresponding to a minimum mass of 0.18 M_☉.
The HARPS observations weakly correlate with almost all stellar activity indicators and though there are some with long-term trends, the amplitude of the RV variation is too large to be caused by stellar activity. The binary is barely visible in the CCFs as a secondary peak, but the amplitude is small in comparison to the noise, as it is in most CCFs blended with the primary peak. With the currently available data, we present the orbit, as shown in Table <ref> and Figure <ref>.
§ TRUE MASS AND INCLINATION FROM ASTROMETRY
Astrometry is a promising technique for measuring the inclinations of our systems, and the single-measurement precision of Gaia in particular should allow such measurements (10as, based on Gaia EDR3). The S/N reached by Gaia can be estimated by combining the astrometric signal (approximately M_pl M_⋆^-1 a_pl d^-1 for M_pl ≪ M_⋆) with the single-measurement precision and the number of Gaia measurements from the Gaia Observation Forecast Tool[<http://gaia.esac.esa.int/gost/>]. All presented targets have an expected S/N above 6.2, the minimum value needed to retrieve an astrometric orbit from combined astrometry and radial velocity data <cit.>. However, it is not certain that Gaia covers the full orbit, and as the presented periods are on the order of several years, depending on the companion mass, the signal might be absorbed into the proper motion. The Gaia excess noise values and their significance are an indication of the astrometric signal. For all presented targets, the excess noise significance is larger than 2, meaning there is a significant astrometric signal seen by Gaia <cit.>.
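As an illustration of this estimate, a rough back-of-the-envelope sketch is given below; the per-transit precision, the number of field-of-view crossings, and the √N scaling are placeholder assumptions, not Gaia or GOST values.

```python
import numpy as np

def astrometric_snr(m_companion_mjup, m_star_msun, a_au, dist_pc,
                    sigma_fov_uas=100.0, n_fov=30):
    """Crude Gaia detectability estimate following the text above.

    The astrometric signature of the host star is alpha ~ (M_c / M_*) * a / d
    (valid for M_c << M_*, with a in au and d in pc giving alpha in arcsec).
    Combining it with an assumed per-field-of-view precision sigma_fov_uas and
    an assumed number of field-of-view transits n_fov gives a rough S/N.
    """
    m_c_msun = m_companion_mjup * 9.546e-4                 # Jupiter masses -> solar masses
    alpha_uas = (m_c_msun / m_star_msun) * (a_au / dist_pc) * 1e6   # arcsec -> micro-arcsec
    return alpha_uas * np.sqrt(n_fov) / sigma_fov_uas

# Example: a 3 M_J companion at 5 au around a 1 M_sun star at 30 pc
print(astrometric_snr(3.0, 1.0, 5.0, 30.0))
```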
As the individual Gaia observations have not yet been released, we are limited to the currently available proper motion and RA/Dec positions. To derive the true mass and inclination, we use the Python package orvara <cit.>, which can fit Keplerian orbits by utilising a combination of radial velocity, relative astrometry, and available absolute astrometry data. Here we combine the radial velocity observations presented in this paper with the absolute astrometry data from the Hipparcos-Gaia Catalog of Accelerations (HGCA, <cit.>). In addition, we use an extension of orvara that allows priors on orbital periods and semi-major axes,[<https://github.com/nicochunger/orvara/tree/period-prior>] and fix the offsets between the radial velocity data sets to the values presented in section <ref>, as otherwise orvara finds values outside of the expected offset range. See Appendix <ref> for the resulting corner plots, and Appendix <ref> for the proper motion plots.
For most of the exoplanets, the amplitude and period are too low to properly fit the Hipparcos-Gaia proper motions. Apart from HIP54597 b, HD74698 c, and BD-210397 c, the mass and inclination of the exoplanets cannot be constrained. For HIP54597 b, the constraint is weak and varies between inclinations of 120° and 60°. This might increase the mass of the planet by 10%-30%, but not significantly more. The constraint on BD-210397 c is slightly better, with peaks at 50° and 130°, favouring the former, and a mass of 2.7 M_J. The inclination of HD74698 c is 90° ± 33°; though this is not well constrained, it shows that the minimum mass is likely equal to its true mass. Since the inclinations of HIP54597 b, HD74698 c, and BD-210397 c are not well constrained, their results are not included in Table <ref>. However, we do conclude that these companions have masses in the planetary range. The convergence is better for our more massive companions, and with the found models, three of the four brown dwarf candidates are found to be stellar binaries (HD56380B, HD221638B, and HD33473C). HD61383 has an edge-on orbit (i = 86.2°), and its true mass is very close to the minimum mass. For HD11938, orvara finds an inclination of 64.0°, corresponding to a mass of 0.386 M_☉, which is slightly higher than the mass found via the secondary peak in the CCF. orvara does not find the same RV orbital parameters as the simple RV analysis, which causes discrepancies between all presented minimum masses and true masses. For the vast majority of our stars, the difference is negligible. For HD11938, on the other hand, orvara finds a different systemic velocity. This stems from the fact that the orbit is incomplete and that, in the RV model, the systemic velocity is fixed to the value found from the second component in the CCF. When the systemic velocity is left free, the period can increase up to 41 years and the minimum mass increases to 0.30 M_☉, which corresponds better to the model found by orvara.
The true masses found are in agreement with the Gaia renormalised unit weight error (RUWE) values, which are expected to be 1 for well-behaved single-star solutions. Values significantly greater than 1, with a limit generally placed at 1.4, indicate that the star is either not a single star or is otherwise problematic. All presented stars have RUWE values below 1.4, except HD56380 (1.8348), which hosts one of our most massive companions (0.36 M_☉). The other massive companion, HD11938B, does not produce a large RUWE, probably due to its very long orbital period (35 years).
Considering that HD56380B is our second most massive companion, we decided to reinspect the CCF following the method described in section <ref> and identified a second component compatible with a massive companion. However, considering that both components are always blended (within 2 km/s), the method does not allow us to derive a precise mass ratio.
§ DISCUSSION AND CONCLUSIONS
In this paper, we discuss the RV variations of 12 stars. We present the discovery of six exoplanets, one uncertain exoplanet candidate, one brown dwarf, four stellar binaries and the improved orbital solution of the stellar binary HD33473C discovered earlier by <cit.>. An overview of the detections is presented in Figure <ref>.
The biggest difficulty for long-period companions arises from the entanglement of the RV signal with the star's magnetic cycle. Therefore, for all proposed companions, we discuss the possibility of the RV variation originating from activity. For comparison, we also present an activity-induced signal (HD11608) and a possible exoplanet signal with strong entanglement with its magnetic cycle (HD3964). These two stars show the value of investigating the correlation with stellar activity indicators. Detrending reduces the activity-induced component but makes it difficult to constrain the orbital period. As HD3964 has an RV period (1086 days) similar to the Hα-index period (1150 days), there is no way to accurately determine the RV signature of the possible companion.
HD56380 and HD61383 are relatively old (>12 Gyr). In principle, this is not inconsistent with their average lifetime, but it is uncommon to get so close to the upper age limit for these spectral types. A possible explanation could be that the light of the secondary was blended in the observations, causing the age of the primary star to be overestimated. The ages originate from <cit.>, who use isochrones based on T_eff, [Fe/H], and V magnitudes. The latter is sensitive to blending. As HD56380 and HD61383 are binaries, and given that we find that the secondary peak in the RV CCF is blended with the primary peak for both sources, the age could indeed be influenced by the secondary.
After determining the orbital solutions, we examined for all companions whether the transit windows expected from the combination of period and time of periastron overlap with available photometry. There are no CHEOPS light curves for the presented stars, and the TESS light curves have no superposition with the potential transits. The only companion with superposition with the TESS sectors is HD74698 b, which does not show a transit or a 15-day periodicity: the orbit is not well aligned with our line of sight. As the potential transit times have uncertainties on the order of months, and for some of our targets even on the order of years, we also used a more general approach and inspected the Lomb-Scargle periodograms of the TESS light curves. Again, no periodicities were found, meaning there are no observed transits for our targets, but also that there are no activity-induced signals with the same periods as our companions.
Long-period companions provide interesting systems for follow-up observations. HD3964 has a remaining RV signal of 4 m s^-1, which is likely caused by activity given that the CCF bisector is correlated with the residuals, but this could be better constrained by follow-up observations. HD94771 does not show any strong signs of multiple companions and has a relatively high eccentricity, making the stability of a potential multi-planet system less likely. For HIP54597, there is no significant evidence of additional planets, but there is for both BD-210397 and HD74698. As the model for BD-210397 includes a high jitter and HD74698 shows a 1000-day period in the periodogram, both are very interesting targets for follow-up observations. Apart from BD-210397, none of the exoplanets' Keplerian models show a high jitter. For BD-210397, the activity is unknown, but according to the S-index, stellar mass, and surface gravity, the jitter falls within the expected range <cit.>. If the jitter is caused by a companion, it is a very short-period signal (∼0.5 days); this can be verified by observations at short time intervals.
It is common for brown dwarfs to have higher eccentricities than exoplanets. According to the NASA exoplanet archive, 94% of the confirmed exoplanets[Defining `confirmed' as companions with well-constrained masses and periods (> 3 σ).] have eccentricities e ≤ 0.5, versus 69% of the brown dwarfs. For e ≤ 0.3, this value decreases to 84% for the exoplanets and to 56% for the brown dwarfs. Brown dwarfs are thus less likely than exoplanets to have eccentricities below 0.3. Our results are in agreement: only one of the proposed exoplanets (HD94771 b) has an eccentricity larger than 0.3, while HD62364 has a high eccentricity (0.607). When comparing long-period giant planets to short-period giant planets, placing the limit at 100 days (see also Figure <ref>), a similar trend occurs: 90% of the short-period giant planets have eccentricities e ≤ 0.3, versus 72% of the long-period giant planets.
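These fractions are straightforward to reproduce from an archive export; the sketch below assumes a hypothetical CSV with columns mass_mjup, period_days and ecc (not the archive's actual schema) and a crude 13 M_J planet/brown-dwarf boundary.

```python
import pandas as pd

# Hypothetical export of companion parameters; column names are assumptions.
tab = pd.read_csv("confirmed_companions.csv")   # columns: mass_mjup, period_days, ecc

planets = tab[tab.mass_mjup < 13.0]                              # crude deuterium-burning cut
bds = tab[(tab.mass_mjup >= 13.0) & (tab.mass_mjup < 80.0)]      # brown dwarfs

for name, grp in [("exoplanets", planets), ("brown dwarfs", bds)]:
    for e_lim in (0.5, 0.3):
        frac = (grp.ecc <= e_lim).mean()
        print(f"{name}: fraction with e <= {e_lim}: {frac:.0%}")

# Same comparison for short- vs long-period giant planets, split at 100 days
giants = planets[planets.mass_mjup > 0.3]
short_p = giants[giants.period_days < 100.0]
long_p = giants[giants.period_days >= 100.0]
print((short_p.ecc <= 0.3).mean(), (long_p.ecc <= 0.3).mean())
```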
According to the planet-metallicity correlation, host stars with higher metallicities are more likely to host a giant planet <cit.>, which is particularly true for hot Jupiters <cit.>. The short-period giant planets (< 100 days) have parent stars with an average metallicity of 0.11 dex. This value is 0.04 dex on average for stars hosting long-period giant planets, similar to the value for our small sample of presented stars hosting (long-period) giant planets (0.05 dex). This is in agreement with the findings of <cit.>: giant planets orbiting metal-poor stars have longer periods than those in metal-rich systems.
The findings of <cit.> highlight the possibility that stars with massive giant planets (M ≳ 4 M_J) do not follow the same metallicity trend as stars with lower-mass giant planets. The stars hosting massive giant planets are on average more metal-poor. Though all planets discussed in the present paper have minimum masses below 4 M_J, this may indicate that HIP54597 b ([Fe/H] = -0.22, m_2 sin i = 2.01 M_J) falls in the massive giant category; however, the results from orvara show this is unlikely.
The discovery of very long-period companions requires years of observations. With the present research, we probe a region of a largely unknown population. The current baseline of 19 years allows us to detect Saturn-mass companions with periods of 10 years, and brown dwarfs and stellar binaries with periods of up to 50 years. Though the RV observations do not cover the full phase of the found stellar binaries, we are able to constrain the orbital parameters and confirm the origin of HD11938B using the secondary peak in the combined CCFs, and of HD62364 b, HD56380B, HD221638B, HD33473C, HD11938B, and HD61383B via absolute astrometry. As by-products of this survey, the brown dwarfs and binaries demonstrate the effectiveness of a long baseline. The impact of continuing the observations is made clear by the difference between the orbital solution of HD33473C found by <cit.> and the solution for this object presented here. Incomplete orbits are prone to errors. Following indications by <cit.> and <cit.>, the long-period Jupiters detected in this work are prime targets for the detection of low-mass inner planets.
The authors thank the ESO staff at La Silla for their diligent and competent help during the observations and for the effort to maintain the instrument operating and stable for so many years. We thank Emanuela Pompei for providing helpful comments. The HARPS spectrograph was built by the contributions of the Swiss FNRS, the Geneva University, the French Institut National des Sciences de l'Univers (INSU) and ESO. This research made use of the Simbad database, operated at the CDS, Strasbourg, France.
This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation. NCS acknowledges the support from the European Research Council through grant agreement 101052347 (FIERCE). This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UIDB/04434/2020; UIDP/04434/2020. This publication makes use of The Data & Analysis Center for Exoplanets (DACE), which is a facility based at the University of Geneva (CH) dedicated to extrasolar planets data visualisation, exchange and analysis. DACE is a platform of the Swiss National Centre of Competence in Research (NCCR) PlanetS, federating the Swiss expertise in Exoplanet research. The DACE platform is available at https://dace.unige.ch.
§ ORVARA CORNER PLOTS
§ ORVARA PROPER MOTION PLOTS
entry_id: http://arxiv.org/abs/2306.10954v1
published: 20230619141136
title: sEMG-based Hand Gesture Recognition with Deep Learning
authors: ["Marcello Zanghieri"]
primary_category: eess.SP
categories: ["eess.SP", "stat.ML"]
Alma Mater Studiorum · Università di Bologna
Scuola di Scienze
Dipartimento di Fisica e Astronomia
Corso di Laurea Magistrale in Fisica
sEMG-based
Hand Gesture Recognition
with Deep Learning
Supervisor:
Prof. Daniel Remondini
Co-supervisors:
Dr. Simone Benatti
Dr. Francesco Conti
Dr. Manuele Rusci
Presented by:
Marcello Zanghieri
Academic Year 2017/2018
Hand gesture recognition based on surface electromyographic (sEMG) signals is a promising approach for the development of Human-Machine Interfaces (HMIs) with a natural control, such as intuitive robot interfaces or poly-articulated prostheses. However, real-world applications are limited by reliability problems due to motion artifacts, postural and temporal variability, and sensor re-positioning.
This master thesis is the first application of deep learning to the Unibo-INAIL dataset, the first public sEMG dataset exploring the variability between subjects, sessions and arm postures, collected over 8 sessions for each of 7 able-bodied subjects executing 6 hand gestures in 4 arm postures. In the most recent studies, the variability is addressed with training strategies based on training-set composition, which improve the inter-posture and inter-day generalization of classical (i.e. non-deep) machine learning classifiers, among which the RBF-kernel SVM yields the highest accuracy.
The deep architecture realized in this work is a 1d-CNN implemented in PyTorch, inspired by a 2d-CNN reported to perform well on other public benchmark databases. On this 1d-CNN, various training strategies based on training-set composition were implemented and tested.
Multi-session training proves to yield higher inter-session validation accuracies than single-session training. Two-posture training proves to be the best postural training (proving the benefit of training on more than one posture), and yields 81.2% inter-posture test accuracy. Five-day training proves to be the best multi-day training, and yields 75.9% inter-day test accuracy. All results are close to the baseline. Moreover, the results of multi-day trainings highlight the phenomenon of user adaptation, indicating that training should also prioritize recent data.
Though not better than the baseline, the achieved classification accuracies rightfully place the 1d-CNN among the candidates for further research.
CHAPTER: INTRODUCTION
Hand gesture recognition based on electromyographic (EMG) signals is an innovative approach for the development of human-computer interaction, the vast field whose aim is to implement human-machine interfaces (HMIs) and intuitive interaction devices. The research in this field is motivated by the need for intelligent devices able to extract information from sensor data, operating in real time and under significant power, size and cost constraints. The wide range of applications of HMIs with EMG-based intuitive control includes (but is not limited to) robot interaction and industrial robot control, game or mobile interfaces, interactions for virtual environments, sign language recognition, rehabilitation, and control of poly-articulated prostheses <cit.>.
The electromyographic (EMG) signal is the biopotential generated by the ionic flow through the membrane of the muscular fibers during contraction, and is a major index of the muscular activity. EMG data can be acquired either with invasive or non-invasive measuring instruments. Invasive methods employ wire or needle electrodes, which penetrate the skin to reach the muscle of interest. On the contrary, surface electromyography (sEMG) is a non-invasive technique that uses surface electrodes applied on the surface of the skin <cit.>. In the HMI field, building gesture recognition on the analysis of sEMG signals is one of the most promising approaches, since non-invasiveness is an essential requirement for many types of HMIs.
An open challenge in HMI design is the development of solutions based on a robust recognition approach. On the one hand, the implementation of devices showing high recognition capabilities in controlled environments raised industrial interest and led to the availability of commercial solutions based on the EMG-based interaction paradigm. On the other hand, in many real-world scenarios the adoption of EMG-based HMIs is still limited by reliability problems such as motion artifacts, postural and temporal variability, and issues caused by sensors re-positioning at each use.
Nowadays, research efforts focus on solving issues related to postural variability effects and long-term reliability. Such research efforts are being boosted by two factors. (1) The release of public EMG databases suitable for the analysis of variability, whose publication eases the creation of benchmarks for the research community; the Unibo-INAIL dataset studied in this master thesis was realized for this very purpose. (2) The increasing recourse to deep learning, whose deep hierarchical approach, together with entirely data-driven feature extraction, promises to speed up the search for effective representations able to empower the recognition models.
§ EMG-BASED HMIS AND GENERALIZATION ISSUES
The Electromyogram (EMG) is the biopotential signal resulting from muscular activity, and is a major index thereof. It can be sensed by means of non-invasive surface electrodes, giving rise to the surface EMG (sEMG) signal. The processing of sEMG signals is a promising approach for the implementation of non-invasive EMG-based Human-Machine Interfaces.
However, the current state-of-the-art has to cope with challenging issues. The sEMG signal is severely affected by many factors, such as differences between subjects, fatigue, user adaptation and the variability introduced from the re-positioning of electrodes at each data collection session. These issues limit long-term use and reliability of the devices relying on EMG analysis.
In the machine learning framework, these variability factors can be modeled with the concept of data sources, i.e. data subsets coming from different distributions (the concept of multi-source data is defined in Subsection <ref>). Identifying multiple sources makes machine learning on EMG data a challenging task, where the ambition is to implement classifiers capable of good inter-source generalization, e.g. in inter-posture, inter-session or inter-subject scenarios. Up to now, the classification accuracy with Leave-One-Subject-Out cross-validation (LOSOCV) is still much lower than that attained with Within-Subject cross-validation (WSCV) <cit.>.
§ DEEP LEARNING REVOLUTION
At present, an increasingly important role in sEMG-based gesture recognition (and in human-computer interaction in general) is being played by deep learning. Existing deep learning architectures are mainly based on two kinds of model: the Convolutional Neural Network (CNN), able to capture spatial information of the signal, and the Recurrent Neural Network (RNN), which exploits the sequential nature of the data. This master thesis is exclusively focused on the CNN architecture, because CNNs are (1) easier to train and (2) more suitable for deployment on embedded hardware, which makes them more attractive for this work. On the other hand, it must be noted that RNNs are typically more accurate on time-domain data, so the choice between the two models is not clear-cut.
In the framework of sEMG-based gesture recognition based on classical machine learning, the pipeline typically consists of data acquisition, data preprocessing, feature extraction, feature selection, model definition and inference <cit.>. The reason behind the new deep learning perspective for sEMG-based gesture recognition is that it mitigates the strong need for feature extraction, feature selection and parameter tuning, which strongly rely on specific domain knowledge. The advantage of a deep architecture is its ability to incorporate feature learning: a consistent part of the traditional pipeline can be entrusted to the training of the algorithm, which has enough capacity to learn effective feature representations on its own. This mitigates the reliance on rigid combinations of preprocessing steps and precise sets of hopefully discriminative features.
The deep learning approach has already started to speed up the research in this field. In particular, the CNN-based sEMG gesture recognition has been studied by Atzori et al. in <cit.>, achieving comparable performance with traditional methods on the Non-Invasive Advanced Prosthetics (NinaPro) database. Another CNN architecture with adaptive feature learning improving inter-subject generalization has been proposed by Park and Lee in <cit.>. Geng et al. <cit.> presented a new CNN architecture for instantaneous sEMG images on three sEMG benchmark datasets. Du et al. <cit.> designed a semi-supervised deep CNN which also exploits the auxiliary information of a data glove: classification performance is improved by training the auxiliary task of regression of glove signal.
§ PURPOSE OF THIS MASTER THESIS
The purpose of this master thesis is to apply deep learning for the first time to the Unibo-INAIL dataset, the first public dataset of surface electromyographic signals (sEMG) exploring the impact of combined postural and temporal variabilities on myoelectric hand gesture recognition <cit.>. In particular, the deep learning architecture used is a one-dimensional Convolutional Neural Network (1d-CNN), which performs convolutions over the time dimension.
This is a novel approach since the application of deep learning yields new knowledge about the performance of machine learning on this dataset, which was previously analysed only with algorithms belonging to classical (i.e. non-deep) machine learning, applied only to instantaneous signal values. The CNN architecture used in this work is an adaptation of the 2d-CNN module of the hybrid CNN-RNN architecture proposed by Hu et al. in <cit.>. The evaluation is designed to directly compare deep learning and classical machine learning on each of the intra- and inter-session scenarios available as a baseline, for the various training- and validation-set compositions.
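For orientation, a minimal PyTorch sketch of a 1d-CNN of this kind is shown below, assuming 4-channel sEMG windows of 150 samples and 6 gesture classes; the layer widths and kernel sizes are illustrative and are not the architecture evaluated in this thesis.

```python
import torch
import torch.nn as nn

class Sketch1dCNN(nn.Module):
    """Illustrative 1d-CNN over the time axis; not the exact thesis architecture."""
    def __init__(self, n_channels=4, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),  # temporal convolution
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                              # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):          # x: (batch, n_channels, window_len)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # per-class scores (logits)

logits = Sketch1dCNN()(torch.randn(8, 4, 150))   # 8 windows of 4-channel sEMG
print(logits.shape)                              # torch.Size([8, 6])
```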
At the dataset's state-of-the-art, the variability is addressed with training strategies that improve inter-posture and inter-session generalization of classical (i.e. non-deep) machine learning classifiers, among which the RBF-kernel SVM yields the highest accuracy. The deep model implemented and tested is a CNN architecture reported to perform well on other public benchmark databases. On this CNN, the state-of-the-art training strategies are implemented and tested.
§ THESIS STRUCTURE
The remainder of this master thesis is structured as follows:
• Chapter 2 introduces the fundamentals of surface electromyography (from muscular activation potential to signal collection at the electrodes), compares the classical machine learning framework and the deep learning framework of sEMG-based hand gesture recognition, and summarizes the state of the art regarding the classification robustness against the signal variability factors (both in general and on the Unibo-INAIL dataset);
• Chapter 3 introduces the fundamentals of deep learning, of convolutional neural networks and of the architectural features used in this master thesis;
• Chapter 4 illustrates the materials and methods of this work: the Unibo-INAIL dataset, the adopted preprocessing and pipeline, the CNN architecture used (together with all the training settings), and the training strategies implemented and tested;
• Chapter 5 describes the scripts developed and how they exploit the main packages of the open source deep learning platform PyTorch;
• Chapter 6 presents the results obtained for the various training strategies;
• Chapter 7 presents the conclusions and outlines the future work.
CHAPTER: SURFACE ELECTROMYOGRAPHY AND SEMG-BASED GESTURE RECOGNITION
Surface Electromyography is the field that studies the sensing, processing and applications of the surface electromyographic (sEMG) signal, i.e. the electrical signal generated by contractions of skeletal muscles and sensed by non-invasive surface electrodes. For the development of Human-Machine Interfaces (HMIs), building gesture recognition on the analysis of sEMG signals is one of the most promising approaches, since for many applications non-invasiveness is an essential requirement.
This chapter is structured as follows:
• Section 2.1 introduces the fundamental concepts of surface electromyography, from the muscular activation potentials to the signal collection at the electrodes;
• Section 2.2 is devoted to sEMG-based gesture recognition: it provides an overview of the techniques and results of both the classical machine learning framework and the deep learning framework, and summarizes the state of the art regarding the search for classifiers that are robust against the signal variability factors, both in the general field and on the Unibo-INAIL dataset.
§ SURFACE ELECTROMYOGRAPHY
Electromyography <cit.> is the discipline that studies the detection, analysis, and applications of the electromyographic (EMG) signal, the electrical signal generated by muscles in correspondence with contractions. In particular, surface electromyography (sEMG) is a non-invasive technique for measuring and analysing the EMG signal of skeletal muscles by means of surface electrodes.
The electromyographic (EMG) signal is the bio-electric potential that arises from the current generated by the ionic flow through the membrane of the muscular fibers. It is therefore a major index of the muscular activity. During a muscular contraction, the depolarization of the tissue cell membrane, caused by the flow of Na^+ and K^+ ions, propagates along the muscle fibers. The origin of the potential is the electrical stimulus that starts from the central nervous system, and passes through the motor neurons (motoneurons) innervating the muscular tissue, giving rise to the Action Potentials (APs).
EMG data can be acquired either with invasive or non-invasive measuring instruments. Invasive methods employ wire or needle electrodes, which penetrate the skin to reach the muscle of interest. On the contrary, surface electromyography (sEMG) is a non-invasive technique that uses surface electrodes operating on the surface of the skin <cit.>. For many human-machine interfaces, non-invasiveness is an indispensable requirement. In the sEMG setup, APs can be detected by means of an instrumentation amplifier with the positive and negative terminals connected to two metal plates positioned on the skin surface: the sEMG signal results from the superposition of all the APs detected beneath the amplifier <cit.>.
The EMG signal amplitude depends on the size and distance of the muscles underlying the electrodes, and typically ranges from 10 µV to 10 mV. The EMG signal bandwidth stays within 2 kHz. The noise sources affecting the EMG signal are many, the most important being motion artifacts, floating ground noise and crosstalk. A major source of interference is Power Line Interference (PLI) <cit.>, which is due to the capacitive coupling between the body and the surrounding electrical devices and power grid, and is common to most biomedical signals. Despite having a nominal mains frequency of 50 Hz in Europe and 60 Hz in the USA, PLI is non-stationary and can present variations in frequency (up to ±2 Hz) and variations in amplitude (depending on instrumentation and environment), both mainly originating from the AC power system. Though beyond the scope of this master thesis, the development of filters and removal algorithms to reject PLI effectively is an active field of research.
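As an illustration of the simplest countermeasure, the SciPy sketch below applies a narrow notch at the nominal mains frequency, assuming a 1 kHz sampling rate and a synthetic signal. A fixed notch does not track the frequency drift mentioned above, which is one reason adaptive removal algorithms are still an active topic.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000.0                      # assumed sampling rate in Hz (illustrative)
f_pli, q_factor = 50.0, 30.0     # European mains frequency and notch quality factor

b, a = iirnotch(f_pli, q_factor, fs=fs)

t = np.arange(0, 2.0, 1.0 / fs)
emg = 0.1 * np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * f_pli * t)  # toy sEMG + PLI
clean = filtfilt(b, a, emg)      # zero-phase filtering removes most of the 50 Hz line
```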
§.§ Muscular activation potentials
Motor units are the basic functional units of a muscle: a motor unit comprises a motoneuron and the muscle fibers it innervates. When activated, a motor unit generates a Motor Unit Action Potential (MUAP). MUAPs can be generated in sequence by repeated activations, giving rise to MUAP Trains (MUAPTs). MUAPTs are possible as long as the muscle is able to generate force.
A convenient elementary model of MUAPs can be built with the tools of linear response theory <cit.>. In the linear response model, represented in Figure <ref>, a MUAPT can be fully described with two mathematical entities: the sequence of instants of the n pulses of the train {t_i}_i = 1, ⋯, n, and the MUAP linear response function h(t), accounting for the shape of the MUAP and also called the MUAP waveform. Interpulse intervals are defined as {Δ_i = t_i+1 - t_i}_i = 1, ⋯, n-1, i.e. the time intervals between consecutive MUAPs. Each impulse is a Dirac delta impulse δ(t) (using dimensionless quantities), and the impulse train δ̄(t) is given by their sum:
δ̄(t) = ∑_i = 1^n δ(t - t_i).
The expression u(t) that describes the MUAPT as a function of time is obtained by applying the kernel h to the impulse train δ̄(t):
u = h ∗ δ̄,
which more explicitly is
u(t) = (h ∗ δ̄)(t) = ∫_-∞^+∞ δ̄(t - t') h(t') dt' = ∑_i = 1^n h(t - t_i),
where ∗ denotes convolution.
More precisely, MUAP values depend also on the force F generated by the muscle: u = u(t; F). The value of the EMG signal m(t; F) sensed at time t depends on the generated force F as well, and is given by the sum of all the MUAPTs u_unit(t; F) generated by the probed motor units:
m(t; F) = ∑_unit ∈ U u_unit(t; F),
where U is the set of the probed motor units. A comprehensive scheme following the potential from the AP to the recorded EMG signal is shown in Figure <ref>.
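A small NumPy simulation of these equations is sketched below: an impulse train is convolved with a toy MUAP kernel h(t) to obtain u(t), and a few such trains are summed to mimic m(t). The kernel shape, firing statistics and sampling rate are illustrative assumptions only.

```python
import numpy as np

fs = 10_000.0                              # assumed sampling rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)

def muap_waveform(tau=0.004, duration=0.03):
    """Toy biphasic MUAP kernel h(t); the exact shape is illustrative only."""
    tk = np.arange(0.0, duration, 1.0 / fs)
    return np.sin(2.0 * np.pi * tk / duration) * np.exp(-tk / tau)

def muapt(firing_times, kernel):
    """u(t) = (h * delta_bar)(t): convolve the impulse train with the MUAP kernel."""
    impulses = np.zeros_like(t)
    idx = np.searchsorted(t, firing_times)
    impulses[idx[idx < t.size]] = 1.0
    return np.convolve(impulses, kernel)[: t.size]

kernel = muap_waveform()
# m(t) = sum over the probed motor units U of their MUAPTs (three toy units here)
units = [np.cumsum(np.random.uniform(0.04, 0.08, size=20)) for _ in range(3)]
m = sum(muapt(t_i, kernel) for t_i in units)
```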
In both invasive and surface electromyography, the observed MUAP waveform depends on the relative position between the electrode and the active muscle fibers. This relative position may not be constant over time. The MUAP waveform and the EMG signal are also affected by any significant biochemical change in the muscle tissue: this is the case of muscle fatigue, which is a known source of variability for the EMG signal.
§ SEMG-BASED GESTURE RECOGNITION
sEMG-based gesture recognition is a promising technique for the development of Human-Machine Interfaces (HMIs). On the one hand, sEMG has the virtue of being non-invasive, which is essential for many HMIs. On the other hand, entrusting recognition, i.e. signal classification, to automated learning is a successful method for circumventing the complexity of the task by removing the need of a complete physiological understanding of the underlying motor functions.
Automated learning has produced advances with a rich variety of approaches. Whereas the central goal of gesture recognition is signal classification, many other machine learning techniques have been successfully exploited as auxiliary tasks to assist classification at either training or evaluation time, increasing classification accuracy. Examples of auxiliary tasks (detailed in the following subsections) are force estimation by means of regression, and semi-supervised and unsupervised methods for calibration.
However, classical machine learning still requires field-specific knowledge, such as tested and established feature extracting procedures to maximize discriminative power, recourse to the suitable signal domain (time domain, frequency domain, or time-frequency domain), and empirical expertise about the portability of preprocessing and extraction procedures across datasets acquired with different experimental setups.
This need for field-specific knowledge can be overcome by migrating to deep learning algorithms, capable of feature learning, i.e. of learning good feature representations autonomously, provided sufficient data. The success of deep learning models has even driven the ambition of reversing the perspective: once a satisfactory deep classifier is trained, model explainability can be leveraged to shed light a posteriori on the feature representations, or even on the physiological understanding, that was lacking a priori. Steps toward demonstrating the feasibility of this approach have already been taken in the related field of electroencephalogram (EEG)-based gesture recognition for Brain-Computer Interfaces (BCIs): for instance, Nurse et al. <cit.> have shown correspondences between channel-time domain filters learned by the first layer of CNNs and known activation patterns of brain regions.
§.§ Classical Machine Learning approach
Classical Machine Learning (classical ML), also referred to as traditional or conventional machine learning, is the broad class of non-deep algorithms which includes k-Nearest Neighbors (kNN), Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and Random Forests (RF) (all with the related kernel methods), and Multi-Layer Perceptron (MLP) with one hidden layer. In the framework of sEMG-based gesture recognition based on classical ML, the pipeline typically consists of data acquisition, data preprocessing, feature extraction, feature selection, model definition, and inference <cit.>.
However, a major disadvantage of the classical ML pipeline is the strong reliance on domain-specific knowledge, needed for feature extraction, feature selection and parameter tuning. The reason why deep learning is appealing for sEMG-based gesture recognition is that it loosens these requirements, replacing feature extraction with feature learning incorporated in the algorithm training.
The general problem of finding good discriminative hand-crafted features is so hard that the quest for better ways to capture the temporal and frequency information of the signal has characterized decades of research. A clear overview of the typical sEMG features identified and used can be found in <cit.>; they can be summarized by domain as follows (a small extraction sketch is given after the list):
* time domain: Root Mean Square (RMS), variance, Mean Absolute Value (MAV), Zero Crossings (ZC), Slope Sign Changes (SSC), waveform length, histogram;
* frequency domain: Short Time Fourier Transform (STFT), cepstral coefficients;
* time-frequency domain: Marginal Discrete Wavelet Transform (MDWT).
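The sketch announced above computes most of the time-domain features of this list for one analysis window (the histogram is omitted for brevity); the zero thresholds used for the zero-crossing and slope-sign-change counts are a simplifying assumption.

```python
import numpy as np

def time_domain_features(window):
    """Classical time-domain sEMG features for one analysis window (1-D array)."""
    rms = np.sqrt(np.mean(window ** 2))            # Root Mean Square
    var = np.var(window)                           # variance
    mav = np.mean(np.abs(window))                  # Mean Absolute Value
    sign = np.sign(window)
    zc = np.sum(sign[:-1] * sign[1:] < 0)          # Zero Crossings (zero threshold)
    d = np.diff(window)
    dsign = np.sign(d)
    ssc = np.sum(dsign[:-1] * dsign[1:] < 0)       # Slope Sign Changes (zero threshold)
    wl = np.sum(np.abs(d))                         # waveform length
    return np.array([rms, var, mav, zc, ssc, wl])

# one feature vector per channel of a (n_samples, n_channels) sEMG window
window = np.random.randn(200, 4)
features = np.concatenate([time_domain_features(window[:, c]) for c in range(4)])
```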
Two significant studies on sEMG classification with traditional ML techniques were made by Hudgins in <cit.> and Englehart and Hudgins <cit.>, who studied the classification problem of 4 hand gestures and obtained a classification accuracy higher than 90% by working with 200 segments of 4-channel sEMG signals, extracting 5 time-domain features and feeding them to MLP and LDA classifiers, robustifying the assignment by applying a majority-vote window to the predictions. Instead, Castellini et al. <cit.> worked on 3 types of grasp motions and achieved a 97.1% classification accuracy using the RMS value from 7 electrodes as the input to an SVM classifier.
The first successes in the classification of a large number of hand gestures were obtained by Kuzborskij et al. <cit.>, using any of the proposed features in both the time and frequency domain, fed to an SVM classifier with Radial Basis Function (RBF) kernel, and reaching a 70-80% accuracy on the 52 hand gestures of the 8-channel database Non-Invasive Advanced Prosthetics (NinaPro) presented by Atzori et al. in <cit.>. These results were improved by Atzori et al. <cit.> by considering linear combinations of features and using an RF classifier, resulting in an average accuracy of 75.32%; and further improved by Gijsberts et al. in <cit.>, by evaluating different kernel classifiers jointly on EMG and acceleration signals, increasing classification accuracy by 5%.
§.§ Deep Learning Revolution
In recent years, sEMG-based gesture recognition has seen a progressive shift from traditional machine learning to deep learning. Deep Learning (DL) is the class of machine learning algorithms that differ from classical ML approaches in that feature extraction is part of the model definition, therefore obviating the need for hand-crafted features.
Existing deep learning architectures are mainly based on two kinds of model: the Convolutional Neural Network (CNN), successfully deployed for image classification due to its ability to capture spatial (but also temporal) information of the signal, and the Recurrent Neural Network (RNN), which exploits the sequential nature of the data and has had successes in speech recognition. The work of this master thesis is exclusively focused on the CNN architecture. Although deep algorithms themselves are not new, they are the most computationally demanding, so deep classifiers gained attention only relatively recently, thanks to the increased availability of data and powerful improvements in computing hardware <cit.>.
The advantage of deep architectures is their ability to incorporate feature learning: a consistent part of the traditional pipeline can be entrusted to the training of the algorithm, which has enough capacity to learn good feature representations on its own. This mitigates the reliance on rigid combinations of preprocessing steps and precise sets of hopefully discriminative features.
The first end-to-end DL architecture was proposed by Park and Lee <cit.>, who built a CNN-based model for the classification of six common hand movements resulting in a better classification accuracy compared to SVM. Atzori et al. <cit.> proposed a simple CNN architecture based on 5 blocks of convolutional and pooling layers to classify the 52 hand gestures from the NinaPro database <cit.>, reaching classification accuracy comparable to those obtained with classical methods, though not higher than the best performance achieved on the same problem using a RF classifier.
Geng et al. <cit.> and Wei et al. <cit.> improved the results across various datasets by adding batch normalization and dropout (layer types explained in Subsections <ref> and <ref>, respectively) to the model architecture, and by using a high-density electrode array, thus benefiting from the setup of High-Density sEMG (HD-sEMG). Building the analysis on instantaneous EMG images, <cit.> achieves an 89.3% accuracy on a set of 8 movements, going up to 99.0% when using majority voting over 40 signal windows. In <cit.> it is shown that for some movements a significant role is played by a small group of muscles, and therefore a multi-stream CNN architecture is used that divides the inputs into smaller images, to be separately processed by convolutional layers before being merged with fully connected layers, reporting an increase in accuracy of 7.2% (from 77.8% to 85%).
Successful approaches also involve multi-modality data (also called sensor fusion): Du et al. <cit.> exploit the auxiliary information of a data glove to add semi-supervised auxiliary tasks (regression on glove signals) to a CNN, reporting improved classification accuracy.
§.§ State-of-the-art on the variability factors
Currently, the state of the art of sEMG-based gesture recognition is coping with challenging issues. The sEMG signal is severely affected by many variability factors such as differences between subjects, fatigue, user adaptation and the variability introduced by the re-positioning of the electrodes at each session of data collection. These issues limit long-term use and temporal reliability of the devices relying on sEMG analysis.
In the framework of machine learning, these factors of variability can be modeled with the concept of data sources, i.e. data subsets coming from different distributions (the concept of multi-source data is defined in Subsection <ref>). The multi-source nature of the data makes machine learning on EMG data a challenging problem, where the ambition is to improve the inter-source generalization capability of classifiers. Examples of this are the inter-posture, inter-session and inter-subject scenarios. Up to now, for instance, the performance with Leave-One-Subject-Out cross-validation (LOSOCV) is still much lower than that reached with Within-Subject cross-validation (WSCV) on the main public benchmark databases <cit.>.
Some studies have already dealt with inter-subject variability, resorting to recalibration techniques or model adaptation methods, often building on the technique of batch normalization (explained in Subsection <ref>). The network proposed in <cit.> takes as input downsampled spectrograms (i.e. time-frequency representations) of sEMG segments, and improvement is achieved by updating the network weights using the predictions of previous sessions corrected by majority voting. In <cit.> it is assumed that, while the weights of each layer of the network learn information useful for gesture discrimination, the mean and variance of the batch normalization layers store information related to discrimination between sessions/subjects. Moreover, a variation of batch normalization called Adaptive Batch Normalization (AdaBN) is used in <cit.>: only the normalization statistics are updated for each subject using a few unlabeled data, improving performance with respect to a model without adaptation.
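To make the AdaBN idea concrete, the PyTorch sketch below refreshes only the batch-normalization running statistics on unlabeled data from a new subject or session, keeping all learned weights frozen; the reset policy and the cumulative moving average are assumptions of this sketch, not details taken from the cited works.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adabn_recalibrate(model: nn.Module, unlabeled_loader):
    """AdaBN-style adaptation sketch: update only BatchNorm running statistics."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()
            m.momentum = None          # cumulative moving average over the new data
    model.train()                      # BN layers update their stats only in train mode
    for x in unlabeled_loader:         # unlabeled windows from the new subject/session
        model(x)
    model.eval()
```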
In <cit.> transfer learning techniques are used to exploit inter-subject data learned by a pre-trained source network. In this architecture, for each subject a new network is instantiated with weighted connections to the source network, and predictions for a new subject are based both on previously learned information and subject-specific data. Doing so achieves an accuracy of 98.3% on 7 movements.
§.§ State-of-the-art on the Unibo-INAIL dataset
The state of the art of sEMG-based hand gesture recognition on the Unibo-INAIL dataset is presented by Milosevic et al. in <cit.>. The dataset itself represents the state of the art with regard to databases able to explore inter-subject, temporal, and postural variability, and is extensively described in Section <ref>.
Knowledge about the performance of classifiers on the dataset is limited to classical machine learning classifiers, namely Support Vector Machine (SVM), Neural Network (NN) with only one hidden layer, Random Forest (RF) and Linear Discriminant Analysis (LDA). However, these classifiers have been used to explore a large number of data partitions and training strategies, trying to optimize the generalization capability on new arm postures and new days.
The evaluated algorithms have similar recognition performance, higher than 90% (precise values depending on data partition and training strategy). The Radial Basis Function (RBF)-kernel SVM is at present the one achieving the highest accuracy, both intra-session and inter-session.
With regard to inter-session generalization, the baseline classification accuracy was higher than 90% for intra-posture and inter-day analysis, suffering a degradation of up to 20% when testing on data from different postures or days. This is consistent with the inter-session accuracy decline shown for other datasets (e.g. the 27% morning-to-afternoon decline reported for the NinaPro Database 6 by Palermo et al. <cit.>). The work showed that this inter-posture and inter-day accuracy decline is mitigated by training with combinations of data from multiple sessions. Moreover, results on temporal variability show a progressive user adaptation trend and indicate that (re)training strategies should prioritize the availability of recent data.
However, despite the variety of strategies and the number of results, the state of the art has two limitations. The first limitation is that, as said above, the state of the art on this dataset is limited to classical machine learning algorithms. The second limitation concerns data preprocessing: all the algorithms are applied on singled out, instantaneous signal values (i.e. 4-dimensional data points whose only dimension is channel number), so that the sequential nature of the EMG signal is not exploited at all. This master thesis addresses both these limitations.
CHAPTER: DEEP LEARNING AND CONVOLUTIONAL NEURAL NETWORKS
Deep learning is the sub-field of machine learning which deals with deep neural networks, i.e. neural networks having more than one hidden layer. Deep learning is currently the fundamental approach for the development of artificial intelligence. One of the most important deep neural network architectures is the convolutional neural network, which is the model used in this work.
This chapter is structured as follows:
• Section <ref> introduces neural networks and deep learning, contextualizing them with respect to machine learning and artificial intelligence;
• Section <ref> provides the fundamentals of deep network training, and illustrates some specific techniques and settings used in this work;
• Section <ref> introduces convolutional neural networks, explaining the essential layer types (i.e. convolutional and fully connected) and also introducing the other particular layers used in this work.
§ NEURAL NETWORKS AND DEEP LEARNING
Deep learning (DL) is the branch of Machine Learning (ML) which is nowadays the base of the research and applications of Artificial Intelligence (AI) <cit.>, and in the last decade has produced vast advances in many fields such as speech recognition <cit.> and image recognition <cit.>, cancer detection <cit.>, self-driving vehicles <cit.>, and playing complex games <cit.>. In some applications, DL models already exceed human performance.
The superior performance of DL models resides in the ability to incorporate automated, data-driven feature extraction and selection, i.e. the ability to extract high-level features from raw data by using statistical learning on large datasets. This approach produces effective representations of the input space, based on powerful discriminative features, in a different way from the earlier approaches based on non-deep ML, which are built on hand-crafted features or rules designed by expert researchers.
It is important to contextualize clearly the relationship DL has with AI and ML, as illustrated in Figure <ref>. With respect to the broad domain of AI, ML can be considered a vast sub-domain. Inside ML is the area referred to as brain-inspired computation, whose interest is the development of programs or algorithms whose basic functionality is inspired by natural brains and which aim to emulate some aspects of how we currently understand the brain works <cit.>. Brain-inspired computing is divided into two main branches: spiking computing and DL. Spiking computing, which is beyond the scope of this work, takes inspiration from the fact that communication between neurons happens via spike-like pulses, with information not simply coded in the spikes' amplitude, but also in their timing. In contrast, the branch of brain-inspired computing relevant here is Neural Networks (NNs), which are a well-known ML algorithm.
Neural networks are inspired by the fact that the signal processing performed by a neuron can be modeled as a weighted sum of the input activations, followed by a non-linear function with threshold and bias, which generates the neuron's output signal <cit.> as illustrated in Figure <ref>. By analogy, neural networks are built assembling units (also referred to as neurons) which apply a non-linear function to the weighted sum of the input values they receive.
Neural Networks, like the computations they execute, are structured as layers of units, and an example scheme is shown in Figure <ref>. The neurons belonging to the input layer receive the raw input values, and propagate them to the neurons of the middle layer, called the hidden layer. The weighted sums computed by one or more hidden layers ultimately reach the output layer, whose units compute the final output of the network. Taking the brain-inspired terminology further, the neuron outputs are referred to as activations, and the weights are sometimes called synapses. In addition to the network structure, Figure <ref> also shows the computations made at each layer, which follow the formula
y_j = f(∑_i=1^3 w_ij x_i + b_j),
where x_i are the input activations, w_ij are the weights, b_j are the bias terms, y_j are the output activations, and f(·) is the non-linear activation function.
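A direct NumPy transcription of this formula for one layer (the sizes and values are illustrative):

```python
import numpy as np

def dense_layer(x, W, b, f=np.tanh):
    """y_j = f(sum_i w_ij * x_i + b_j), computed for a whole layer at once."""
    return f(x @ W + b)

x = np.array([0.5, -1.0, 2.0])        # three input activations x_i
W = np.random.randn(3, 4)             # weights w_ij: 3 inputs -> 4 units
b = np.zeros(4)                       # biases b_j
y = dense_layer(x, W, b)              # four output activations y_j
```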
Deep learning is the field of Neural Networks which focuses on Deep Neural Networks (DNNs), which are networks having more than one hidden layer, or, equivalently, more than three layers in total (typically, from five to more than a thousand <cit.>). DNNs have the ability to learn high-level features with more complexity and abstraction than shallow (i.e. non-deep) neural networks. For instance, in image processing, pixels of an image are fed to the first layer of a DNN, whose outputs can be interpreted as representing the presence of low-level features in the image, such as lines and edges. Subsequent layers progressively combine these features, eventually yielding a measure associated to the presence of higher level features (e.g. edges are combined into shapes, then into sets of shapes). As final output, the network produces an estimate of the probability that the highest-level features comprise a particular object or scene. This paradigm is referred to as deep feature hierarchy, and is what gives the DNNs the ability to obtain superior performance.
§ TRAINING DEEP NETWORKS: GRADIENT DESCENT AND BACK-PROPAGATION
In a DNN, as in machine learning algorithms in general, there is a basic program that does not change while learning to perform a desired task. For DNNs, the basic program is the structure of the functions implemented by the layers, and learning consists in determining the values of the network's weights and biases, through an optimization called training. Once trained, the network can execute its task by computing the output using the optimized weights and biases. Running the trained network to evaluate inputs is referred to as inference.
The training scenario relevant in this work is supervised training for a classification task. In classification tasks, trained DNNs receive input data (e.g. the pixels of an image) and return a vector of scores, one for each class. The highest-score class is the one the network estimates as most probable. The main goal of DNN training is to optimize the weights and the bias values so as to maximize the score of the correct class and minimize the scores of the incorrect classes. In supervised learning, the correct class of the data the network is trained on is known. The dissimilarity between the ideal correct scores and the scores computed by the DNN (based on its current weights and biases) is called the loss L, which is the objective function of the training, i.e. the function to minimize.
A widely used algorithm for weight optimization is a hill-climbing iterative optimization procedure called gradient descent. In gradient descent, at each iteration t the weights w_ij are updated by subtracting a multiple of the gradient of L with respect to the weights. Element-wise:
w_ij^(t+1) = w_ij^(t) - α ∂L/∂w_ij,
where the multiplication factor α is called the learning rate. Iterating reduces L.
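A minimal NumPy sketch of this update rule on a toy quadratic loss (the loss, learning rate and iteration count are illustrative):

```python
import numpy as np

def gradient_descent_step(weights, grads, lr=1e-2):
    """One iteration of w_ij <- w_ij - lr * dL/dw_ij for every parameter group."""
    return {name: w - lr * grads[name] for name, w in weights.items()}

# toy example: minimize L(w) = ||w||^2, whose gradient is 2w
weights = {"w": np.array([1.0, -2.0, 0.5])}
for _ in range(100):
    grads = {"w": 2.0 * weights["w"]}
    weights = gradient_descent_step(weights, grads, lr=0.1)
print(weights["w"])   # close to zero after 100 iterations
```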
In contrast with weights and biases, which are referred to as model parameters since they are the arguments of the objective function to minimize, the learning rate ฮฑ is not involved in differentiation and hence is a training hyper-parameter, i.e. a constant that regulates the optimization without being affected by training. As the hyper-parameters describing the structure or training of any machine learning model, ฮฑ can be tuned to the optimal value through cross-validation (which can be computationally expensive) or through a shorter preliminary analysis (as done in this work), with a further speed-up commonly yielded by following heuristics instead of grid search. The optimal ฮฑ is typically small (some orders of magnitude below unity), but the its order of magnitude is strongly dependent on data, task, and model architecture. Hence the search must cover different orders of magnitude.
Finer techniques for tuning the learning rate are scheduling and recourse to per-parameter learning rates. (1) With scheduling, α is reduced over the iterations, typically with a stepwise or exponential decay, in order to adapt the tuning to different moments of the gradient descent. (2) On the other hand, introducing per-parameter learning rates allows to perform a customized tuning on different parameters or groups of parameters. Both techniques come at the cost of an increased number of combinations to explore.
An efficient procedure for computing the partial derivatives of the loss is back-propagation, which derives from the
chain rule of calculus and operates by passing values backwards through the network to compute how the loss is affected by each weight. Computation using back-propagation, illustrated in Figure <ref>, requires some steps used also for inference. To back-propagate through each layer, one has to: (1) compute the gradient of L relative to the weights from the layer inputs (i.e., the forward activations) and the gradient of L relative to the layer outputs; (2) compute the gradient of L relative to the layer inputs from the layer weights and the gradients of L relative to the layer outputs. It is worth to note that back-propagation requires intermediate activations to be preserved for the backward computation, so that training has increased storage requirements compared to inference.
Another popular training method, orthogonal to the techniques exposed so far, is fine-tuning. In fine-tuning, the weights of a previously trained network are available and are used as a starting point for the iterative optimization. This practice results in faster training than starting from random initialization. Moreover, the scenario in which the weights are adjusted for a new dataset falls within the sub-field of machine learning known as transfer learning.
More specific training settings and techniques used in this work are explained in the next subsections.
ยง.ยง Cross-entropy loss
When training neural networks, a convenient choice for the objective function (adopted also in this work) is the cross-entropy function. Originating in the field of probability theory and information theory, the main virtue of cross-entropy is that it leverages the soft assignments produced by the network, interpreting them as probabilities. Given a multiclass classification task on C classes, the loss for the observation x_i is computed by summing the losses due to each soft assignment ŷ_ic with respect to the true label y_ic (i.e. the one-hot encoding):
loss(x_i) = - ∑_{c=1}^{C} y_ic log ŷ_ic,
with arbitrary choice for the base; e.g., taking the logarithm in base 2 yields a result in bits, whereas choosing base e yields a result in nats. The overall loss L is then computed by averaging over all the items to classify. When working with an unbalanced dataset, it can be useful to weight the elements by class when taking the average.
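A small NumPy sketch of this computation (illustrative names only; the optional per-class weighting mentioned above is included as an argument) is:

import numpy as np

def cross_entropy_loss(y_onehot, y_prob, class_weights=None, eps=1e-12):
    """Average cross-entropy in nats (natural logarithm)."""
    per_sample = -np.sum(y_onehot * np.log(y_prob + eps), axis=1)
    if class_weights is not None:
        # weight each observation by the weight of its true class
        per_sample *= class_weights[y_onehot.argmax(axis=1)]
    return per_sample.mean()

# Two observations, C = 3 classes
y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_soft = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(cross_entropy_loss(y_true, y_soft))   # about 0.43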
ยง.ยง Stochastic gradient descent with mini-batches
Stochastic Gradient Descent (SGD) is a variation of gradient descent which helps avoiding local minima during training. This technique does so by introducing randomness in the optimization process: the training set, referred to as the batch, is randomly partitioned into B equal-sized subsets called mini-batches. Then, gradient descent is performed using a single mini-batch per iteration, cycling over all the mini-batches. The sequence of B iterations that processes all the mini-batches exactly once is referred to as an epoch. Thus, during an epoch, all the data contribute to the optimization to the same amount. At the end of each epoch, the split into mini-batches is re-randomized, in order to prevent systematic bias generated by a particular order of presentation of the data.
It is important to note that the avoidance of local minima provided by SGD comes at the cost of introducing a new training hyper-parameter, the batch size b (or, equivalently, the number of batches B).
Since some sources reserve the name SGD for the extreme case b = 1, in the remainder of this work the term SGD with mini-batches will be used to avoid confusion.
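The procedure can be sketched as follows (a schematic NumPy implementation with illustrative names; grad_fn stands for any function returning the mini-batch gradient):

import numpy as np

def sgd_with_minibatches(X, y, w, grad_fn, alpha=0.01, B=50, n_epochs=10, seed=0):
    """Mini-batch SGD: B mini-batches per epoch, re-randomized at every epoch."""
    rng = np.random.default_rng(seed)
    for epoch in range(n_epochs):
        perm = rng.permutation(len(y))              # re-randomize the split
        for batch in np.array_split(perm, B):       # one update per mini-batch
            w = w - alpha * grad_fn(X[batch], y[batch], w)
    return w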
ยง.ยง L_2 regularization
The technique of L_2 regularization takes its name from the L_2-norm and is a method to counter overfitting. It consists in adding to the loss a multiple of the squared L_2-norm of the vector of all the parameters, and using the new formula L_reg as objective function. In formulas, for a deep neural network:
L_reg = L_0 + L_L_2 = L_0 + λ_L_2 ∑_{l=2}^{N_layers} ∑_{i=1}^{n_{l-1}} ∑_{j=1}^{n_l} |w_ij^(l)|^2,
where L_0 is the non-regularized loss, L_L_2 is the regularization term, N_layers is the number of layers, n_l is the number of neurons in layer l, and w_ij^(l) is the weight connecting neuron i of layer l - 1 to neuron j of layer l. The factor λ_L_2 is the training hyper-parameter regulating the amount of regularization enforced. Too low a λ_L_2 has no effect, and too high a λ_L_2 incurs underfitting. The optimal value can be determined via preliminary analysis or cross-validation, covering different orders of magnitude.
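In code, the regularized objective can be sketched as follows (an illustrative NumPy version; λ_L_2 = 0.05 is the value later adopted in this work):

import numpy as np

def l2_regularized_loss(loss_0, weight_matrices, lambda_l2=0.05):
    """L_reg = L_0 + lambda_l2 * sum of squared weights (biases excluded)."""
    l2_term = sum(np.sum(W ** 2) for W in weight_matrices)
    return loss_0 + lambda_l2 * l2_term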
ยง CONVOLUTIONAL NEURAL NETWORKS
Deep networks have a vast variety of architectures and sizes, continuously evolving to increase performance. Convolutional neural networks are a successful class of architectures, which can be introduced only after surveying the strategies used to progressively reduce the storage and computation required by layers: sparsity, structured sparsity, weight sharing, and convolution.
Deep networks can be entirely composed of fully-connected (FC) layers, shown in Figure <ref>, and in this case they are called Multi-Layer Perceptrons (MLPs). In a FC layer, all output units are connected to all input units, so that all output activations are computed with a weighted sum of all input activations. Although the FC configuration requires significant computation and storage, in many situations it is possible to zero some weights (thus removing the relative connections) without impacting performance. The resulting layer, also shown in Figure <ref>, is called a sparsely connected layer.
Opposed to generic sparsity, structured sparsity is the configuration in which each output is only a function
of a fixed-size window of inputs. Even further efficiency is gained when the computation of every output employs the same set of weights. This configuration is known as weight sharing, and strongly reduces the storage requirements for weights. A particular case of weight sharing arises when the computation is structured as a convolution, as shown in Figure <ref>: the weighted sum for each output activation is computed using only a narrow neighborhood of input activations (by zeroing the weights outside the neighborhood), and every output shares the same set of weights (i.e., the filter is space invariant). This gives rise to convolutional (CONV) layers, which are the characteristic building block of convolutional neural networks.
Convolutional Neural Networks (CNNs) are a successful deep network architecture, composed of multiple CONV layers, as shown in Figure <ref>. In CNNs, each layer generates a progressively higher-level representation of the input data, referred to as a feature map (fmap), extracting the essential information for the network's task. Modern CNNs have attained superior performance by implementing a very deep hierarchy of layers. CNNs are widely used in a variety of applications including image understanding <cit.>, speech recognition <cit.>, robotics <cit.> and game play <cit.>. In this work, CNNs are applied to the task of classifying time windows of a 4-channel signal.
CNNs fall into the category of feed-forward networks. In a feed-forward network, all computations are executed as a sequence of operations taking place from one layer to the next one. A feed-forward network has therefore no memory, and the output for a given input is always identical irrespective of the history of the inputs fed previously.
Each CONV layer is mainly constituted by high-dimensional convolutions, as shown in Figure <ref>. In this computation, the input activations of a layer have the structure of a set of 2d input feature maps (ifmaps), each referred to as a channel. Each channel is convolved with a distinct 2d filter from the stack of filters, one for each channel. The stack of 2d filters being a 3d structure, it is sometimes collectively called a 3d filter. The results of the convolutions at each point are summed across channels, and a 1d bias is optionally <cit.> added to the results. The result of this computation are the output activations constituting one channel of the output feature map (ofmap). Additional output channels can be created by applying additional 3d filters on the same input.
With the notation for shape parameters defined in Table <ref>, the computation executed by a CONV layer is described by the formula
O[z][u][x][y] = B[u] + ∑_{k=0}^{C-1} ∑_{i=0}^{S-1} ∑_{j=0}^{R-1} I[z][k][Ux + i][Uy + j] × W[u][k][i][j],
with
0 ≤ z < N, 0 ≤ u < M, 0 ≤ x < F, 0 ≤ y < E,
E = (H - R + U)/U, F = (W - S + U)/U,
where O, I, W and B are the matrices of ofmaps, ifmaps, filters and biases, respectively, U is a fixed stride size. A graphical representation of this computation is shown in Figure <ref> (where biases are omitted for simplicity).
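A direct, naive translation of this formula into Python loops (didactic only, far from an efficient implementation; shapes follow the index order of the formula, and all names are illustrative) is:

import numpy as np

def conv_layer(I, Wf, B, U=1):
    """Naive evaluation of the CONV-layer formula above.
    I:  ifmaps,  shape (N, C, W, H), indexed as I[z][k][Ux+i][Uy+j]
    Wf: filters, shape (M, C, S, R), indexed as Wf[u][k][i][j]
    B:  biases,  shape (M,);  U: stride."""
    N, C, Wdim, H = I.shape
    M, _, S, R = Wf.shape
    F = (Wdim - S + U) // U
    E = (H - R + U) // U
    O = np.zeros((N, M, F, E))
    for z in range(N):
        for u in range(M):
            for x in range(F):
                for y in range(E):
                    acc = B[u]
                    for k in range(C):
                        for i in range(S):
                            for j in range(R):
                                acc += I[z, k, U * x + i, U * y + j] * Wf[u, k, i, j]
                    O[z, u, x, y] = acc
    return O

# Tiny usage example: 1 ifmap with 4 channels, 2 output channels, 3x3 filters
out = conv_layer(np.random.randn(1, 4, 10, 10), np.random.randn(2, 4, 3, 3), np.zeros(2))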
To align the CNN terminology with the general DNN terminology, is it worth remarking that
* filters are composed of weights (corresponding to synapses in nature);
* input and output feature maps (ifmaps, ofmaps) are composed of activations of inputs and output neurons.
In the CNN used in this work, the CONV layers are CONV-1d layers, i.e. layers performing a 1-dimensional convolution. Though less common, CONV-1d layers follow the same principles as CONV-2d layers, but work with inputs having 1 dimension (plus channel number). The choice to use CONV-1d layers was made in order to act on the time dimension of the signal, while using the 4 sEMG channels as CNN input channels. In addition to CONV layers and FC layers, the CNN implemented in this work contains other elements, namely the rectified linear unit, batch normalization and dropout, whose mechanisms are explained in the following two subsections. These three kinds of intervention on activations are sometimes conceptualized as layers.
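As a small PyTorch illustration of this choice (the 64 filters of size 3 exemplify the first convolutional layer of the architecture used in this work; the input tensor is random stand-in data):

import torch
import torch.nn as nn

# One 150 ms window of the 4-channel sEMG signal: 75 time samples x 4 channels.
# The 4 sEMG channels act as CNN input channels; time is the 1d spatial axis.
x = torch.randn(1, 4, 75)                      # (batch, channels, time)

conv = nn.Conv1d(in_channels=4, out_channels=64, kernel_size=3)
print(conv(x).shape)                           # torch.Size([1, 64, 73])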
ยง.ยง Batch-normalization
Controlling the input distribution across layers can speed up training and improve accuracy. Accordingly, the distribution of a layer's input activations (described by its mean μ and standard deviation σ) can be standardized to zero mean and unit standard deviation. Batch Normalization (BN) <cit.> is the technique in which the standardized activations are further scaled and shifted, undergoing the transformation
y = (x - μ)/√(σ^2 + ε) · γ + β,
where the parameters γ and β are learned from training, and ε is a small constant used to avoid numerical problems. Batch normalization is mostly applied between the CONV or FC layer and the non-linear activation function, and is usually turned off after training.
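A minimal NumPy sketch of this transformation for a mini-batch of activations of a single feature (training-time statistics only, with illustrative values) is:

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standardize a mini-batch of activations, then scale by gamma and shift by beta."""
    mu, sigma2 = x.mean(), x.var()
    x_hat = (x - mu) / np.sqrt(sigma2 + eps)
    return gamma * x_hat + beta

batch = np.array([0.5, 2.0, -1.0, 3.5])
print(batch_norm(batch, gamma=1.0, beta=0.0))  # zero mean, unit variance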
ยง.ยง ReLU activation function
Non-linear activation functions are typically applied after each CONV or FC layer. The most common functions used to introduce non-linearity into a DNN are shown in Figure <ref>. Historically, the sigmoid and the hyperbolic tangent are the most conventional, while the Rectified Linear Unit (ReLU) has become common in the last years due to its simplicity and its ability to make training faster <cit.>. The leaky ReLU, parametric ReLU, and exponential LU are variations of the ReLU explored for increased accuracy. ReLU is mostly applied after the CONV or FC layer or after batch normalization (if present).
ยง.ยง Dropout
Dropout is a technique to improve accuracy by reducing overfitting <cit.>. It works by randomly dropping units (and their connections) from the network during training. This prevents the phenomenon of units co-adaptation, forcing each neuron to learn a feature helpful for computing the correct output.
In particular, the random dropout of the units of the interested layer during training is regulated by the drop probability p, for which a common value is p = 0.5. On every forward pass, each unit is zeroed out independently and randomly, drawing from the Bernoulli distribution parameterized by p. Moreover, the outputs are multiplied by 1/(1 - p). Dropout is active only during training: during inference, the dropout layer is disabled (or, equivalently, the drop probability p is set to 0). Dropout is mostly applied immediately before the CONV or FC layer.
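The training/inference behaviour described above can be observed directly with PyTorch's dropout module (p = 0.5 as used later in this work; the printed training-mode output is random):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()      # training mode: units dropped at random, survivors scaled by 1/(1-p)
print(drop(x))    # e.g. tensor([2., 0., 2., 2., 0., 2., 0., 2.])

drop.eval()       # inference mode: dropout disabled
print(drop(x))    # tensor([1., 1., 1., 1., 1., 1., 1., 1.])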
CHAPTER: MATERIALS AND METHODS
This chapter explains all the materials and methods used in this master thesis. It is structured as follows:
- Section <ref> exhaustively describes the Unibo-INAIL dataset;
- Section <ref> details the implemented machine learning pipeline, preprocessing, CNN architecture and training settings;
- Section <ref> defines the concept, essential in this work, of training strategy based on training set composition, and illustrates the training strategies used.
ยง UNIBO-INAIL DATASET
The work of this master thesis is entirely focused on the Unibo-INAIL dataset. The Unibo-INAIL dataset is a surface electromyography (sEMG) dataset realized to explore the impact of arm posture and temporal variability (either alone or combined) on sEMG-based hand gesture recognition. The dataset was presented by Milosevic et al. in <cit.>, and was built on the preliminary analysis carried out by Benatti et al. in <cit.>. This master thesis evaluates the performance of Convolutional Neural Networks (CNNs) trained on the dataset with all the training strategies described in <cit.>, where the performance of classical machine learning algorithms is reported.
The data were acquired from 7 able-bodied (i.e. non-amputee) subjects performing 6 discrete hand gestures in 4 arm postures, repeated for 8 days, thus probing a total of 224 different data sources, each identified by three discrete indexes (subject, day and arm posture) and containing 6 classes, i.e. five hand gestures plus the rest position. This acquisition protocol, in addition to investigating the signal patterns associated to the 6 classes, allows to characterize the following sources of variability affecting signal and patterns:
* inter-posture variability: for each subject individually, keeping sensors on, different arm postures and wrist orientations cause variability in signal and patterns, due to differences in muscle activity and muscle position (since sensors are only fixed with respect to the skin);
* inter-day variability: for each subject individually, temporal variability of signal and patterns is mainly due to two factors:
* sensor placement: from day to day, the sensors are removed and repositioned, causing differences in signal and patterns, due to the change of the relative position of the electrodes with respect to the muscles;
* user adaptation: user adaptation (explained in Subsection <ref>) is the transient observed when the inter-day differences in gesture execution decrease over time, due to the fact that new users adapt to the repetitive exercise during the first days;
* inter-subject variability: signal and patterns are influenced by the anatomical variability between subjects (even if all able-bodied).
For research on sEMG-based hand gesture recognition for reliable Human-Machine Interfaces (HMIs), the Unibo-INAIL dataset is a valuable ground for two reasons. The first reason is that the acquisition setup is based on commercial sensors, chosen so as to make the setup easily repeatable and thus suitable for integrated HMI controllers <cit.>. The second reason is that the Unibo-INAIL dataset is the first public sEMG dataset to date to include both arm-postural and session variability, providing a realistic scenario for evaluating classification algorithms and training approaches.
ยง.ยง Unibo-INAIL collaboration and motivation for the dataset
The Unibo-INAIL dataset is the results of a research project funded by the Istituto Nazionale per l'Assicurazione contro gli Infortuni sul Lavoro (INAIL), in which the University of Bologna (Unibo) was designated for assessing the feasibility of real-time control of poly-articulated hand prostheses by means of pattern recognition algorithms implemented on a microcontroller. The project was inspired by a previous study by Castellini et al. <cit.> on intuitive prosthesis control, which demonstrated that SVMs can recognize different muscle activation patterns with high precision. In particular, the SVMs classify gestures up to a precision of 95% and approximate the forces with an error of as little as 7% of the signal range, sample-by-sample at 25.
The first part of the project aimed to assess how accurate the recognition can be on diverse data produced in different conditions (i.e. intra-session scenario), and to verify whether the system was stable on different sessions (i.e. inter-session generalization). Although in prosthetics sensor (re)positioning is less relevant, since sensors are fixed to the prosthesis and thus much less mobile, this variability factor was included, planning future developments.
The second branch of the project involved the validation of the algorithms in real-time control scenario. The controller was implemented on a microcontroller, in a system in which the real-time, fresh data were acquired with the same embedded setup used for the first part of the project, in order to reproduce the previous system. The algorithm implementation was also made identical by using the open source machine learning library LIBSVM, which is implemented for both Matlab and C <cit.>.
This master thesis continues the first part of the project, aiming to expand the results obtained for classical machine learning algorithms (SVM, shallow NN, RF and LDA) applied on instantaneous 4-channel signal values. This work extends the analysis to deep CNNs applied on 150 ms time windows of the 4-channel signal.
ยง.ยง Outline of acquisition setup and experimental protocol
The acquisition setup, shown in Figure <ref>, was designed to be reliable and repeatable, with characteristics typical of prosthetic applications. However, since all the data are collected from able-bodied (i.e. non-amputee) subjects, the dataset is useful for any HMI application.
The acquisition setup is based on the Ottobock 13E200 pre-amplified single-ended sEMG electrode (Figure <ref>), which is a commercial sensor. It amplifies and integrates the raw EMG signal to reach an output span of 0-3.3V.
The sensors have bandwidth spanning 90-450Hz and integrate an analog notch filter to remove the noise due to Power Line Interference (PLI), i.e. the capacitive coupling between the subject and the surrounding electrical devices and power grid (detailed in Section <ref>). The output analog signals were acquired with a custom embedded board based on a microcontroller equipped with an internal 16-bit ADC. The digitalized signals were streamed via Bluetooth to a laptop, for storage and off-line data analysis.
The subjects involved were able-bodied (i.e. non-amputee) males, aged 29.5 ± 12.2 years. During the acquisition the subjects wore an elastic armband with 4 Ottobock sensors placed on the forearm muscles involved in the selected movements (i.e. extensor carpi ulnaris, extensor communis digitorum, flexor carpi radialis and flexor carpi ulnaris) as shown in Figure <ref>. Sensors were placed on the proximal third of the forearm, at 30° respectively on the left and on the right side of two axial lines ideally traced on the forearm. Each acquisition consisted in 10 repetitions of each hand gesture, with 3-second contractions interleaved by 3 seconds of muscular relaxation to be later labeled as rest gesture. After each acquisition, gesture segmentation was performed with a combination of manual inspection and an adaptive threshold to separate contractions from rest.
ยง.ยง Multi-source data structure
The fundamental property of the Unibo-INAIL dataset is that it contains 224 different data sources, since the data were collected for all the combinations obtained from 7 seven subjects, 8 days and 4 arm postures. Each subject-day-arm posture combination contains all hand gestures (i.e. the classes), each one repeated 10 times. The dataset can thus be regarded as a collection of 224 autonomous sub-datasets, for which the pattern to learn in order to assign signals to hand gestures is subjected to inter-subject, inter-day and inter-posture variability.
In machine learning, data having this structure are defined multi-source data. For the Unibo-INAIL dataset, each subject-day-arm posture triple is a source:
source = (u, d, p),  with  u = 1, ⋯, 7,  d = 1, ⋯, 8,  p = 1, ⋯, 4,
totalling 224 sources. Each source can be regarded as a smaller, complete dataset containing 10 repetitions of all the 6 gestures.
NOTE. In this work, the term multi-source data is used according to the meaning it has in statistical learning <cit.>, where it refers to data-subsets having different but similar distributions. The intended meaning is not data coming from different types of sensors or modalities.
The subject-index u refers to the 7 subjects involved. Each one underwent data collection over 8 days: this is the day-index d of the data sources. Arm posture p is the third index, and the four collected arm postures are:
P1. proximal: the only one with the arm not fully extended, and the most common in EMG-based hand gesture recognition literature;
P2. distal;
P3. distal with the palm oriented down: the different wrist orientation aims to introduce additional difference compared to P2 and P4;
P4. distal with the arm lifted up by 45°.
These arm postures are displayed in Figure <ref>.
To constitute the classes of the dataset, five common hand gestures used in daily life were chosen: power grip, two-fingers pinch grip, three-fingers pinch grip, pointing index and open hand. Rest position, recorded when muscles were relaxed between two subsequent movement repetitions, was also included as a class, totalling 6 classes:
G_0: rest position: including this class means addressing also the task of gesture detection, in addition to gesture recognition;
G_1: power grip;
G_2: two-fingers pinch grip;
G_3: three-fingers pinch grip;
G_4: pointing index;
G_5: open hand.
The five hand gestures G_1, ⋯, G_5 are shown in Figure <ref>, together with examples of 4-channel sEMG signal patterns of the gestures and the rest position G_0.
NOTE. It is of crucial importance, when handling the Unibo-INAIL dataset, not to mistake arm posture and hand gesture, since they are partitions acting at two different levels: the arm posture is the position of the arm in which all hand gestures (plus rest position) were executed. Each combination of subject, day and arm posture contains 10 repetitions of all the five hand gestures plus rest, and can thus be regarded as a complete sub-dataset containing all the 6 classes.
ยง.ยง User adaptation
User adaptation is the source of temporal variability which consists in a transient observed in the first days of many benchmarks datasets for sEMG-based hand gesture recognition <cit.>. The phenomenon consists in the fact that the inter-day differences in gesture execution decrease over time, due to the tendency of users to adapt to the repetitive exercise during the first days.
In literature, these inter-day differences are detected and measured by analysing how classification accuracy deteriorates when passing from intra-day validation to inter-day validation. With this method, classical machine learning algorithms have already been able to highlight user adaptation also on the Unibo-INAIL dataset <cit.>, which means that the user adaptation transient is not masked by the temporal variability caused by day-to-day sensor repositioning, which affects the signal on both the earlier and later days.
ยง PIPELINE AND CNN ARCHITECTURE
ยง.ยง Preprocessing: windowing
Data preprocessing consisted in an overlapping windowing scheme: the electromyogram signals were decomposed into segments of duration 150 ms, with a 75% overlap between consecutive ones. Since the dataset was acquired with 4 electrodes and sampling rate 500 Hz, for approximately 15 min/session, this windowing produced a sample size of the order of M ∼ 25,000 windows/session, each with dimensions 75 samples × 4 channels. Each window was given the label of the hand gesture of its central sample.
The choice of duration 150 ms and overlap 75% was made with a preliminary analysis exposed in Subsection <ref>, showing that duration 150 ms yields a higher classification accuracy than 50 ms and 100 ms.
Window duration and overlap are parameters strictly related to the engineering problem met in HMI development: real-time classification robustness, especially during transients. Longer windows allow to reduce the impact of transients, but the window length must be shorter than 300 ms <cit.> to satisfy real-time usage constraints. 150 ms is a favourable but realistic compromise.
With regard to overlap, 75% is a good compromise to produce an adequate sample size (M ∼ 25,000 windows/session) without excessive redundancy.
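A schematic implementation of this windowing (illustrative function and variable names, not the thesis code) could be:

import numpy as np

def make_windows(signal, labels, fs=500, win_ms=150, overlap=0.75):
    """Overlapping windowing of a (T, 4) sEMG recording.
    Each window is labelled with the gesture of its central sample."""
    win = int(fs * win_ms / 1000)          # 75 samples for 150 ms at 500 Hz
    step = int(win * (1 - overlap))        # 75% overlap
    X, y = [], []
    for start in range(0, len(signal) - win + 1, step):
        X.append(signal[start:start + win])        # (75, 4) window
        y.append(labels[start + win // 2])         # label of the central sample
    return np.stack(X), np.array(y)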
ยง.ยง Three-way data partition
Each of the 224 sessions (i.e. combinations of subject, day and arm posture) of the dataset was subjected to a three-way data partition.
Random 10% holdout. First, a random 10% of the signal windows were held out as test set. Reproduction of identical holdout at each execution was ensured by setting NumPy's pseudorandom seed to a fixed value. No stratification with respect to class was enforced. This test set was used in the very last step of the pipeline, to compute on new data:
* the inter-posture test accuracy of the best postural training strategy;
* the inter-day test accuracy of the best multi-day training strategy.
2-fold partition with gesture integrity. After holdout, a 2-fold partition was applied on the remaining data (again, separately for each data session), to create two sets acting in turn as training set and validation set, in a 2-fold cross-validation scheme. The two folds were created starting from a 10-fold linear split, then putting the odd intervals into Fold 1 and the even intervals into Fold 2. This partition was chosen because it is the one yielding the best classification accuracy in <cit.>, where it is called Training 50%D (D meaning decimal). The motivation of this scheme is shuffling the 10 gesture repetitions while approximately preserving the integrity of each repetition.
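The partition can be sketched as follows (an illustrative index-level reconstruction of the scheme described above, not the thesis code):

import numpy as np

def three_way_partition(n_windows, seed=0):
    """Per-session partition: random 10% holdout as test set, then a 10-interval
    linear split of the rest, odd intervals -> fold 1, even intervals -> fold 2."""
    rng = np.random.default_rng(seed)              # fixed seed: reproducible holdout
    idx = np.arange(n_windows)
    test = rng.choice(idx, size=n_windows // 10, replace=False)
    rest = np.setdiff1d(idx, test)                 # remaining windows, in temporal order
    intervals = np.array_split(rest, 10)           # 10-fold linear split
    fold1 = np.concatenate(intervals[0::2])        # 1st, 3rd, 5th, ... intervals
    fold2 = np.concatenate(intervals[1::2])        # 2nd, 4th, 6th, ... intervals
    return test, fold1, fold2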
ยง.ยง CNN architecture implemented
The architecture of the CNN implemented is an adaptation of the CNN module of the attention-based hybrid CNN-RNN architecture proposed by Hu et al. in <cit.> for sEMG-based gesture recognition on five public benchmark databases (not including the Unibo-INAIL dataset). More in detail, the hybrid architecture is a very large sequential model which stacks multiple parallel 2d-CNNs (identically trained), an LSTM, and an attention module. This model was chosen as a starting point due to its good performance (better than the state-of-the-art at publication) and to its modular structure, which is inspiring for exploring variations.
In this work, the 2d-CNN module from <cit.> was taken and converted to a 1d-CNN. Conversion from 2d to 1d was needed to act on the time windows produced in the preprocessing step (Section <ref>), having dimensions 75 samples × 4 channels. For a CNN, this format corresponds to 1d images 75 × 1 possessing 4 channels (or "colors").
The resulting CNN architecture is shown in Figure <ref> and has 9 layers, listed in Table <ref>. The first two layers are 1d-convolutional layers with 64 kernels of size 3. They are followed by two locally-connected layers with 64 kernels of size 1, employed to extract features of the sEMG signal that are temporally circumscribed (i.e. โlocalโ in time). For all these layers, batch normalization is applied to reduce internal covariate shift. The fifth, sixth and seventh layers are all fully-connected layers with batch normalization. Moreover, dropout with probability p = 0.5 is applied to the first two fully-connected layers to provide regularization. The fully connected layers are followed by a 6-way fully-connected layer (6 being the number of classes, i.e. the five hand gestures plus the rest position) and a softmax classifier. Except for the latter, all layers have ReLU activation function.
ยง.ยง Training settings
The optimal parameters regulating CNN training were found via preliminary analyses, observing the network's behaviour when trained with various settings. In particular, the optimal learning rate, number of epochs and scheduling were chosen through the preliminary analysis exposed in Subsection <ref>. The optimal settings, used for all the trainings in the pipeline, are the following:
* random initialization of weights and biases: PyTorch default;
* loss function: cross-entropy loss, implemented by the PyTorch criterion nn.CrossEntropyLoss, which stacks a log-softmax operation (nn.LogSoftmax) and the negative log-likelihood loss (nn.NLLLoss);
* optimization algorithm: Stochastic Gradient Descent (SGD) with the number of mini-batches kept fixed at B = 50 for all trainings (thus with mini-batch size varying proportionally to training set size); mini-batches are re-randomized at each epoch;
* learning rate: 0.001;
* early stopping: 20 epochs;
* scheduling: learning rate is divided by 10 after epoch 19;
* regularization: L_2 regularization, enforced through the optimizer's weight_decay parameter in PyTorch, corresponding to λ_L_2 = 0.05.
ยง TRAINING STRATEGIES
The very aim of this work is to assess the ability of the implemented CNN model to generalize to data coming from arm postures or days not seen in training. In particular, the focus is on evaluating whether training on more postures or more days benefits the CNN's generalization ability. This is done by implementing the same training strategies studied by Milosevic et al. in <cit.>, i.e. single-session, two-posture, two-day, and five-day training. The positive results of Milosevic et al. (limited to classical machine learning classifiers applied to instantaneous 4-channel signal values) play the role of baseline.
ยง.ยง Single-session training strategy
The simplest training strategy implemented is the single-session training strategy: for each session, i.e. a combination of user, day, and arm posture, 2 CNNs are trained (one for each fold), using only data taken from that session. Classification accuracy is then evaluated via intra-session validation and inter-session validation:
* in intra-session validation, the trained CNN is evaluated on the data of the fold not used for training;
* in inter-session validation, the trained CNN is evaluated on the data of a different session, to measure the CNN's ability to generalize; for the single-session training strategy, the inter-session validations computed are the inter-posture validation (all combinations), and inter-day validations on days D2 to D8 after training on day D1.
ยง.ยง Two-posture training strategy
The training strategy developed to address postural variability is two-posture training: for each subject, day, and arm posture pair, 2 CNNs are trained (one for each fold), using only data taken from those two sessions (i.e. that subject, that day and those two postures). The posture pair considered are P1+P2, P1+P3, and P1+P4, where the choice of always including P1 is motivated by the fact that P1 is the only posture with the arm not fully extended, thus the most dissimilar from the other ones. Classification accuracy is then measured via intra-postures validation and inter-posture validation:
* in intra-postures validation, the trained CNN is evaluated on the fold not used for training (the plural intra-postures is because each fold now contains data from two arm postures);
* in inter-posture validation, the model is evaluated on the data of a different posture, to measure the model's ability to generalize between postures.
ยง.ยง Multi-day training strategies
The training strategies proposed to address temporal variability is multi-day training: for each subject, day combination, and arm posture, 2 CNNs are trained (one for each fold), using only data from that subject, those days and that posture. In particular, multi-day trainings are two-day trainings, which use D1+D2, D1+D5, and D4+D5, and five-day training, which use D1 to D5. These combinations were chosen in order to repeat the setup which produced the baseline results. Classification accuracy is then measured via intra-day validation and inter-days validation:
* in intra-days validation, the trained CNN is evaluated on the fold not used for training (the plural intra-days is because each fold now contains data from two or five days);
* in inter-day validation, the model is evaluated on the data of days D6 to D8 (absent in all multi-day training combinations), to measure the model's ability to generalize to different days.
CHAPTER: IMPLEMENTATION
The pipeline and the CNN described in the previous chapter were implemented in Python scripts whose heart, dealing with CNN definition, training and evaluation, relies on the open source library PyTorch.
This chapter is structured as follows:
- Section <ref> illustrates in general the scripts developed for the different training strategies;
- Section <ref> explains more in detail the main PyTorch packages and how they were used.
ยง SCRIPTS DEVELOPED
Although the Unibo-INAIL dataset is publicly available with a series of Matlab scripts to help analyses, these were not exploited. Only one was partially used: the script devoted to training all the classical machine learning algorithms tested in <cit.> with single-session data, corresponding to the first training strategy (i.e. data from one subject, one day and one arm posture; details in Section <ref>). This script was translated into Python and the following advances were implemented:
* data partition (detailed in Section <ref>) was rewritten from scratch as a dedicated function, to make it more compact, to add holdout and to make the K-fold partition easily customizable by simply setting a parameter;
* windowing preprocessing (see Section <ref>) was added as a dedicated function (called inside the previous one, immediately before the data partition), since the previous implementations only addressed the classification of 4-channel instantaneous signal values;
* in the core block, devoted to instantiation, training and validation, the four classical algorithms were replaced by the CNN, implemented with the library PyTorch.
The structure of loops cycling on subjects, days and arm postures was approximately preserved.
This revised script was in turn used as a template to implement the two-posture, two-day and five-day training strategies (see Section <ref>), each one in a separate script, totalling four scripts:
* one for single-session trainings;
* one for two-posture trainings;
* one for two-day trainings;
* one for five-day trainings.
Execution times on a single GPU were approximately 8 for and , and approximately 5 for and .
ยง USAGE OF PYTORCH PLATFORM
The CNN instantiation, training and validation were implemented using the Python library PyTorch, an open source deep learning platform for Python, based on Torch, that is widely used in deep learning applications. This work has taken advantage of both of PyTorch's main high-level features: tensor computation on variables of class torch.Tensor (partially analogous to NumPy arrays, but more powerful), and the possibility of GPU computation.
In this master thesis, the CNNs were implemented in compliance with the standard structure of PyTorch scripts for CNNs, and the main PyTorch packages exploited are torch.nn, for neural network instantiation and usage, torch.autograd, for automatic differentiation, torch.optim, for optimization, and torch.cuda, for GPU computation. The following subsections explain these packages and the way they were used.
ยง.ยง The torch.nn package
torch.nn is the package that helps defining the complex neural networks for which operations on Tensors with raw autograd alone are too low-level. In particular, nn.Module is the base class for all neural network modules, and user-defined models must subclass this class. If desired, modules (submodules) can be nested inside other modules, creating a tree structure.
While reporting the code written to define the CNN architecture would be excessive, it is worth showing how instantiating the defined architecture is straightforward. It is done via two commands, as in the sketch below:
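In this sketch the class name CNN1d and its internals are placeholders standing in for the 9-layer architecture described in the previous chapter (only a toy body is defined to keep the example runnable); the last two lines are the two commands referred to in the text.

import torch.nn as nn

class CNN1d(nn.Module):
    """Placeholder for the user-defined nn.Module subclass implementing the 1d-CNN."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv1d(4, 64, 3), nn.ReLU(),
                                  nn.Flatten(), nn.Linear(64 * 73, 6))

    def forward(self, x):
        return self.body(x)

net = CNN1d()                      # command 1: instantiate the CNN model
criterion = nn.CrossEntropyLoss()  # command 2: instantiate the loss ("criterion")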
The first command instantiates the CNN model. The second command instantiates the loss function, termed criterion in PyTorch terminology. The nn.CrossEntropyLoss class implements a cross-entropy loss function which stacks a log-softmax operation (nn.LogSoftmax) and the negative log-likelihood loss (nn.NLLLoss).
ยง.ยง The torch.autograd package and the Tensor class
torch.autograd is the package that implements automatic differentiation: gradients with respect to the parameters are automatically calculated directly at the forward pass (by recording the executed operations into computational graphs and re-executing them backward), thus reducing both development and execution time. The classes and functions provided implement automatic differentiation of arbitrary scalar-valued functions, with the only requirement that the variables with respect to which gradients are computed be Tensor objects.
Tensor is the base class of the package. Tensors are multi-dimensional arrays (similar to NumPy arrays), which support automatic differentiation. Every variable possesses a requires_grad flag, which allows to enable or disable gradient computation with respect to that variable. So, typically, data are Tensors with requires_grad=False, and model parameters are Tensors with requires_grad=True. Disabling gradients is also useful for freezing parts of a model, e.g. when fine-tuning other model parts.
The operations performed on tensors having requires_grad set to True are tracked. After finishing computations, gradients can be computed automatically by calling backward() on the result, i.e. the variable whose value was computed with the function to differentiate. The gradient with respect to each Tensor is then accumulated in its grad attribute[Formally, differentiation computes gradients of functions with respect to arguments (here, parameters). Sometimes, in machine learning and deep learning, one speaks, with an abuse of language, of derivatives of the parameters. The justification is that differentiation almost always acts on the training objective function. However, in PyTorch, speaking of gradients of the tensors is correct in the sense that gradient values are stored in each tensor's grad attribute.].
Formally, autograd is a reverse automatic differentiation system. As manipulations on data are executed, autograd produces a graph recording the operations that originate the result. This yields a directed acyclic graph having input tensors as leaves and output tensors as roots. Gradients are computed automatically by the chain rule, by tracing the computational graph from roots to leaves (thus executing back-propagation).
Inside autograd, computational graphs are represented as graphs of Function objects. During the forward pass, autograd simultaneously creates the graph representing the function that computes the gradient. Then, in the backward pass the graph is evaluated to compute the gradients. A valuable property is that the graph is re-built from scratch at every iteration, which allows to use arbitrary Python control flow statements, that can originate different graph structures at every iteration. This is the define-by-run framework, in which there is no need to encode all possible paths before launching the training, also described with the sentence "What you run is what you differentiate".
The autograd mechanics were exploited to compute the gradient of the training loss, computed as a cross-entropy. After instantiating the model and the cross-entropy criterion (as explained in the previous subsection), the values of the loss and of the loss gradient were computed at each iteration as sketched below:
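This is a plausible reconstruction of the per-iteration computation, reusing the net and criterion objects of the previous sketch and assuming illustrative mini-batch tensors inputs and targets:

import torch

inputs = torch.randn(32, 4, 75)          # a mini-batch of 32 windows (stand-in data)
targets = torch.randint(0, 6, (32,))     # integer class labels in {0, ..., 5}

outputs = net(inputs)                    # forward pass: class scores
loss = criterion(outputs, targets)       # scalar cross-entropy loss

net.zero_grad()                          # clear previously accumulated gradients
loss.backward()                          # autograd fills .grad for every parameter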
ยง.ยง The torch.optim package
The torch.optim package implements the most common iterative optimization algorithms. It is used by instantiating and handling an optimizer object (or more than one), which keeps track of the current state of the optimization and, when its step() method is called, updates the parameters based on the gradients previously computed.
The optimization algorithm used in this work is Stochastic Gradient Descent (SGD) with mini-batches (re-randomized at each epoch), instantiated through the torch.optim.SGD constructor. This constructor does not require declaring the mini-batch size; its arguments are the parameters of the CNN model, the learning rate lr, and weight_decay, which corresponds to 2·λ_L_2 and determines the amount of L_2 regularization. A consolidated sketch of the optimizer and scheduler usage is given at the end of this subsection.
For each mini-batch, optimization steps are executed by calling the optimizer's step() method, immediately after computing the gradients on the mini-batch via loss.backward(). This performs one update of the parameters, based on the gradients.
To implement scheduling, torch.optim.lr_scheduler.StepLR was used, which provides non-dynamic scheduling, i.e. learning rate adjustment based solely on the epoch number, without adaptive validation measurements. The scheduling chosen after preliminary analysis (results in Subsection <ref>), i.e. division of the learning rate by 10 at epoch 19, is expressed through the step_size and gamma arguments, which are the period and the multiplicative factor of the learning rate decay, respectively. Then, scheduling is applied simply by calling the scheduler's step() method at each epoch.
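Putting the pieces of this subsection together, a hedged reconstruction of the optimizer and scheduler usage is sketched below; net and criterion come from the previous sketches, the DataLoader with toy data stands in for the real mini-batch handling, and weight_decay = 0.1 follows from the 2·λ_L_2 correspondence with λ_L_2 = 0.05 stated above.

import torch
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in data: 1000 windows of shape (4, 75) with labels in {0, ..., 5}.
dataset = TensorDataset(torch.randn(1000, 4, 75), torch.randint(0, 6, (1000,)))
loader = DataLoader(dataset, batch_size=len(dataset) // 50, shuffle=True)

optimizer = optim.SGD(net.parameters(), lr=0.001, weight_decay=0.1)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=19, gamma=0.1)

for epoch in range(20):                      # early stopping after 20 epochs
    for inputs, targets in loader:           # ~50 mini-batches, reshuffled each epoch
        optimizer.zero_grad()
        loss = criterion(net(inputs), targets)
        loss.backward()
        optimizer.step()                     # one parameter update per mini-batch
    scheduler.step()                         # learning rate divided by 10 after epoch 19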
ยง.ยง The torch.cuda package
In addition to supporting automatic differentiation, a further advantage of Tensors over NumPy arrays is that they can be cast to a GPU to improve computational performance.
This can be done using the package torch.cuda. It keeps track of the currently selected GPU, and all allocated CUDA tensors are created by default on that device. A context manager allows to change the selected device. However, once a Tensor is allocated, operations can be performed on it irrespective of the selected device, and the results are automatically placed on the same device as the Tensor. A Tensor's device can be accessed via its device property.
The GPU computation is enabled by constructing a torch.device at the beginning of the script, set to the GPU if a CUDA device is available and to the CPU otherwise. A torch.device is an object representing the device on which a Tensor is or will be allocated, and is constructed so as to contain the device type (cpu or cuda) and an optional device ordinal for the device type.
The changes needed to cast model and data (and thus computations) to the GPU are few and straightforward, and are made via the to(device) method, applied to the CNN at instantiation and to the mini-batch tensors before the evaluation and loss computation, as in the sketch below.
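A minimal sketch of these changes (again reusing the placeholder names of the previous sketches) is:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = CNN1d().to(device)                             # cast the model to the GPU, if any

for inputs, targets in loader:
    inputs, targets = inputs.to(device), targets.to(device)   # cast the mini-batch
    loss = criterion(net(inputs), targets)                     # computation runs on the GPU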
CHAPTER: RESULTS
ยง ACCURACY DISTRIBUTIONS AND REPORTED ACCURACIES
Due to the highly multi-source nature of the Unibo-INAIL dataset, the CNNs trained with each training strategy (single-session, two-postures, two- and five-day) return a distribution of classification accuracies, over the 7 subjects, 8 days and 4 arm postures. This allows to investigate the variability of performance over the dataset. On this basis, in this Chapter all accuracies are reported with:
* average accuracy μ (also called mean for simplicity), computed over the sessions of interest for each case;
* accuracy standard deviation σ, which quantifies the performance variability over the data sources;
* standard error on the mean accuracy SE = σ/√N, used to estimate the uncertainty on the average accuracy.
In particular, the SE does not describe the broadness of the accuracy distribution, but is an estimate of the fluctuations affecting the average accuracy. The 224 sessions (times 2 folds) of the Unibo-INAIL dataset allow to divide by a large √N (varying according to the training strategy studied). Thus the 224 sessions are not only a computational burden, but also allow to report average accuracies with a SE of the order of ∼0.1%.
In this chapter, the reported accuracies are compared with the baseline achieved by the best classical machine learning classifier reported in <cit.>. However, for the baseline accuracies the values of σ or SE are not reported, not allowing for a complete statistical comparison.
ยง PRELIMINARY ANALYSES
ยง.ยง Optimal length of time windows
The first preliminary analysis was made to optimize the window length to be used in preprocessing, consisting in an overlapping windowing scheme as detailed in Subsection <ref>. Since window length must be shorter than 300 ms to satisfy real-time usage constraints, the values explored were 50 ms, 100 ms and 150 ms.
In this analysis, the performance of interest is the accuracy obtained on 2-fold cross-validation on single-session data (i.e. single user, single day, single arm posture), after training on data from the same session (i.e. using the two folds alternately for training and intra-session validation). The results are reported in Table <ref>. With a (94.4 ± 0.2)% accuracy, duration 150 ms proved to be the best one and was adopted for the preprocessing.
Since the dataset was acquired with 4 electrodes and sampling rate 500 Hz, for approximately 15 min/session, this windowing produced a sample size of the order of M ∼ 25,000 windows/session, each with dimensions 75 samples × 4 channels. Moreover, with regard to overlap, 75% was chosen as a good compromise to produce an adequate sample size without excessive redundancy.
ยง.ยง Learning curves
The second preliminary analysis was performed to optimize the learning rate and the scheduling thereof. This exploration was carried on with the same single-session scheme used to optimize the window length described in Subsection <ref>.
The optimal learning rate was searched by looking at single intra-session validation learning curves, like the one shown in Figure <ref> (obtained for subject 1, day 1, arm posture 1, training on fold 1 and validation on fold 2). Using SGD with the training set split into 50 mini-batches (re-randomized at each epoch), the optimal learning rate was found to be 0.001, which yielded the best compromise between a reasonably fast convergence and a sufficiently low final loss. This optimal value was adopted for all the trainings.
Average learning curves were used to diagnose overfitting, as shown in Figure <ref>: the validation loss reaches a minimum between epochs 15 and 20, then increases again, indicating overfitting. A simple scheduling tactic, consisting in dividing the learning rate by 10 at epoch 20, allowed improving the minimum of the validation loss, as shown in Figure <ref>. Since this scheduling does not remove the overfitting trend, but only makes it slower, the final choice was to exploit the sudden minimum: the final scheduling chosen consists in dividing the learning rate by 10 at epoch 19, then applying early stopping at epoch 20. This scheduling was used in all the trainings.
NOTE. After this preliminary analysis, validation cross-entropy losses were abandoned in favour of classification accuracies. This was motivated by the ease of interpretation and informal comparison with baseline accuracies.
ยง SINGLE-SESSION TRAINING STRATEGY
The basic training strategy explored is the single-session training strategy: for each session, i.e. a combination of user, day, and arm posture, 2 CNNs are trained (one for each fold), using only data from that session. Performance is then measured via intra-session validation and inter-session validation:
* in intra-session validation, the trained CNN is evaluated on the fold not used for training;
* in inter-session validation, the model is evaluated on the data of a different session, to assess the model's ability to generalize; for the single-session training strategy, the inter-session validations computed are the inter-posture validation (all combinations), and inter-day validations on days D2 to D8 after training on day D1.
With regard to intra-session validation, the overall accuracy is (μ ± SE) = (94.5 ± 0.2)%, with σ = 3.5%. This performance is very similar to the baseline value of 94% achieved with a RBF-SVM, although not consistent with it within the SE; a complete statistical comparison is not possible because the baseline σ is not available.
Looking at accuracy distributions computed by subject, by day and by arm posture, some interesting findings emerge. Accuracy distributions by subject are reported in Figure <ref> and in Table <ref>; accuracy distributions by day are reported in Figure <ref> and in Table <ref>; and accuracy distributions by arm posture are reported in Figure <ref> and in Table <ref>.
The accuracy distributions (visualized as μ ± σ) of the 8 days and of the 4 arm postures always overlap within the standard deviations. The situation is different for the accuracy distributions of the 7 subjects: the means present larger variations, and Subject 3 has a mean so much lower than the others that its accuracy distribution is not consistent, within the standard deviation, with the distributions of Subjects 4, 5, and 7. Subject 3's accuracy distribution is also the one presenting the highest standard deviation. Operatively, this indicates that, on average, within Subject 3's sessions hand gesture recognition is a harder task. It is worth remarking that the intra-session setup does not allow attributing the lower accuracy to larger inter-day or inter-postural variability.
With regard to inter-session validation, Figure <ref> and Table <ref> show the inter-posture validation accuracies and compare them with the intra-posture case; all performances are reported by training posture. The overall inter-posture validation accuracy is 80.6%, corresponding to a 13.9% accuracy drop compared to the intra-posture (i.e. intra-session) scenario. This accuracy drop quantifies the amount of overfitting the single-session training produces with respect to the task of generalizing to different postures. The overall inter-posture accuracy is similar to the baseline value of 79%, but, again, a statistical comparison is not possible because the baseline ฯ or SE are not available.
ยง TWO-POSTURE TRAINING STRATEGY
The training strategy implemented to address postural variability is two-posture training: for each subject, day, and arm posture pair, 2 CNNs are trained (one for each fold), using only data from those two sessions (i.e. that subject, that day and those two postures). The posture pairs considered are P1+P2, P1+P3, and P1+P4, where the choice of always including P1 is motivated by the fact that P1 is the only posture with the arm not fully extended, thus the most dissimilar from the other ones. Performance is then measured via intra-postures validation and inter-posture validation:
* in intra-postures validation, the trained CNN is evaluated on the fold not used for training;
* in inter-posture validation, the model is evaluated on the data of a different posture, to measure the model's ability to generalize between postures.
All the results, divided by training posture pair, are shown in Figure <ref> and in Table <ref>.
With regard to intra-postures validation, the overall accuracy is (μ ± SE) = (94.3 ± 0.2)%, with σ = 3.5%, which differs only by 0.2% from the intra-posture validation accuracy obtained with single-session (thus single-posture) training (which is (μ ± SE) = (94.5 ± 0.2)%). This difference is comparable to the SEs of the two compared averages, thus not statistically significant. Moreover, the intra-postures validation accuracy yielded by two-posture training is higher than the corresponding baseline value of 90% (standard deviation or SE not available). This indicates that in intra-posture validation the 1d-CNN performs better than the RBF-SVM.
The interpretation of these results is that the 1d-CNN, thanks to its higher capacity compared to SVM, is able to learn the hand gesture patterns coming from two arm postures with the same accuracy as it learns the patterns from a single posture. However, this success is marginal since the real aim is to improve inter-posture accuracy.
With regard to inter-posture validation, the overall inter-posture validation accuracy is 82.0%, corresponding to a 12.3% accuracy drop compared to the intra-postures case. The corresponding baseline value is 83%, which is higher but does not allow for a statistical comparison since it is available without standard deviation or SE. This result is better by 1.4% compared to the inter-posture validation accuracies achieved with single-session training. The accuracy drop quantifies the amount of overfitting produced by training on P1+Pi, with respect to the task of generalizing to non-training postures. This amount of overfitting is slightly smaller compared to the one given by the single-session (thus single-posture) training, which was 13.9%.
Thus, two-posture training improves the inter-posture generalization of the CNN. The fact that the intra-posture performance is not impacted means that an amount of overfitting is removed without adding significant bias. This can be attributed to the 1d-CNN's capacity, which enables it to learn more diverse distributions (i.e. patterns from two postures instead of one) while preserving classification accuracy.
ยง MULTI-DAY TRAINING STRATEGY
The training strategy proposed to address temporal variability is multi-day training: for each subject, day combination, and arm posture, 2 CNNs are trained (one for each fold), using only data from that subject, those days and that posture. In particular, multi-day trainings are two-day trainings, which use D1+D2, D1+D5, and D4+D5, and five-day training, which use D1 to D5 (these combinations were chosen in order to repeat the setup which produced the baseline results). Performance is then assessed via intra-days validation and inter-day validation:
* in intra-days validation, the trained CNN is evaluated on the fold not used for training (the plural intra-days is because each fold now contains data from two or five days);
* in inter-day validation, the model is evaluated on the data of days D6 to D8 (absent in all multi-day training combinations), to measure the model's ability to generalize to different days.
All the results, divided by training posture pair, are shown in Figures <ref> and <ref> and in Table <ref>, which also include a comparison with single-day (thus single-session) training on D1 alone.
With regard to inter-day validation of the training strategy based on D1 alone, the (66.9 ± 1.1)% accuracy means a 27.9% drop in accuracy compared to the intra-day validation, which yields (94.8 ± 0.3)% accuracy. This proves that inter-day variability is a larger effect than inter-posture variability, whose impact was quantified as a 13.9% accuracy drop (inter-posture validation of the single-session training strategy).
Improvement compared to single-day training is evident for all the multi-day training strategies implemented. The D1-to-D5 training strategy yields the best inter-day validation accuracy, (μ ± SE) = (76.4 ± 1.2)% with σ = 9.0%. The five-day training strategy proves to be the best one also in the baseline results based on classical machine learning algorithms. The corresponding baseline value is 77%, which is consistent with the CNN result within the SE, allowing to say that the performance is statistically equivalent.
The second best strategy is the D4+D5 strategy, achieving (μ ± SE) = (74.4 ± 1.2)% with σ = 9.2%. The statistical difference between the two was checked via a paired samples Wilcoxon test, which is the non-parametric equivalent of the paired samples t-test, chosen for its robustness with respect to sample distributions. The test yielded p_Wilcoxon = 4.0·10^-4, which means that the five-day training strategy is statistically significantly better (in validation) than the second-best one.
Moreover, a trend can be noted which can be identified as user adaptation (explained in Subsection <ref>): the training strategies based on later days yield higher classification accuracy. This can be interpreted as the fact that inter-day differences in gesture execution decrease over time, as a consequence of the tendency of users to adapt to the repetitive exercise. This indicates again that training strategies prioritizing the recent data yield better performances.
ยง TRAINING STRATEGIES SELECTION AND TEST
On the basis of the inter-session validation results, two-posture training was selected as the best strategy for postural generalization, and five-day training was selected as the best strategy for temporal generalization. This indicates that training strategies should prioritize data from more than one posture and from recent days. This outcome is the same as in the baseline results obtained with classical machine learning algorithms. After retraining, i.e. repeating the CNNs training using both folds, these two strategies were tested on the 10% of data previously held out as test set (holdout details in Subsection <ref>).
The two-posture training strategy yielded an inter-posture test accuracy of (μ ± SE) = (81.2 ± 0.4)% with σ = 7.3%. The five-day training strategy yielded an inter-day (on D6 to D8) test accuracy of (μ ± SE) = (75.9 ± 0.7)% with σ = 8.6%.
Test baseline values are not available, since the test step is not present in the pipeline which produced the baseline results. As reference, it is possible to consider the corresponding validation accuracies, which are 83% for the inter-posture validation of the two-posture training and 77% for the inter-day validation of the five-day training (both provided without ฯ or SE). These results are similar to the test accuracies, but the limit of this comparison is that the baseline values, being the validation accuracies of the winner model (i.e. the RBF-SVM), are upward biased. However, the available values are sufficient to conclude that the final CNN performance is comparable with the baseline.
CHAPTER: CONCLUSIONS AND FUTURE WORK
This master's thesis presents the first application of deep learning to the Unibo-INAIL dataset, the first public sEMG dataset exploring the variability between subjects, sessions and arm postures, collected over 8 sessions for each of 7 able-bodied subjects executing 6 hand gestures in 4 arm postures. With the open-source deep learning platform PyTorch, it was possible to implement and test a 1d-CNN architecture trained with recent strategies based on training set composition.
The single-session training strategy achieves 94.5% intra-session validation accuracy, but deteriorates to 80.6% in inter-posture validation and to 66.9% (for day 1) in inter-day validation. This proves that inter-day variability has a larger impact than inter-posture variability. A possible reason is that, on each day, the data of the 4 arm postures were collected without repositioning the sensors.
Multi-posture and multi-day training strategies yield higher inter-session validation accuracies. Two-posture training proves to be the best postural training strategy, indicating that training strategies should prioritize data from more than one posture, and yields 81.2% inter-posture test accuracy. Five-day training proves to be the best multi-day training strategy, and yields 75.9% inter-day test accuracy. All the results are close to the baseline, provided by the accuracy of an RBF-SVM.
Moreover, the results of the multi-day trainings make it possible to highlight user adaptation, the phenomenon by which inter-day differences in gesture execution decrease over time, due to the tendency of users to adapt to the repetitive exercise during the first days. The detection of user adaptation indicates that training strategies should also prioritize recent data.
Though not better than the baseline, the achieved classification accuracies rightfully place the 1d-CNN among the candidates for further research.
Future work will continue this line of research by investigating whether the fact that the deep 1d-CNN does not outperform the baseline is preprocessing-dependent or is due to an accuracy limit inherent to the Unibo-INAIL dataset. The question will be addressed using deep learning models relying on different data pre-processing, the first candidate being time-frequency representations analysed with 2d-CNNs.
CHAPTER: ACKNOWLEDGEMENTS
At the end of this work, I wish to express some acknowledgements.
I thank Prof. Daniel Remondini for the fruitful supervision during the thesis work, and for the encouragement he gave me. I thank Dr. Simone Benatti, Dr. Francesco Conti and Dr. Manuele Rusci for their constant guidance throughout the research work, and for the valuable support they gave me. I also thank them for the detailed and timely comments provided during the writing of this thesis. Finally, I thank Prof. Luca Benini for allowing me to carry out my master's thesis work at the Energy-Efficient Embedded Systems (EEES, Unibo) laboratory that he coordinates.
|
http://arxiv.org/abs/2306.04536v1
|
20230607154229
|
JADES: The production and escape of ionizing photons from faint Lyman-alpha emitters in the epoch of reionization
|
[
"Aayush Saxena",
"Andrew J. Bunker",
"Gareth C. Jones",
"Daniel P. Stark",
"Alex J. Cameron",
"Joris Witstok",
"Santiago Arribas",
"William M. Baker",
"Stefi Baum",
"Rachana Bhatawdekar",
"Rebecca Bowler",
"Kristan Boyett",
"Stefano Carniani",
"Stephane Charlot",
"Jacopo Chevallard",
"Mirko Curti",
"Emma Curtis-Lake",
"Daniel J. Eisenstein",
"Ryan Endsley",
"Kevin Hainline",
"Jakob M. Helton",
"Benjamin D. Johnson",
"Nimisha Kumari",
"Tobias J. Looser",
"Roberto Maiolino",
"Marcia Rieke",
"Hans-Walter Rix",
"Brant E. Robertson",
"Lester Sandles",
"Charlotte Simmonds",
"Renske Smit",
"Sandro Tacchella",
"Christina C. Williams",
"Christopher N. A. Willmer",
"Chris Willott"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Faint Lyα emitters at z≳6
Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
Steward Observatory, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721, USA
Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK
Centro de Astrobiología (CAB), CSIC-INTA, Cra. de Ajalvir Km. 4, 28850 Torrejón de Ardoz, Madrid, Spain
Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB R3T 2N2, Canada
European Space Agency (ESA), European Space Astronomy Centre (ESAC), Camino Bajo del Castillo s/n, 28692 Villanueva de la Cañada, Madrid, Spain
European Space Agency, ESA/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, NL
Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester, M13 9PL, UK
School of Physics, University of Melbourne, Parkville 3010, VIC, Australia
ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia
Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy
Sorbonne Université, CNRS, UMR 7095, Institut d'Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France
European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei Muenchen, Germany
Centre for Astrophysics Research, Department of Physics, Astronomy and Mathematics, University of Hertfordshire, Hatfield AL10 9AB, UK
Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge MA 02138 USA
Department of Astronomy, University of Texas, Austin, TX 78712, USA
AURA for European Space Agency, Space Telescope Science Institute, 3700 San Martin Drive. Baltimore, MD, 21210
Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg, Germany
Department of Astronomy and Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA
Astrophysics Research Institute, Liverpool John Moores University, 146 Brownlow Hill, Liverpool L3 5RF, UK
NSF's National Optical-Infrared Astronomy Research Laboratory, 950 North Cherry Avenue, Tucson, AZ 85719, USA
NRC Herzberg, 5071 West Saanich Rd, Victoria, BC V9E 2E7, Canada
A. Saxena et al.
We present the properties of 16 faint Lyman-α emitting galaxies (LAEs) at z>5.8 from the JWST Advanced Deep Extragalactic Survey (JADES) spectroscopic data in the Hubble Ultra Deep Field/GOODS-S. These LAEs span a redshift range z ≈ 5.8-8.0 and a UV magnitude range M_UV ≈ -17 to -20.6, with Lyα equivalent width (EW) in the range ≈ 25-350 Å. The detection of other rest-optical emission lines in the spectra of these LAEs enables the determination of accurate systemic redshifts and Lyα velocity offsets, as well as the physical and chemical composition of their stars and interstellar media. These faint LAEs are consistent with metal-poor systems with high ionization parameters, similar to the general galaxy population at z>6. We measure an average ionizing photon production efficiency, log(ξ_ion/erg^-1 Hz) ≈ 25.56 across our LAEs, which does not evolve strongly with redshift. We report an anti-correlation between Lyα escape fraction and velocity offset from systemic, consistent with model expectations. We further find that the strength and velocity offset of Lyα are not correlated with galaxy spectroscopic properties nor with ξ_ion. We find a decrease in Lyα escape fractions with redshift, indicative of decreasing sizes of ionized bubbles around LAEs at high redshifts. We use a range of galaxy properties to predict Lyman continuum escape fractions for our LAEs, finding that the ionizing photon output into the intergalactic medium from our LAEs remains roughly constant across the observed UV magnitude and Lyα equivalent width, showing a mild increase with redshift. We derive correlations between the ionizing photon output from LAEs and UV magnitude, Lyα strengths and redshift, which can be used to build realistic, observationally-driven reionization models.
JADES: The production and escape of ionizing photons from faint Lyman-alpha emitters in the epoch of reionization
Aayush Saxena1,2E-mail: [email protected]
Andrew J. Bunker1
Gareth C. Jones1
Daniel P. Stark3
Alex J. Cameron1
Joris Witstok4,5
Santiago Arribas6
William M. Baker4,5
Stefi Baum7
Rachana Bhatawdekar8,9
Rebecca Bowler10
Kristan Boyett11,12
Stefano Carniani13
Stephane Charlot14
Jacopo Chevallard1
Mirko Curti15,4,5
Emma Curtis-Lake16
Daniel J. Eisenstein17
Ryan Endsley18
Kevin Hainline3
Jakob M. Helton3
Benjamin D. Johnson17
Nimisha Kumari19
Tobias J. Looser4,5
Roberto Maiolino4,5,2
Marcia Rieke3
Hans-Walter Rix20
Brant E. Robertson21
Lester Sandles4,5
Charlotte Simmonds4,5
Renske Smit22
Sandro Tacchella4,5
Christina C. Williams23
Christopher N. A. Willmer3
Chris Willott24
ยง INTRODUCTION
Cosmic reionization is a crucial phase transition in the Universe's history, the understanding of which is an important challenge in observational astronomy <cit.>. Ionizing UV photons emerging from the first structures to form in the Universe began interacting with the neutral intergalactic medium (IGM), gradually ionizing it to near completion by z∼6 <cit.>, although certain studies have favoured a later end to reionization <cit.>. To quantify the contribution towards the cosmic reionization budget from ionizing photon sources in the early Universe, a good understanding is needed of the space density of sources, the production efficiency of hydrogen ionizing Lyman continuum (LyC; λ_0 < 912 Å) photons, and crucially, the fraction of LyC photons that manage to escape into the IGM <cit.>.
JWST spectroscopy has offered ground-breaking insights into the state of the interstellar medium (ISM), chemical enrichment of the gas and stars as well as ionizing photon production in galaxies at z>6, pushing towards fainter UV magnitudes than were previously possible from the ground <cit.>. However, accurately measuring the escape fraction of LyC photons (f_esc(LyC)) becomes hard already at z>4, mainly due to the increasing neutrality of the IGM <cit.>, which efficiently absorbs LyC photons along the line of sight. A further complication is introduced by the fact that no clear dependence between f_esc(LyC) and galaxy properties has been observationally established <cit.>.
Therefore, in order to constrain the all-important escape fraction of ionizing photons from reionization era galaxies, it is of utmost importance to find reliable indirect indicators of f_esc(LyC). The presence of young, actively forming stars as well as relatively gas and dust-free environments is thought to enable significant f_esc(LyC) from galaxies <cit.>, and spectroscopic and/or photometric indicators probing such conditions can be explored as indirect indicators of LyC photon escape <cit.>. Important insights can also be gained from high-resolution simulations of reionization-era galaxies, where a good handle on the escaping LyC radiation can be correlated with prevalent galaxy conditions <cit.> that can then be converted into observables <cit.> and used to predict f_esc(LyC) from galaxies.
Uniquely, galaxies at z≳6 that show strong Lyα emission in their spectra (typically EW(Lyα) > 20 Å; e.g. <cit.>), also known as Lyα emitters (or LAEs), can be excellent probes of how reionization unfolds over redshift. The presence of strong Lyα emission at z≳6 often traces the existence of large ionized bubbles in an otherwise neutral IGM (<cit.>; c.f. <cit.>), offering direct observational insights into reionized regions of the early Universe. Further, as intrinsic Lyα luminosities are expected to increase with star-formation rates, the fraction of galaxies that appear to be strong LAEs can be an important diagnostic of the ionizing photon production capabilities of reionization era galaxies <cit.> as well as the evolving state of the IGM neutral fraction <cit.>.
Considerable information about the neutral gas and dust content within a galaxy can be gained from the observed strength and emission line profile of Lyα <cit.>. The separation between the blue and red peaks in the Lyα emission, as well as the offset from the systemic redshift in the absence of a double-peaked profile, can be used to infer neutral gas densities <cit.> and dust <cit.>, although it has been shown that the neutral gas distribution may play the more dominant role in controlling Lyα escape <cit.>. At z>6, both the number density of LAEs <cit.> and the shape of the Lyα line originating from star-forming galaxies residing within ionized bubbles can further be used to estimate the size of those bubbles (e.g. <cit.>, Witstok et al. submitted).
With a plethora of models available to link the observed properties to both galaxy properties and the state of the IGM at z>6, it is imperative to expand samples of observed LAEs in the reionization era, pushing to fainter magnitudes. Probing emission from UV-faint galaxies has the added advantage of providing much tighter constraints on both bubble sizes as well as the ionized fraction of the IGM <cit.>. Importantly, emission from fainter galaxies can provide additional sightlines from which the impact of galaxy associations on the production efficiency of ionizing photons <cit.> and their transmission through the IGM <cit.> can be studied in detail.
Perhaps most importantly, detailed studies of faint LAEs can inform our understanding of the key drivers of cosmic reionization, particularly testing whether compact star-forming galaxies are indeed contributing the bulk of ionizing photons towards the reionization budget <cit.>, which are often expected to produce large intrinsic luminosities <cit.>. LAEs that have their emission peaking close to systemic redshifts are also expected to have high LyC escape fractions <cit.>. With signatures of hard radiation fields <cit.> and elevated ionizing photon production efficiencies <cit.> measured from LAEs across redshifts, emitting galaxies in the reionization era are exciting laboratories to both test and constrain reionization models. With access to stellar and ISM properties of LAEs at high redshifts thanks to JWST, it is now finally possible to study the potential role of LAEs in driving cosmic reionization.
In an attempt to quantify the production and escape of both and LyC photons from LAEs in the reionization era, in this study we dramatically increase the number of faint LAEs detected at zโณ6 using exquisitely deep spectra from the JWST Advanced Deep Extragalactic Survey (JADES; ). The main aim of this work is to explore the physical properties of faint LAEs in the reionization era, while also investigating the physical mechanisms within the galaxy that control the visibility of emission. We further assess the impact of an increasingly neutral IGM on the emergent emission at the highest redshifts. Finally, using all available spectroscopic and photometric information about our faint LAEs, we estimate their ionizing photon contribution towards the global reionization budget. In companion papers, we also measure the LAE fraction <cit.> as well as the size of ionized bubbles around our LAEs and their clustering (Witstok et al. submitted).
The layout of this paper is as follows: Section <ref> describes the JWST data used in this study as well as the measurement of key spectroscopic quantities that are used in this study. Section <ref> presents the chemical enrichment and ionization state inferred from the spectra of our LAEs compared with other reionization era galaxies in the literature. Section <ref> explores the mechanisms within galaxies that control the escape of photons along the line of sight. Section <ref> discusses the implications for the reionization of the Universe from these new LAE observations and presents quantities that would help build realistic reionization models. The main conclusions of this study are presented in Section <ref>.
Throughout this paper, we use the <cit.> cosmology. Magnitudes are in the AB system <cit.> and all distances used are proper distances, unless otherwise stated.
ยง DATA AND MEASUREMENTS
ยง.ยง NIRSpec data
The JWST observations used in this study are part of JADES, which is a collaboration between the Near-Infrared Camera (NIRCam; ) and Near-Infrared Spectrograph (NIRSpec; ) Instrument Science teams with an aim of using over 750 hours of guaranteed time observations (GTO) to study the evolution of galaxies in the Great Observatories Origins Deep Survey (GOODS)-South and GOODS-North fields <cit.>. We describe the NIRSpec and NIRCam observations and data reduction steps below.
Spectroscopic data presented in this work was obtained using the Micro-Shutter Assembly (MSA; ) on the NIRSpec instrument on board JWST. Two `Tiers' of JADES data was utilized in this study: the Deep Tier NIRSpec observations are part of the GTO program ID: 1210 (PI: Lรผtzgendorf) and in GOODS-S centred near the Hubble Ultra Deep Field (HUDF), obtained between 22 October and 25 October 2022 over 3 visits, and the Medium Tier observations are part of GTO program 1180 (PI: Eisenstein) obtained over a larger area in GOODS-S <cit.>).
For Deep observations, the PRISM/CLEAR setup, which gives wavelength coverage in the range 0.6-5.3 ฮผm with a spectral resolution of Rโผ100 <cit.>, and G140M/F070LP, G235M/F170LP, G395M/F290LP, and G395H/F290LP filter/grating setups were used, whereas for Medium observations all of the above but the G395H/F290LP filter/grating setup were used. For the Deep Tier, three sub-pointings were planned in the same field (although each sub-pointing had minor pointing differences), with each visit having a total of 33.613 ks of exposure in PRISM/CLEAR and 8.4 ks of exposure in each of the gratings. The Medium Tier observations were carried out in parallel to NIRCam observations, and therefore, consisted of several single pointings covering a larger sky area, with 3.8 ks of exposure time in PRISM/CLEAR and 3.1 ks of exposure time in the gratings per pointing. We note that as the sources targeted were generally high-priority targets owing to their possible high redshift nature, it was possible for one target to be covered over multiple Medium tier pointings. We refer the readers to <cit.> and <cit.> for further details about the observational setup, strategy and challenges.
The targets for spectroscopy were selected from existing deep HST-based catalogues as well as JADES NIRCam catalogues <cit.>. Candidate high redshift galaxies with photometric redshifts z>5.7, identified via the classic photometric drop-out' technique <cit.>, whereby the Lyman break in the spectrum of a galaxy is captured in adjacent broad-band filters, were assigned higher priorities. Full details of the target selection and priority classes can be found in the accompanying paper by Bunker et al. (2023c).
The data reduction was carried out using pipelines developed by the ESA NIRSpec Science Operations Team (SOT) and the NIRSpec GTO Team (, Carniani et al. in prep). Some of the main data reduction steps implemented by the pipeline are pixel-level background subtraction, pixel-to-pixel flat-field correction, absolute flux calibration, slit-loss correction, and eventually 2-dimensional (2D) and 1-dimensional (1D) spectra extraction and co-addition. In this version of the reduction, the final 1D spectra are not extracted from the 2D spectra, but result from the weighted averaging of 1D spectra from all integrations <cit.>. Due to the compact size of our LAEs, slit-loss corrections were applied by modelling it as point-like source. A nominal 3-pixel extraction aperture was used to produce the co-added 1D spectra. A detailed description of the data reduction and spectral extraction methods is given in <cit.> (but see also and )).
ยง.ยง Identification of Lyman-alpha emitters
Lyα emission in the spectra of galaxies in the parent sample was identified through a combination of template fitting <cit.> of the R100 spectra as well as visual inspection of both the R100 and R1000 (G140M) spectra of all confirmed high-redshift galaxies in the parent sample. Using both these methods, we identified 9 candidate LAEs in Deep and 7 candidate LAEs in Medium at z>5.8. We then measure the Lyα line flux by fitting a single Gaussian function to the Lyα emission in both the R100 and R1000 spectra.
The Lyα line in one of the 16 LAEs at z>5.8 presented in this work fell in the detector gap in R1000. With the exception of this galaxy, all visually identified LAEs encouragingly showed clear Lyα emission both in the PRISM and in the G140M spectra. In Figure <ref> we show the full 1D spectrum from PRISM (R100) as well as a zoom-in on the Lyα emission identified in the G140M grating (R1000) for a selection of LAEs in our sample, and the spectra of all LAEs are shown in Appendix <ref>. In Table <ref> we list the exposure times for the LAEs identified in this work and the references where the possible high redshift nature of these objects was first suggested.
Overall, we find that the Lyα line fluxes we measure from the medium resolution grating are systematically higher than the ones measured from PRISM, as can also be seen in Figure <ref>, which is not surprising given the degradation in spectral resolution that PRISM spectra suffer from at shorter wavelengths. Therefore, going forward we use Lyα measurements from the G140M grating, with the exception of one source for which Lyα was in the detector gap.
ยง.ยง Systemic redshifts
Accurate `systemic' redshifts were measured by identifying strong emission lines in the higher resolution grating spectra, which generally consisted of [O II], Hβ, [O III] and Hα. The redshift was derived by fitting single Gaussian functions to the strongest emission lines and using a S/N-weighted combination of the centroids of the fits to obtain the best redshift solution. For the redshift range of our sources, the Hβ, [O III] and Hα lines fell in the G395M grating and the [O II] line fell in the G235M grating spectra. Vacuum wavelengths for all of these strong rest-frame optical lines were used for redshift determination.
We found that on average, the difference between the redshifts derived from the R100 and R1000 spectra was of the order Δz ∼ 0.004, but the redshifts derived from lines in the medium dispersion gratings were found to be consistent with each other. Therefore, the redshifts that we derive and use further in the study are from the medium dispersion gratings, which also have a much narrower line-spread function (LSF) and are more sensitive to narrow emission lines. The source IDs, JADES source names, redshifts and exposure times are given in Table <ref>. From here on, we use the IDs to refer to the objects presented in this paper. The references for the discovery papers of these targets can be found in <cit.>.
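As an illustration of this step, the short sketch below combines per-line redshifts into a single S/N-weighted systemic redshift; the rest-frame vacuum wavelengths are standard values, while the fitted centroids and S/N values are placeholders for a source near z = 6.

import numpy as np

# Rest-frame vacuum wavelengths (Angstrom) of the lines used for the systemic redshift.
rest_wave = {"OII_3727": 3727.09, "Hbeta": 4862.68, "OIII_5007": 5008.24, "Halpha": 6564.61}

# Fitted line centroids in the observed frame (micron) and their S/N: placeholder numbers.
obs_wave_um = {"OII_3727": 2.609, "Hbeta": 3.404, "OIII_5007": 3.506, "Halpha": 4.595}
snr = {"OII_3727": 4.0, "Hbeta": 6.0, "OIII_5007": 25.0, "Halpha": 15.0}

z_line = {k: obs_wave_um[k] * 1e4 / rest_wave[k] - 1.0 for k in rest_wave}
weights = np.array([snr[k] for k in rest_wave])
z_sys = np.average(np.array([z_line[k] for k in rest_wave]), weights=weights)
print(f"z_sys = {z_sys:.4f}")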
ยง.ยง UV magnitudes and slopes
UV magnitudes at rest-frame 1500 Å (M_UV) were measured directly from the R100 PRISM spectra. To do this, the spectra were shifted from the observed to the rest frame using the spectroscopic redshifts, and a 50 Å-wide boxcar filter centred on 1500 Å was used to measure the median flux and error. The measured fluxes and errors were then used to calculate absolute magnitudes and errors. The distribution of the UV magnitudes and Lyα equivalent widths for our sample of LAEs is shown in Figure <ref>. The UV-faint galaxies in our sample show systematically high EW(Lyα), which is likely due to the flux-limited nature of spectroscopic observations, only enabling high EW LAEs to be identified at fainter UV magnitudes.
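A minimal sketch of this M_UV measurement, under the stated assumptions, is given below; the spectrum is taken as arrays of observed wavelength and flux density f_ν, and astropy's Planck18 cosmology is used as a stand-in for the cosmology adopted in the paper.

import numpy as np
from astropy.cosmology import Planck18 as cosmo

def absolute_uv_magnitude(wave_obs_aa, fnu_jy, z, window=50.0):
    """M_UV from the median f_nu inside a 50 A rest-frame boxcar centred on 1500 A."""
    wave_rest = wave_obs_aa / (1.0 + z)
    sel = np.abs(wave_rest - 1500.0) < window / 2.0
    fnu_1500 = np.median(fnu_jy[sel])
    m_ab = -2.5 * np.log10(fnu_1500 / 3631.0)      # apparent AB magnitude
    dm = cosmo.distmod(z).value                    # distance modulus
    k_corr = 2.5 * np.log10(1.0 + z)               # bandwidth term for f_nu-based magnitudes
    return m_ab - dm + k_corr

# Example with synthetic inputs (flat f_nu of 20 nJy at z = 6.5):
wave = np.linspace(0.6e4, 5.3e4, 500)              # observed wavelength in Angstrom
fnu = np.full_like(wave, 20.0e-9)                  # flux density in Jy
print(absolute_uv_magnitude(wave, fnu, z=6.5))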
To put our sample into perspective, we also show measurements from other LAEs at zโณ6 identified using JWST <cit.> or ground-based observations <cit.> in the Figure. Very clearly, the LAEs presented in this work have much fainter UV magnitudes compared to other LAEs at zโณ6 in the literature.
UV slopes (β, where f_λ ∝ λ^β) are also measured directly from the R100 spectra by fitting a power-law function, using chi-squared minimization, to the flux density in the wavelength range 1340 Å to 2600 Å, using the <cit.> spectral windows to avoid strong emission and/or absorption features at rest-UV wavelengths. The redshifts, UV magnitudes at 1500 Å and observed UV slopes are given in Table <ref>.
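A sketch of the β fit is shown below: a power law f_λ = A(λ/1500 Å)^β is fitted by chi-squared minimization over the rest-UV range. For brevity the example fits the full 1340-2600 Å interval rather than the individual spectral windows cited in the text, which is an assumption of this illustration.

import numpy as np
from scipy.optimize import curve_fit

def powerlaw(wave, amp, beta):
    return amp * (wave / 1500.0) ** beta

def fit_uv_slope(wave_rest_aa, flam, flam_err):
    """Fit f_lambda = A * (lambda / 1500 A)^beta over 1340-2600 A (rest frame)."""
    sel = (wave_rest_aa > 1340.0) & (wave_rest_aa < 2600.0)
    popt, pcov = curve_fit(powerlaw, wave_rest_aa[sel], flam[sel],
                           sigma=flam_err[sel], p0=(np.median(flam[sel]), -2.0))
    return popt[1], np.sqrt(pcov[1, 1])            # beta and its 1-sigma uncertainty

# Example with a synthetic beta = -2.3 spectrum plus noise:
rng = np.random.default_rng(0)
wave = np.linspace(1300.0, 2700.0, 200)
true = 1e-20 * (wave / 1500.0) ** -2.3
err = 0.05 * true
print(fit_uv_slope(wave, true + rng.normal(0.0, err), err))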
ยง.ยง Lyman-alpha velocity offsets
Using accurate systemic redshifts from the medium resolution grating spectra, we then use the peak of the Lyα line detected in the G140M grating spectra of our LAEs to calculate velocity offsets from the expected position of Lyα (vacuum wavelength) at the systemic redshift. As mentioned earlier, the wavelength calibration of the different gratings, when compared against the lower resolution PRISM spectra, was noted to be slightly inconsistent, but the wavelengths across the grating spectra were all consistent with each other <cit.>. Therefore, inferring the observed velocity offset of Lyα from the G140M spectra should not be affected by systematic offsets.
We trialled two methods to estimate the velocity shifts and errors. The first method involved obtaining the centroid of the Lyα line by fitting a single Gaussian function, with the error on the centroid (which takes into account pixel-by-pixel uncertainties) giving the error on the velocity offset. However, as Lyα emission from high redshift galaxies often appears to be asymmetric due to absorption of the blue wing by both the ISM and the neutral IGM, the single Gaussian fit did not always accurately coincide with the observed peak of Lyα emission. Therefore, the second method we employed involved calculating the offset from the peak pixel of the emission line, with the uncertainty on this measurement being the width of the peak pixel. If the S/N of the emission line is high enough, the peak pixel should always trace the peak of the LSF-deconvolved emission line as well, thereby returning the most accurate empirical measurement of the velocity offset.
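The peak-pixel method can be sketched as follows; the Lyα rest-frame vacuum wavelength is the standard value, while the input arrays and the systemic redshift are placeholders.

import numpy as np

C_KMS = 2.998e5
LYA_REST = 1215.67  # Lyman-alpha vacuum wavelength in Angstrom

def lya_velocity_offset(wave_obs_aa, flux, z_sys):
    """Velocity offset (km/s) of the Ly-alpha peak pixel relative to the systemic redshift."""
    peak_idx = np.argmax(flux)
    wave_peak = wave_obs_aa[peak_idx]
    wave_sys = LYA_REST * (1.0 + z_sys)
    dv = C_KMS * (wave_peak - wave_sys) / wave_sys
    # Uncertainty: the width of the peak pixel expressed in velocity.
    pix_width = np.gradient(wave_obs_aa)[peak_idx]
    dv_err = C_KMS * pix_width / wave_sys
    return dv, dv_err

# Example: a Ly-alpha line observed ~250 km/s redward of systemic at z_sys = 6.3.
wave = np.linspace(8850.0, 8930.0, 120)
flux = np.exp(-0.5 * ((wave - 8881.7) / 3.0) ** 2)
print(lya_velocity_offset(wave, flux, z_sys=6.3))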
ยง.ยง Other emission line measurements
The rest-frame optical emission line fluxes for all LAEs are measured from the higher spectral resolution grating spectra, unless the lines are not clearly detected in the grating, in which case we measure and report the line fluxes from the PRISM spectra. The main emission lines that we measure for our sample of LAEs are [O II]λλ3726,3729 (which appear to be blended), Hβ, [O III]λλ4959,5007 and Hα.
We once again fit single Gaussian functions to all of these lines, measuring the local continuum from a wavelength region adjacent to each emission line. Using these line fluxes we also calculate line ratios such as [O III]λ5007 / [O II]λλ3726,3729 (O32) and ([O II]λλ3726,3729 + [O III]λλ4959,5007) / Hβ (R23).
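For clarity, the two ratios follow directly from the measured line fluxes, as in the short sketch below (the flux values are placeholders in arbitrary, consistent units).

# Placeholder, dust-uncorrected line fluxes in consistent (arbitrary) units.
f_oii_3727 = 1.0       # [O II] 3726,3729 (blended)
f_hbeta = 2.0          # H-beta
f_oiii_4959 = 2.5      # [O III] 4959
f_oiii_5007 = 7.5      # [O III] 5007

o32 = f_oiii_5007 / f_oii_3727
r23 = (f_oii_3727 + f_oiii_4959 + f_oiii_5007) / f_hbeta
print(f"O32 = {o32:.2f}, R23 = {r23:.2f}")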
ยง.ยง Dust measurements from Balmer decrements
Here we use the Balmer emission line decrements calculated from Hα/Hβ (or Hγ/Hβ when Hα is not within the spectral coverage). We calculate the intrinsic ratios of these line fluxes using pyneb <cit.>, assuming a temperature of 10^4 K and an electron density of 100 cm^-3. We assume the dust attenuation curve of the Small Magellanic Cloud (SMC; <cit.>), which has been shown to be the most appropriate for high redshift galaxies <cit.>.
Dust attenuation, E(B-V), is then calculated by comparing the observed Balmer line ratios with the intrinsic ones. We note that the Hβ line is detected for all but one galaxy in our sample, and therefore we primarily use the observed Hα/Hβ ratios to calculate dust attenuation across our sample, but for z>7 LAEs, where Hα moves out of NIRSpec coverage, we use Hγ/Hβ.
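A minimal sketch of the Balmer-decrement step is given below. The intrinsic Hα/Hβ ratio of 2.86 corresponds to the assumed Case-B conditions, while the attenuation-curve values k(Hβ) and k(Hα) are left as inputs because the exact SMC curve normalisation is not restated here; the numbers in the example call are placeholders.

import math

def ebv_from_balmer(f_ha, f_hb, k_hb, k_ha, intrinsic_ratio=2.86):
    """E(B-V) from an observed Halpha/Hbeta flux ratio.

    Uses E(B-V) = 2.5 / (k_Hb - k_Ha) * log10(R_obs / R_int), where k_Hb and k_Ha are
    the attenuation-curve values at the Hbeta and Halpha wavelengths (curve-dependent inputs).
    """
    r_obs = f_ha / f_hb
    ebv = 2.5 / (k_hb - k_ha) * math.log10(r_obs / intrinsic_ratio)
    return max(ebv, 0.0)  # ratios below the intrinsic value are treated as no dust

# Example with placeholder fluxes and illustrative k values:
print(ebv_from_balmer(f_ha=3.1, f_hb=1.0, k_hb=3.6, k_ha=2.4))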
ยง.ยง Ionizing photon production efficiency
We use the Hα flux (or Hβ when Hα is not within the spectral coverage) and the monochromatic luminosity at 1500 Å to calculate the ionizing photon production efficiency, ξ_ion, given by
ξ_ion [erg^-1 Hz] = N(H^0) / L_1500,int
where N(H^0) is the intrinsic hydrogen ionizing photon production rate in units of s^-1 and L_1500,int is the intrinsic (dust-corrected) luminosity density at rest-frame 1500 Å in units of erg s^-1 Hz^-1.
The Hα (or other Balmer line) luminosity can be used to calculate the intrinsic ionizing photon production rate. Assuming T_e = 10^4 K and n_e = 100 cm^-3,
N(H^0) × (1 - f_esc) = 7.3 × 10^11 L(Hα)
where f_esc is the escape fraction of LyC photons out of the galaxy <cit.>. To calculate ξ_ion for our LAEs, we assume Case-B recombination, i.e. f_esc(LyC) = 0 (see Section <ref>, however, for a discussion about non-zero f_esc(LyC)).
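The two equations above translate directly into the following sketch, which assumes f_esc(LyC) = 0 and takes a dust-corrected Hα luminosity and an intrinsic UV luminosity density as inputs (the example numbers are placeholders).

import math

def xi_ion(l_halpha, l_1500_int, fesc_lyc=0.0):
    """log10(xi_ion / (erg^-1 Hz)) from a dust-corrected H-alpha luminosity (erg/s)
    and an intrinsic UV luminosity density at 1500 A (erg/s/Hz)."""
    n_h0 = 7.3e11 * l_halpha / (1.0 - fesc_lyc)   # ionizing photon production rate (s^-1)
    return math.log10(n_h0 / l_1500_int)

# Example: L(Halpha) = 5e42 erg/s and L_1500 = 1e29 erg/s/Hz gives log(xi_ion) ~ 25.56.
print(xi_ion(5e42, 1.0e29))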
ยง.ยง Lyman-alpha escape fractions
We now use the strength of the Balmer emission lines seen in the spectrum, together with the inferred dust attenuation, to derive a Lyα escape fraction for all LAEs in our sample. Assuming Case-B recombination, n_e = 100 cm^-3 and T_e = 10,000 K, the intrinsic Lyα/Hα ratio is 8.2 <cit.>. We then calculate f_esc(Lyα) as the ratio of the observed Lyα to (dust-corrected) Balmer line emission, relative to the intrinsic ratio, which for the Hα emission line looks like: f_esc(Lyα) = L(Lyα)/(8.2 × L(Hα)).
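The corresponding calculation is a one-liner; the sketch below uses placeholder luminosities and caps the result at unity.

def fesc_lya(l_lya_obs, l_halpha_corr, intrinsic_ratio=8.2):
    """Ly-alpha escape fraction from the observed Ly-alpha and dust-corrected H-alpha luminosities."""
    return min(l_lya_obs / (intrinsic_ratio * l_halpha_corr), 1.0)

# Example: L(Lya) = 1.5e43 erg/s and L(Halpha) = 5e42 erg/s -> fesc ~ 0.37.
print(fesc_lya(1.5e43, 5e42))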
The observed Lyα properties, which include line fluxes, equivalent widths, velocity offsets from the systemic redshift and the escape fraction f_esc(Lyα), are given in Table <ref>.
ยง.ยง Comparison samples from the literature
To put our results into a more global context while also increasing the baseline of several physical parameters that were also measured from our sample of faint LAEs, we describe here a selection of literature samples of LAEs at zโณ6 with which we compare our results.
Perhaps the most immediate comparison is offered by LAEs identified by <cit.> using JWST spectroscopy through the CEERS survey <cit.>. We also include CEERS results from <cit.> in this study. Since CEERS is shallower and wider than JADES, it more efficiently selects the rarer UV-bright galaxies by probing a much larger volume at high redshifts.
We also use the compilation of zโณ6 LAEs from <cit.> that have ALMA emission line measurements, enabling robust measurements of the velocity offsets. This compilation includes LAEs from the ALMA REBELS survey <cit.> as well as other LAEs at z>6: CLM1 <cit.>, WMH5 <cit.>, B14-65666 <cit.>, EGS-zs8-1 <cit.>, COS-z7-1 <cit.>, COSMOS24108 <cit.>, NTTDF6345 <cit.>, UDS16291 <cit.>, BDF-3299 <cit.>, RXJ2248-ID3 <cit.>, A383-5.2 <cit.>, VR7 <cit.>, CR7 <cit.> and Himiko <cit.>.
We also include Lyα and Hα based measurements from <cit.> as well as <cit.>, which use narrow/medium band photometry to infer Hα strengths in spectroscopically confirmed LAEs at z∼6 identified from MUSE data, enabling the determination of f_esc(Lyα) and ξ_ion.
Finally, we also use the Lyα emission measurements of GN-z11, spectroscopically confirmed to lie at z=10.60 with weak Lyα emission detected in the medium band NIRSpec gratings <cit.>.
ยง SPECTROSCOPIC PROPERTIES OF LYMAN-ALPHA EMITTERS AT Zโณ5.8
In this section we explore the general spectroscopic properties of LAEs identified in the JADES Deep and Medium Tier surveys, with the aim of comparing the ionization and chemical enrichment of LAEs with the general galaxy population at zโณ6 as well as evaluating the ionizing photon production efficiencies of LAEs across cosmic time.
ยง.ยง Chemical enrichment and dust
In Figureย <ref>, we show R23 vs. O32 line ratios for our Deep- and Medium-tier samples of LAEs. These line ratios are widely used tracers of metallicity and ionisation parameter respectively, with the former forming a two-valued relation with metallicity.
For comparison, we show z<0.1 galaxies from the SDSS MPA-JHU catalogs <cit.>[<https://www.sdss3.org/dr10/spectro/galaxy_mpajhu.php>], as well as non-Lyman-ฮฑ-emitting galaxies at z>5.5 from JADES Deep (), and measurements from individual and stacked galaxies at zโณ5 not selected on presence or otherwise of <cit.>.
Our LAEs on average appear to be metal poor with high ionization parameters, lying away from the locus of typical star-forming galaxies at z<0.1 from SDSS toward high O32 and R23. Instead, they are more similar to what has been reported for the general galaxy population at z>6.
We do note that LAEs from our Deep Tier survey, which tend to have fainter UV magnitudes (โฒ-20.1), show slightly higher O32 ratios and lower R23 ratios compared to LAEs found in the Medium Tier survey as well as other brighter LAEs at z>6. This is consistent with the finding in <cit.> that zโผ6 galaxies from deep JADES observations show much higher O32, and lower R23 than those measured from stacks of CEERS galaxies <cit.>, which are typically brighter.
This is indicative of higher ionization parameters and lower chemical enrichment in fainter, less massive galaxies at z>6.
Overall we find that the parameter space on this plot occupied by LAEs at z≳6 is roughly the same as that of the general galaxy population at these redshifts. This suggests that the detection of Lyα emission from a galaxy in the EoR may not necessarily depend on the chemical or ionization state that it is in, but may be driven more by opportune sight-lines probing sufficiently ionized regions of the Universe.
We do not measure any presence of dust from the Balmer decrements derived from Hα/Hβ ratios for our LAEs (in agreement with Sandles et al. in prep), which suggests that such systems are relatively dust-free, which is also a prerequisite for the leakage of significant fractions of Lyman continuum photons from a galaxy into the IGM.
ยง.ยง Ionizing photon production
The average ionizing photon production efficiency across our sample of faint LAEs is log(ξ_ion/erg^-1 Hz) = 25.56, which is shown as a dashed line in Figure <ref>. When comparing with other measurements for LAEs at the highest redshifts in the literature, we do not see significant evolution in ξ_ion as a function of redshift at z>6.
Interestingly, there does not seem to be any strong dependence of ξ_ion on the equivalent width of Lyα emission either. Assuming Case B recombination and a f_esc(Lyα) of unity, ξ_ion may be expected to increase linearly with EW(Lyα). The fact that there is no clear correlation between these two quantities across a wider sample of known LAEs at z>6, spanning orders of magnitude in brightness, suggests that the mechanisms that are responsible for the production of ionizing photons in a galaxy are not the ones that also control the escape of Lyα photons from the galaxy. In other words, the neutral gas and dust content, which preferentially affects the transmission of Lyα photons, does not seem to depend closely on properties such as stellar metallicities or ages that control the production of ionizing photons.
Combined with the lack of redshift evolution in ξ_ion, the picture that emerges is that the ionizing photon production is not closely linked to the strength of the emergent Lyα line emission, as it is likely more dependent on the physical and chemical properties of star-forming regions, which do not seem to evolve strongly between z=6-8.5. This may have important consequences for modelling the production and escape of ionizing photons from galaxies within the reionization epoch, which we will revisit in Section <ref>. The chemical enrichment and ionizing properties of the LAEs in this study are given in Table <ref>.
ยง WHAT GOVERNS THE ESCAPE OF LYMAN-ALPHA PHOTONS?
In this section we explore which galaxy property best traces the Lyα velocity offset from the systemic redshift as well as the escape fraction of Lyα photons, f_esc(Lyα), which are widely regarded as tracing escape channels for hydrogen ionizing LyC photons. The goal of this section is to determine the best tracer for LyC leakage when Lyα emission may not be visible from galaxies in the reionization era.
ยง.ยง Low Lyman-alpha velocity offset leads to high Lyman-alpha escape fraction
We begin by demonstrating that the escape fraction of Lyα photons is strongly anti-correlated with the velocity offset of the Lyα emission from the systemic redshift of a galaxy, as shown in Figure <ref>. This anti-correlation can be explained using neutral gas column densities: a low column density of neutral gas will lead to less scattering of Lyα photons out of the line of sight, thereby resulting in both a high observed f_esc(Lyα) and a low velocity offset from systemic. Low neutral gas density environments are also thought to be conducive to the escape of LyC photons from a galaxy (at least along the same line of sight as Lyα). Therefore, both Lyα velocity offsets and escape fractions can be important for ascertaining the escape of ionizing photons from the galaxies that drive reionization.
In the following sections we explore correlations between various galaxy properties and each of the Lyα velocity offset and f_esc(Lyα), to establish dependencies and/or observational biases that impact the Lyα strength and line profile in LAEs at z≳6.
ยง.ยง Insights from Lyman-alpha velocity offsets
In Figure <ref> we compare the velocity offsets observed for our sample of galaxies with other observables that trace both the underlying stellar populations as well as the state of the ISM. We also include measurements of brighter LAEs in the EoR from the literature to increase the baseline of any trends that may become apparent.
We find that the Lyα velocity offsets anti-correlate with EW(Lyα), as shown in Figure <ref> (top-left), which has previously been reported in the literature across redshifts <cit.>. This anti-correlation mainly stems from the resonant scattering of Lyα photons by the neutral gas within the galaxies: a higher velocity offset from the systemic redshift is indicative of more resonant scattering of the emergent Lyα photons, which results in decreased Lyα flux observed along the line of sight. Therefore, the same scattering mechanism is responsible both for the increased offset from the systemic velocity and for the reduction of EW(Lyα) across galaxies.
With high EW Lyα emission that peaks close to the systemic redshift likely tracing low covering fractions of neutral gas, galaxies that exhibit such profiles and strengths are also likely to be leaking significant amounts of LyC photons <cit.>. For the sample of UV-faint LAEs probed by the JADES sample presented in this paper, we find the equivalent widths to be higher and the velocity offsets to be lower compared to UV-bright LAEs in the literature (Figure <ref>, top-right), which may indicate that UV-fainter LAEs are more likely to host the conditions required for efficient Lyα as well as LyC escape, which we explore in detail in the later sections.
Particularly within the context of the ionized bubbles within which LAEs in the EoR must reside to be able to freely transmit Lyα photons along the line of sight, smaller velocity offsets are also expected to trace large ionized bubble sizes <cit.>, which would lead to considerably less attenuation by the intervening IGM and, therefore, higher transmission of Lyα, leading to high equivalent width measurements.
Comparing the Lyα velocity offset with spectroscopic indicators of the ionization parameter (i.e. the ratio of ionizing photons to the hydrogen density), probed by indicators like the O32 ratio, we do not find any strong correlations between the two quantities (Figure <ref>, bottom-left). A high O32 ratio has been proposed as an indicator of both high Lyα and LyC escape <cit.>, and we do find that our UV-faint LAEs with small velocity offsets on average show high O32 ratios. However, given the lack of a strong correlation between O32 and velocity offset, we conclude that a high O32 ratio is a necessary but not a sufficient condition for efficient ionizing photon escape <cit.>.
We similarly do not find any strong correlation between the Lyα velocity offset and EW(Hβ), which is a good tracer of star-formation rates and consequently of the production rates of hydrogen ionizing photons (Figure <ref>, bottom-right), and has been proposed as a robust indicator of f_esc(LyC) <cit.>. The requirement of a relatively high EW(Hβ) is perhaps similar in nature to the requirement of high O32 from LyC leaking galaxies: necessary but insufficient on its own to enable efficient LyC escape.
ยง.ยง Insights from the Lyman-alpha escape fraction
We now explore the dependence of f_esc(Lyα), measured directly from the spectra, on other galaxy properties, and show the dependence of f_esc(Lyα) on EW(Lyα), M_UV, O32 and EW(Hβ) in Figure <ref>.
We find a strong correlation between f_esc(Lyα) and EW(Lyα) (Figure <ref>, top-left). Since f_esc(Lyα) is calculated using the observed ratio of Lyα to Hα (or Hβ) emission, this strong correlation suggests that the Hα (or Hβ) line fluxes do not scale in proportion with the Lyα emission in LAEs with higher EW Lyα emission.
We note that the observed f_esc(Lyα) (as well as the EW) increases consistently towards fainter UV magnitudes (top-right), which may be indicative of decreasing neutral gas covering fractions that potentially play a more important role in dictating the strength of the observed Lyα emission than the intrinsic production of ionizing photons. However, we note that the lack of low f_esc(Lyα) (or low EW(Lyα)) detections from the faintest galaxies may also be a consequence of the flux limited nature of the spectroscopic data used in this study. It is also worth noting that the lack of high f_esc(Lyα) observations from the UV-brightest galaxies is once again indicative of increasing neutral gas fractions in more luminous/massive systems, which likely attenuates and/or scatters the Lyα flux along the line of sight.
Comparing with O32 and EW(Hβ), we find that neither of these quantities correlates strongly with f_esc(Lyα) across our sample. We note that high O32 ratios and/or EW(Hβ) are perhaps needed to have a higher chance of observing high f_esc(Lyα), but there is considerable scatter in the O32 ratios that we measure for our LAEs, with some LAEs showing O32 < 3. Therefore, the O32 ratio may not be a good predictor of the expected f_esc(Lyα) (and consequently f_esc(LyC)) from galaxies in the reionization era.
ยง.ยง Role of the IGM in attenuating Lyman-alpha emission at z>6
Finally in this section, we look at the evolution of () with redshift, focusing particularly on zโณ6 where the IGM is expected to play a dominant role in attenuating emission, unless the LAEs live in large ionized bubbles. In Figure <ref> we show () as a function of redshift, colour-coded by M_UV for our LAEs along with others known at zโณ5.8. A sharp decrease in the () is clearly apparent.
It is also interesting to note that at any given redshift, UV-fainter galaxies exhibit higher (), as we had previously noted. The question that arises from this is, what is driving the decrease of () with redshift? Is the increasing neutrality of the IGM with redshift more dominant than the observational biases associated with being able to only observe UV-bright galaxies at high redshifts from flux limited studies?
To explore this effect, in Figure <ref> we show the () as a function of redshift for only the UV-brightest galaxies with M_UV < -19.5 (arbitrarily chosen) to study the evolution in a more flux complete sample of LAEs across a large redshift baseline. This time around, we colour code the data points with velocity offset, which is a good proxy for the size of the ionized bubble around the LAE.
Figure <ref> clearly shows that for UV-bright galaxies at z>6, the decrease in () is accompanied by an increasing velocity offset at the highest redshifts, indicating that the decline of EW() seen with redshift in UV-bright LAEs is driven by the reduction in the sizes of the ionized bubbles traced by the velocity offset from systemic redshift. This demonstrates that increased IGM attenuation at the highest redshifts is playing an important role in the observed evolution of EW(), at least within the UV-brightest sample, by attenuating the emergent flux and the measured .
Therefore, a combination of selection effects as well as increasingly neutral IGM, which manifests itself as smaller ionized bubbles around LAEs at the highest redshifts play an important role in regulating (). Estimates on the sizes of ionized regions around JADES LAEs have been presented in a companion paper (Witstok et al. submitted) and offer a powerful probe of the spatial as well as temporal evolution of the IGM neutral fraction in this field.
The comparisons we have presented in this section demonstrate that to use emission to infer significant LyC photon leakage from galaxies in the reionization era, both high () and low velocity offsets compared to systemic redshift are required. We have found that the dependence of both quantities on other spectroscopic and photometric galaxy properties are filled with complexity and are impacted by observational biases (most importantly the flux limited nature of spectroscopic observations in a field). However, the detection of emission from a z>6 galaxy is a powerful probe nonetheless at identifying LyC leakage, as has also been noted at lower redshifts <cit.>.
In the next section we attempt to move beyond a simple () and use all of the available spectroscopic and photometric indicators to estimate (LyC), which is the quantity that is needed to capture the contribution of galaxies to the reionization budget of the Universe at z ≳ 6.
ยง IMPLICATIONS FOR LYC PHOTON PRODUCTION, ESCAPE AND REIONIZATION
Although the presence of strong emission peaking close to the systemic velocity has been used to infer high LyC escape fractions <cit.>, the physics that control the escape of LyC photons from star-forming galaxies are much more complicated <cit.>. The neutral gas content within a galaxy, in particular, can affect the and LyC photons differently, which combined with the line-of-sight dependence of both and LyC photon escape can often complicate the inference of LyC photon escape from alone. For example, one of the most well-studied LyC leakers, Ion1 at z≈3.8 <cit.>, actually does not show any emission, which demonstrates the complex relationship between and LyC photons.
Therefore, in this section we fold in other photometric and spectroscopic properties of our faint LAEs to make a more informed inference on the LyC escape fractions. Several observational studies as well as simulations have attempted to connect the leakage of LyC photons to spectroscopic properties. Some of the most exciting observational results linking LyC leakage to galaxy properties are being delivered by the Low-z Lyman Continuum Survey (LzLCS; ). State-of-the-art high-resolution cosmological simulations like sphinx^20 are also now being used to study the dependence of LyC photon escape on galaxy properties <cit.>, which offers much more control on the sample sizes and selection functions when attempting to use observations of galaxy properties to predict (LyC).
Here we use the relationship between (LyC) and galaxy properties derived by <cit.> to predict (LyC) for our sample of LAEs. <cit.> specifically focused on observables that trace conditions within galaxies that enable both the production as well as escape of LyC photons. Briefly, these conditions mainly require the galaxies to have (i) relatively high star-formation rates (sSFR >10^-9 yr^-1); (ii) stellar ages in the range 3.5-10 Myr, a time long enough for the first generation of supernovae to have cleared out channels in the ISM for LyC escape, while short enough that UV photons are still being produced in abundance by the stellar population, and (iii) low dust and neutral gas content.
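As a toy illustration of these selection conditions (a minimal sketch of our own; the sSFR and age limits are the ones quoted above, while the E(B-V) cut is an assumed placeholder for "low dust content"), a simple check in Python might look like:

def meets_leaker_conditions(ssfr_per_yr, stellar_age_myr, ebv,
                            ebv_max=0.1):  # assumed placeholder threshold for "low dust"
    """Rough check of the three qualitative conditions described above."""
    high_ssfr = ssfr_per_yr > 1e-9            # (i) sSFR > 10^-9 yr^-1
    right_age = 3.5 <= stellar_age_myr <= 10  # (ii) SNe have cleared channels, UV still bright
    low_dust = ebv < ebv_max                  # (iii) proxy for low dust and neutral gas content
    return high_ssfr and right_age and low_dust

# Example: a young, dust-poor, strongly star-forming galaxy satisfies all three conditions
print(meets_leaker_conditions(5e-9, 6.0, 0.02))  # True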
Using these criteria, <cit.> report a six-parameter equation to predict the angle-averaged (and not sight-line dependent) (LyC) based on observed galaxy properties. These parameters include the UV slope, β, dust attenuation E(B-V) (typically measured from the Balmer line decrement), line luminosity, M_UV, R23 and O32. All of these parameters have been observed for our faint LAEs from JADES, which makes predicting (LyC) using Equation (4) from <cit.> relatively straightforward. We note that to predict the angle-averaged (LyC), the properties of the emission line, which can often be highly sight-line dependent, are not taken into account by <cit.> when estimating (LyC) <cit.>. The calculated (LyC) for our LAEs are given in Table <ref>.
In Figure <ref> we show (LyC) calculated using observed galaxy properties compared with () measured directly from the spectra (left) and the velocity offset from the systemic (right) from our faint LAEs as well as those that we calculate using observed quantities from <cit.>. We find that the predicted (LyC) remains below () for all our LAEs, but two LAEs from <cit.> have higher (LyC) compared to ().
In general we do not find any strong correlation between (LyC) and (). We do find, qualitatively, that (LyC) anti-correlates with velocity offset, but we note that a low velocity offset does not guarantee a high (LyC) and that there is considerable scatter in the plot. From a sample of low redshift LyC leakers, <cit.> also found that (LyC) tends to always be lower than (), and that (LyC) weakly anti-correlates with velocity offset from systemic, consistent with our findings.
We do, however, note the lack of any LAE with high velocity offsets showing high (LyC), which seems to suggest that low velocity offsets are a necessary but not a sufficient condition to enable high LyC photon escape, mainly tracing the absence of high column density neutral gas, and that galaxies that show large velocity offsets compared to systemic likely trace highly dense neutral gas conditions, which may not be conducive for significant LyC escape fractions.
Before assessing the co-dependence of ionizing photon production and escape from our faint LAEs, we note that when calculating for our LAEs using Equation (<ref>) in Section <ref> we assumed (LyC) to be zero. However, with (LyC) predictions for our LAEs, we now calculate , which is corrected for the fraction of ionizing photons that escape out of the galaxy, thereby not contributing towards line emission. In this section going forward, we use the corrected value, .
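The exact correction equation is not reproduced here, but the following minimal sketch shows the commonly used form of this correction, in which the Case B-based value is divided by the fraction of ionizing photons that do not escape; whether this matches the paper's exact expression is an assumption on our part.

import numpy as np

def xi_ion_corrected(xi_ion_case_b, fesc_lyc):
    """Correct a Case B-based xi_ion (erg^-1 Hz) for escaping LyC photons.

    xi_ion_case_b : value inferred from Balmer lines assuming f_esc(LyC) = 0
    fesc_lyc      : predicted LyC escape fraction (0 <= fesc_lyc < 1)
    Assumes the standard correction xi_ion_corr = xi_ion / (1 - f_esc(LyC)).
    """
    fesc_lyc = np.clip(fesc_lyc, 0.0, 0.99)  # guard against unphysical values
    return xi_ion_case_b / (1.0 - fesc_lyc)

# Example: log(xi_ion) = 25.5 with a predicted 10% LyC escape fraction
print(np.log10(xi_ion_corrected(10**25.5, 0.10)))  # ~25.55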
We now explore the dependence of (LyC) on the corrected ionizing photon production efficiencies in Figure <ref>. Interestingly, we find that sources with the highest (LyC) do not necessarily show high values of . There is instead a mild anti-correlation between the two quantities, which may not be entirely unexpected: when non-negligible fractions of ionizing photons begin escaping from the galaxy, there are fewer photons available to produce the Balmer line (as well as strong nebular line) emission <cit.>. This would lead to low values being inferred when simply assuming Case-B recombination, which can consistently explain the observed mild anti-correlation.
Another important effect that may be driving the scatter between (LyC) and could be the expected time delays between significant production of ionizing photons and the emergence of escape channels that facilitate the escape of those photons. As noted in <cit.> (but see also ), for a burst of star formation it is not until ∼3.5 Myr after the starburst is triggered that supernovae begin to clear channels in the ISM that allow significant LyC photon escape. Very early on in the starburst, there is a very high production rate of ionizing photons, but these photons are unable to escape out of the H ii regions. Therefore, the age of the starburst and the time delay between the peak of ionizing photon production and the emergence of escape channels may lead to the observed scatter.
Finally, we explore the dependence of the product of the ionizing photon production efficiency and the ionizing photon escape fraction ((LyC) × ), which is an important quantity needed to assess the contribution of individual star-forming galaxies to the reionization budget of the Universe. Since this study has been limited only to strong LAEs at z ≳ 6, here we explore the dependence of the ionizing photon output into the IGM as a function of UV magnitude, EW() and redshift, which we show in Figure <ref>.
We begin by assessing the dependence of log((LyC) × ) on UV magnitude, finding that it increases very mildly (with a large scatter) with decreasing M_UV as shown in Figure <ref> (top-left), following the linear relation:
log(f_esc × ξ_ion^corr / erg^-1 Hz) = 0.03 (± 0.01) M_UV + 25.04 (± 1.87)
This lack of strong correlation for LAEs clearly disfavours enhanced ionizing photon output from UV-fainter LAEs, and may have important consequences for models of reionization.
We further find that the ionizing photon output from LAEs also remains roughly constant over a range of EW(), best fit with the linear relation:
log(f_esc × ξ_ion^corr / erg^-1 Hz) = 3.1×10^-4 (± 1.7×10^-6) (EW(Lyα) / Å) + 24.36 (± 0.02)
The little to no dependence of log( × ) on the equivalent width of emission is shown in Figure <ref> (top-right; note that the x-axis is in log scale), implying that the observed line strength alone is not a good independent indicator of the ionizing photon output from LAEs in the epoch of reionization.
Finally, since our LAEs (combined with those from ) cover a large redshift baseline, we also derive the dependence of the ionizing photon output of LAEs with redshift, finding the best-fitting relation:
log(f_esc × ξ_ion / erg^-1 Hz) = 0.04 (± 0.01) z + 24.08 (± 0.53)
Once again, we find that the ionizing photon output of LAEs remains roughly constant at z ≳ 6 as shown in Figure <ref> (bottom). This is consistent with a picture whereby the production and escape of ionizing photons may be dictated by physical processes that operate on shorter timescales (i.e., intense star formation or SNe activity).
The best-fitting relations of ionizing photon output with UV magnitude EW and redshift can be used to estimate the total number of ionizing photons contributed by LAEs at a given redshift, depending on the space density of LAEs. Since the increasing neutrality of the IGM makes it impossible to obtain a complete luminosity function at z>6, certain assumptions about the evolution of the luminosity function may need to be made <cit.>.
Alternatively, if a good handle on the emitter fraction (unaffected by the IGM attenuation) can be obtained at z>6 by extrapolating from fractions measured at lower redshifts <cit.>, then our best-fitting relations can also be used to estimate the total ionizing photon contribution towards the reionization budget from LAEs at z>6. Any dependence of the LAE fraction on UV magnitude must also be taken into account for this calculation <cit.>.
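For reference, the best-fitting relations above can be evaluated directly; the following minimal Python sketch computes log(f_esc × ξ_ion^corr) from M_UV, EW(Lyα) or redshift using the quoted central coefficients (the function names are our own, and the quoted uncertainties are ignored here).

def log_fesc_xi_from_muv(m_uv):
    """log10(f_esc * xi_ion^corr / (erg^-1 Hz)) as a function of M_UV."""
    return 0.03 * m_uv + 25.04

def log_fesc_xi_from_ew(ew_lya_angstrom):
    """Same quantity as a function of the Lyman-alpha equivalent width in Angstrom."""
    return 3.1e-4 * ew_lya_angstrom + 24.36

def log_fesc_xi_from_z(redshift):
    """Same quantity as a function of redshift."""
    return 0.04 * redshift + 24.08

# Example: an M_UV = -19.5 LAE with EW(Lya) = 100 A observed at z = 7
print(log_fesc_xi_from_muv(-19.5), log_fesc_xi_from_ew(100.0), log_fesc_xi_from_z(7.0))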
ยง CONCLUSIONS
In this study we have presented detailed properties of 16 faint emitting galaxies at z>5.8 from the JWST Advanced Deep Extragalactic Survey (JADES) Deep and Medium Tier NIRSpec MSA surveys. These new emitters, spanning absolute UV magnitudes of -17.0 to -20.6, are generally fainter compared to LAEs that were previously known in or near the epoch of reionization, opening up a new window into studying the properties of faint galaxies that reside within ionized bubbles in the reionization era.
Using measurements directly from the low resolution (R100) PRISM as well as medium resolution (R1000) grating spectra, we report the detection of other rest-frame optical emission lines such as , , and . The detection of these lines enables a reliable measure of their spectroscopic redshift against which the velocity offsets of emission can be accurately measured. In general, these LAEs have blue rest-UV spectral slopes (-2.1 to -2.7) and little to no dust measured from Balmer decrements.
Using rest-optical line ratios, we find that our LAEs appear to be metal poor with high ionization parameters, properties that are typical of JWST-detected faint star-forming galaxies at z>6. These properties combined with steep UV slopes and no dust indicate that all of our LAEs are young, star-forming systems. We further measure the ionizing photon production efficiencies () directly from Balmer line emission and find that our LAEs on average have log(/Hz erg^-1) ≈ 25.56, which does not seem to evolve strongly with redshift. We also do not find a strong dependence of on the strength of emission.
Using the escape fraction (calculated using Balmer line emission) and the velocity offset of the peak of the line compared to systemic redshift, we study the galaxy properties that govern the escape of photons from reionization era galaxies. We note that the escape fraction of photons is anti-correlated with the velocity offset as well as equivalent width, consistent with expectations from emission models as well as high-resolution galaxy simulations employing radiative transfer to track the escape of emission.
We also find that LAEs that are fainter in the UV show higher escape fractions, although this could be attributed to the flux limited nature of our spectroscopic surveys. We do not find strong correlations between escape fraction or velocity offset with key ISM indicators such as / ratios or equivalent widths. We conclude that the escape of emission is a complicated process, and may not necessarily depend strongly on the state of the ISM and stellar populations at any given time, especially at z>6 when the IGM attenuation also plays an important role.
We find a gradual decrease in escape fractions with redshift, indicative of the increasing role of IGM attenuation in diminishing strengths at the highest redshifts. By making a UV cut to remove selection effects, we find that the escape fraction still evolves weakly with redshift, but the escaping emission is considerably more offset compared to systemic velocity at the highest redshifts, indicative of decreasing ionized fractions and sizes of the bubbles that must surround these LAEs.
Making use of several photometric and spectroscopic indicators for our LAEs, we then predict escape fractions of hydrogen ionizing Lyman continuum photons. We find that with the exception of one LAE in our sample, the LyC escape fraction is always lower than the escape fraction, with no significant correlations between the two. We also do not find any significant correlation between LyC escape fraction and , which can likely be explained by the reduced Balmer line emission in the presence of significant ionizing photon escape or by time delays between the production and escape of ionizing photons.
By combining the production and escape of LyC photons (i.e., × ), we find that the quantity that is actually responsible for delivering ionizing photons from the galaxies to the IGM remains relatively consistent across UV magnitudes and EW(), but increases gradually with redshift. Using these dependencies and assumptions about the emitter fraction at any given redshift, more realistic models of reionization can be constructed.
Deeper and wider spectroscopic surveys in the future will help expand the samples of known LAEs in the reionization era. The availability of other spectroscopic indicators tracing the nature of stellar populations, ISM ionization and chemical conditions would be key to assess the role of emitting galaxies in driving cosmic reionization, helping build more realistic models charting the reionization history of the Universe.
AS thanks Harley Katz, Richard Ellis, Jorryt Matthee and Anne Verhamme for insightful discussions. AS, AJB, GCJ, AJC and JC acknowledge funding from the "FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 789056). JW, RM, MC, TJL, LS & JS acknowledge support by the Science and Technology Facilities Council (STFC), ERC Advanced Grant 695671 "QUENCH". JW also acknowledges support from the Fondation MERAC. SA acknowledges support from the research project PID2021-127718NB-I00 of the Spanish Ministry of Science and Innovation/State Agency of Research (MICIN/AEI). RB acknowledges support from an STFC Ernest Rutherford Fellowship (ST/T003596/1). KB acknowledges support from the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. SC acknowledges support by European Union's HE ERC Starting Grant No. 101040227 - "WINGS". ECL acknowledges support of an STFC Webb Fellowship (ST/W001438/1). DJE is supported as a Simons Investigator and by JWST/NIRCam contract to the University of Arizona, NAS5-02015. BDJ, BER and MR acknowledge support from the NIRCam Science Team contract to the University of Arizona, NAS5-02015. RS acknowledges support from a STFC Ernest Rutherford Fellowship (ST/S004831/1). The work of CCW is supported by NOIR-Lab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #1180 and #1210.
ยง 1D SPECTRA OF LAES REPORTED IN THIS STUDY
|
http://arxiv.org/abs/2306.10085v2
|
20230616121928
|
Current Trends in Digital Twin Development, Maintenance, and Operation: An Interview Study
|
[
"Hossain Muhammad Muctadir",
"David A. Manrique Negrin",
"Raghavendran Gunasekaran",
"Loek Cleophas",
"Mark van den Brand",
"Boudewijn R. Haverkort"
] |
cs.SE
|
[
"cs.SE"
] |
1]Hossain Muhammad [email protected]
1]David A. Manrique [email protected]
2]Raghavendran [email protected]
1, 3]Loek [email protected]
1]Mark van den [email protected]
2]Boudewijn R. [email protected]
[1]Software Eng.ย & Technology clusterDepartment of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
[2]Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, The Netherlands
[3]Department of Information Science, Stellenbosch University, Stellenbosch, South Africa
Current Trends in Digital Twin Development, Maintenance, and Operation: An Interview Study
[
==========================================================================================
Digital twins (DT) are often defined as a pairing of a physical entity and a corresponding virtual entity mimicking certain aspects of the former depending on the use-case. In recent years, this concept has facilitated numerous use-cases ranging from design to validation and predictive maintenance of large and small high-tech systems. Although growing in popularity in both industry and academia, digital twins and the methodologies for developing and maintaining them differ vastly. To better understand these differences and similarities, we performed a semi-structured interview research study with 19 professionals from industry and academia who are closely associated with different lifecycle stages of the corresponding digital twins. In this paper, we present our analysis and findings from this study, which is based on eight research questions (RQ). We present our findings per research question. In general, we identified an overall lack of uniformity in terms of the understanding of digital twins and used tools, techniques, and methodologies for their development and maintenance. Furthermore, considering that digital twins are software intensive systems, we recognize a significant growth potential for adopting more software engineering practices, processes, and expertise in various stages of a digital twin's lifecycle.
ยง INTRODUCTION
Digital Twins (DTs) have captured the interest of industry and academia in recent years, because of their promise to better understand, monitor and improve
systems.
The concept of DTs often encompasses the notion of a real-world entity and a digital counterpart that mimics certain aspects of the former. We believe that both industry and academia are playing a crucial role in further developing this concept and DT's design, development, operation, and maintenance. Although growing in popularity, the concepts and practices around DTs are significantly different between instances. According to Zhang et al.ย <cit.> there is no general consensus on the nature of the real-world entity, the required fidelity of the digital counterpart, or the terms used to refer to these entities.
In this exploratory research, we aimed to better understand the concepts around DTs, their differences and similarities. To achieve this, we interviewed 19 individuals from industry and academia who are involved in DT development, maintenance, and usage. During these interviews, we primarily focused on the development, evolution, and maintenance of the digital counterpart and associated data within a DT system. To guide this research we initially identified research questions RQ1–7 listed below.
We defined the eighth research question (RQ8) during the analysis phase when we identified interesting insights on the future vision of DTs.
RQ1: How are digital twins defined in practice?
RQ2: How does reuse of existing (software) artifacts influence the lifecycle of DTs?
RQ3: How is consistency maintained among the cross-domain models that are developed independently?
RQ4: What technologies and methodologies are used to integrate models in a DT?
RQ5: What practices are used to design and develop the orchestration and data exchange between models in DTs?
RQ6: What techniques and tools are used to validate a DT and its overall dynamic behavior?
RQ7: What properties need to be validated in a DT
for its consistent dynamic behavior?
RQ8: How will DTs and their development evolve in the future?
These RQs aim to investigate the practices and understandings of DTs from a software perspective, thus exploring DT-specific software challenges discussed in <cit.>. The motivation for this study is to explore how current interviewees have been using DTs, what tools, frameworks and methodologies they apply to develop, maintain and operate DTs in different domains, as well as their understanding of DTs. Specifically, we are interested in understanding the practices in three main areas known to be challenges in digital twin development and maintenance:
* Consistency of models used in DTs created across engineering domains;
* Orchestration of such cross-domain models in DTs;
* Validation of DT dynamic behavior.
To present our work, we structure this paper in different sections. Sectionย <ref> describes the background of this research, Sectionย <ref> the research method we followed. In Sectionย <ref>, we present a short summary of the domains covered by the interviewees and corresponding DT's applications. We present our findings related to the eight RQs in Sectionย <ref>, based on the analysis of the information we gathered during the interviews. Sectionย <ref> explains the threats to validity of our research and how we attempted to minimize them. Finally, Sectionย <ref> discusses our observations and concludes the paper with recommendations for future research directions.
ยง BACKGROUND
The concept of Digital Twins (DTs) was introduced by Grievesย <cit.> in 2003;
Grieves modelled DTs with three dimensions i.e., the physical entity, virtual model and connection, which facilitates the physicalโvirtual interaction. Since then researchers and practitioners have used DT as an umbrella term to refer to something from a simple simulation to a complex virtual entity closely mimicking a real-world counterpart. For example, Bielefeldt et al.ย <cit.> focus on ultra-realistic multi-physical computational models in their definition of DT, whereas El Saddikย <cit.> defines a DT as a digital replica of a physical entity whether living or non-living.
Tao et al.ย <cit.> extended the original DT model by Grieves and proposed a five dimensional (5D) model depicted in Figureย <ref>, i.e.,ย M_DT = (PE, VE, Ss, DD, CN) where PE refers to the physical (actual)
real-world entity with various functional subsystems, VE is the corresponding high-fidelity digital model that reproduces certain abilities and properties of the PE, Ss represents the services for PE and VE, DD encapsulates the domain knowledgeโdata from both PE & VE and their fusion, and finally, CN is the connection among parts of the DT.
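To make this five-dimensional decomposition concrete, the following minimal Python sketch (our own illustration, not an artifact from any interviewed organization or a standard API) represents a DT instance as a record of the five dimensions.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# Illustrative sketch of the 5D decomposition M_DT = (PE, VE, Ss, DD, CN).
# The attribute names and types are our own simplification.
@dataclass
class DigitalTwin5D:
    physical_entity: str                      # PE: identifier of the real-world entity
    virtual_entities: List[Any]               # VE: models (geometry, physics, behavior, ...)
    services: Dict[str, Callable]             # Ss: e.g. monitoring, prediction, control
    data: Dict[str, Any] = field(default_factory=dict)      # DD: sensor, design, historical data
    connections: List[tuple] = field(default_factory=list)  # CN: links among PE, VE, Ss, DD

# Example: a DT of a conveyor belt with one simulation model and one monitoring service
dt = DigitalTwin5D(
    physical_entity="conveyor-belt-42",
    virtual_entities=["kinematic_model"],
    services={"monitor": lambda data: max(data.get("temperature", []), default=None)},
)
dt.data["temperature"] = [45.2, 46.1]   # DD updated from PE sensors
print(dt.services["monitor"](dt.data))  # 46.1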
In order to avoid any ambiguity, in the following sections we use the terms actual entity () and virtual entity (VE) to respectively refer to real-world entity, which can be an existing or foreseen engineered or naturally occurring physical system or process, and the corresponding digital counterpart.
Since the inception of DTs, research has focused on understanding the concept, the development of DT's applications, or exploration of different implementation technologies used. An example is the work of Liu et al.ย <cit.> that analyses the concept of a DT, the technologies used, and DTs' main industrial applications. That research is based on systematic literature review (SLR) analysing over 200 publications.
Tao et al.ย <cit.> analyses the state-of-the-art in development and applications of DTs, aiming to outline key technologies enabling DT development, classify current and future applications, and lay out possible gaps and challenges.
The
research used an SLR, analyzing 50 papers and eight patents.
Sharma et al.ย <cit.>, based on an SLR of over 80 papers, analysed
and proposed solutions for the research and implementation gaps of
DT technology, such as IoT (internet of things), machine learning and data. This research concluded that regulation and security mechanisms are essential for the proper implementation of DTs due to its cross-domain nature. They also concluded that there are multiple technical and domain specific challenges that require more research to be resolved.
Gรผrdรผr et al.ย <cit.> explores how DTs can help the infrastructure industry.
The research methodology used semi-structured expert interviews with non-technical executives from industry in the UK. This approach allowed the researchers to collect their opinions, related to non-technical challenges, on the value of DTs.
Dalibor et al.ย <cit.> conducted an SLR on 356 papers. This paper analyses DTs with a bottom-up approach, exploring different implementations to investigate expected DT properties and how DTs are deployed, operated and evaluated. In addition, the authors developed a DT feature model.
They explored different implementation techniques, tooling and development processes.
The majority of the research shown above is based on SLRs focused on DT practices and understanding from a high-level systems perspective. The empirical research by Gürdür et al. also approached DTs from a business and high-level systems perspective.
ยง RESEARCH METHODOLOGY
Considering the exploratory nature of our research questions we opted for semi-structured interviewsย <cit.>. This provided sufficient flexibility for the participants to express themselves while allowing us to collect data on our topics of interest. In this section, we introduce and explain our research methodology. We followed Strandberg's interview lifecycleย <cit.>, with our steps depicted in Figureย <ref>. Next, we expand on each step
and explain the related activities.
ยง.ยง Planning
This phase entailed the regulatory activities that our universities required for collecting data from human participants, and preparing a questionnaire consistent with the RQs, serving as guideline during the interviews.
ยง.ยง.ยง Ethical review and research data management
Ethical review is a process followed by our universities that enables researchers to perform research activities in accordance with accepted ethical standards and existing regulations. This process ensured that measures and infrastructure were in place for maintaining data security and confidentiality as we collected personally identifiable information for (prospective) interviewees and recorded the interviews which potentially contained sensitive information.
ยง.ยง.ยง Questionnaire Design
We prepared a questionnaire to act as a guideline to keep the discussion in our semi-structured interviews focused.
It was developed in line with the Interview Protocol Refinement (IPR) frameworkย <cit.>, comprising four phases:
* Ensure interview questions are aligned with RQs:
We took an iterative approach. For the first iteration, we listed our initial RQs and from these derived the initial set of interview questions. We tagged the interview questions with corresponding RQs. This allowed us to identify under-represented research questions and adapt the interview questions accordingly. We repeated this step until the questionnaire stabilized.
* Constructing an inquiry-based conversation:
We categorized the interview questions into (1) background, (2) key, and (3) concluding ones. Based on this categorization and suggestions of Hove and Andaย <cit.>, we sorted them and rephrased some, enabling an inquiry-based
conversation.
* Receiving feedback on the questionnaire:
We performed several review rounds among the authors of this paper and a pilot interview with a researcher working in the model-driven software engineering domain
to check how well participants might understand the questionnaire. Wherever we identified significant difference between interviewee
perception and our intention, we rephrased for better understandability.
* Conduct pilot interview:
We performed mock interviews with colleagues, allowing us to try our questionnaire, receive feedback, and gather experience as interviewers. This helped to further mature the questionnaire.
Tableย <ref> shows the resulting set of interview questions and their associations with the RQs. All the interview questions except for the first two are connected to one or more RQs. These two questions allowed us to start the
conversation, get introduced with the interviewee, and contributed to a conversation-like interview. Furthermore, with the final
question we asked the interviewees' opinion on Tao et al.'s 5D DT modelย <cit.> (see Sectionย <ref>). While asking this question, we showed an image of the 5D model and briefly explained it. To avoid influencing the interviewees during the rest of the interview, we intentionally asked this question at the end.
ยง.ยง Finding interviewees
We aimed to interview practitioners and researchers who are actively involved in the development, maintenance and use of DTs. We started with our own network and created an initial set of potential interviewees that matched our search criteria. Additionally, we verified the DT related involvements of these individuals based on their Google Scholar or LinkedIn profile. Furthermore, as we conducted the interviews, we requested participants to propose potential interviewees from their network, which added two individuals to our list. In the end, we invited 25 persons. We received 22 responses, 20 of them positive; in the end, we conducted 19 interviews since one respondent stopped responding.
We used emails
to the potential interviewees to introduce ourselves and explained the purpose of our research. With the emails, we also included a consent form where we explained details of our research
about data processing and our measures for ensuring data anonymization and privacy, which allowed us to create a level of trust with the interviewees.
While we did not share our questionnaire with the interviewees to avoid prepared and potentially biased answers, we provided them with a description of our topics of interest to allow them
to prepare if they wanted to.
ยง.ยง Performing interviews
As we conducted semi-structured interviews, we did not follow any concrete structure and the questionnaire in Tableย <ref> acted as a guideline. However, we prepared standard texts that we read to the interviewee at the beginning and end of the interview. These introduced the interviewers, checked whether the interviewee had any questions or confusion regarding the consent they provided earlier, and at the end, thanked them for their participation and requested their feedback on the interview.
We recorded video and audio of all 19 interviews. These interviews took place between September 2021 and February 2022.
ยง.ยง Transcription
The interviews altogether accounted for just over 26 hours of recorded video with audio. To generate word-for-word transcripts of these recordings, we used automated transcript generation followed by manual verification and revision. As most interviews were conducted and recorded using Microsoft Teams, we could use the generated transcript of the corresponding recording. We manually verified and revised each transcript twice to ensure correctness.
ยง.ยง Data analysis
We used the transcripts for further analysis
based on a thematic analysis methodology for qualitative analysisย <cit.>. We utilized LaMaย <cit.>, a web-based tool for collaborative labeling and thematic analysis, to collaborate on this analysis.
To restrict access and ensure data privacy, we deployed this platform locally.
ยง.ยง.ยง Generating and anonymizing artifacts
The aim of this step was to generate a set of artifacts from the transcripts of the interviews. We define an artifact as an independent piece of text that focuses on a specific subject and contains sufficient context information for understanding that subject. To generate them we manually went through each transcript, focusing on text spoken by the interviewee, and separated text fragments whenever we identified different subjects being discussed. At this stage we only tried to identify changes in subject, not subjects themselves. It was interesting to see that the change of subject occurred not only when new questions were asked but also while discussing a single question. As we generated these text fragments we kept sufficient context information for them to be understandable. When this was not the case, we added a few keywords as context, marking such an addition with square brackets, e.g., to indicate that the word "they" (at that point) refers to a "[digital entity and its 3D visualization]". We also generated artifacts by splitting one artifact into two or more during the labeling step, which is explained in the next section. Typically, we split artifacts if we found more than one key message in them. At the end, we had 748 text artifacts of various sizes. Furthermore, we anonymized the transcripts during this manual artifact generation: all personally identifiable information was replaced by unique identifiers that we stored separately for traceability purposes. The anonymization was essential for performing unbiased analysis in later phases of our research.
ยง.ยง.ยง Labeling of artifact and topic generation
After generating the artifacts
for all transcripts, we labeled them using LaMaย <cit.>. We define a label as a short text that sufficiently captures the core message of an artifact. To reduce bias in labeling, each artifact was labeled by two labelers. We resolved conflicting labels by agreeing on one label through discussion.
During the labeling process, the labeler could use an existing label or create a new one. In LaMa, these labels were accompanied by a description explaining how and when a label should be used, which was crucial for reuse of existing labels. Moreover, while labeling we encountered artifacts that lacked sufficient information or context. We labeled these artifacts with two predefined labels: No value or Not understandable. An example is an artifact discussing an interviewee's research policy in relation to their clients. This artifact was classified as No value, since it does not contain any information related to DT development.
Once we had over 70% of the artifacts labeled, we started with topic creation in parallel. In this case, we define a topic as a clustering of labels that can collectively provide a complete message on a specific subject. We chose an iterative approach while creating the topics. Based on our initial overview of the existing labels, we created our first set of topics. The topic Different understandings of DT is one of the first topics we created. We created this topic because we noticed more than 50 labels related to DT definition.
These topics and the list of labels were revisited at regular intervals, resulting in one or more of (1) creation of new topics, (2) redefining a topic at a higher level of abstraction, (3) breaking up a topic into multiple topics, and (4) moving labels from one topic to another. We repeated this until reaching convergence.
ยง.ยง.ยง Relating topics to RQs and perform analysis
In this last step, we focused on answering the research questions. To do that we created a matrix with the final set of topics and our RQs that enabled us to identify the correlations between the two. To answer the research questions, we consulted this matrix to identify related topics. Subsequently, we carefully went through the topics of interest and corresponding labels and artifacts to answer the research questions.
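As an illustration of this last step, the following minimal sketch (our own, not part of the LaMa tool; all labels, topics and mappings are hypothetical) groups labeled artifacts into topics and derives a topic-versus-RQ view that can be consulted when answering each research question.

from collections import defaultdict

# Hypothetical labeled artifacts: (artifact_id, label); labels were assigned manually.
labeled_artifacts = [
    ("a1", "DT is a virtual replica"), ("a2", "DT needs a purpose"),
    ("a3", "reuse of CAD models"), ("a4", "legal issues block reuse"),
]
# Hypothetical clustering of labels into topics (done iteratively by the researchers).
label_to_topic = {
    "DT is a virtual replica": "Different understandings of DT",
    "DT needs a purpose": "Different understandings of DT",
    "reuse of CAD models": "Artifact reuse",
    "legal issues block reuse": "Artifact reuse",
}
# Hypothetical mapping of topics to research questions (the topic-RQ matrix).
topic_to_rqs = {
    "Different understandings of DT": ["RQ1"],
    "Artifact reuse": ["RQ2"],
}

# Build topic -> artifacts, then list which artifacts inform which RQs.
topic_artifacts = defaultdict(list)
for artifact_id, label in labeled_artifacts:
    topic_artifacts[label_to_topic[label]].append(artifact_id)

for topic, rqs in topic_to_rqs.items():
    print(topic, "->", rqs, "artifacts:", topic_artifacts[topic])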
ยง DEMOGRAPHICS OF INTERVIEW PARTICIPANTS
As explained in Sectionย <ref>, our aim was to interview individuals who are actively involved in the development, maintenance, and use of DTs. Out of the 19 interviewees, ten were primarily from industry and nine
from academia. In this section, we provide an overview of their domains and DTs' applications.
We classified the professional domains of the interviewees into six categories. Seven interviewees claimed that their work involves two different domains and one mentioned being involved in three domains. Tableย <ref> shows the distribution of domains among the 19 interviewees.
As visible here, manufacturing and chemical process industry
and high-tech products are the two most dominant categories.
As for the DTs' applications, all the interviewees mentioned that they have multiple applications for theirs. We therefore analysed the correlation between the domain and the DT's application.
Figureย <ref> shows this correlation. For the analysis, we classified the applications in eight categories as follows in alphabetical order:
* Analysis or improvement of the operation of a product or process. Examples provided by interviewees are bug detection, optimization, system behavior analysis and simulation for decision making or process configuration.
* Control of a process or a product. Such applications aim to take corrective actions from a monitored state towards a desired one.
* Demonstration of alternative solutions or configurations for a physical system. This tends to use visual tools, such as 3D modelling tools.
* Design and development of a product (hardware or software) or process. Examples presented by interviewees are
prototyping, use of simulation for design or design improvements.
* Monitoring a property within a process or a product, aiming to compare the current state against a planned one.
* Predictive maintenance concerns the supervision and prediction of an equipment of product condition. The aim of such a DT application is to determine the state of health of the supervised entity and anticipate its maintenance.
* Testing products or processes; examples shared by interviewees are virtual commissioning, verification and validation or experimentation.
* Training operators or users of a specific machine or product.
Figureย <ref> shows the correlation between the domains of the interviewees and the applications of their corresponding DTs. The total value, shown in black, represents the sum of all the applications in a specific domain. The # application types, shown in blue, represents the number of application categories of a specific domain. The DTs in the high-tech products and manufacturing and chemical processing domains have the highest DT application diversity, each with seven application types. In these two domains, the most frequent DT applications are design and development, and testing. Furthermore, with the exception of the healthcare domain, all the other domains use DT for design and development. The other two most popular DT applications are testingโused by four domainsโand analysisโused by three domains.
ยง RESULTS AND FINDINGS
This section presents the answers to our eight research questions. These answers are based on analyzing the data collected during the interviews.
Some of the discussions were not strictly related to the defined RQs but still yielded interesting insights. We present such additional findings in Sectionย <ref>.
ยง.ยง Definitions and understanding of DT (RQ1)
As indicated in Sectionย <ref>, DT is used as an umbrella term and across different domains can have many different definitions, interpretations, and understanding. With RQ1 we aimed to understand these, and the similarities and differences between them, based on the data collected from the interviews.
A variety of definitions of DTs were discussed by the interviewees and there was no uniformity in these definitions.
Some interviewees defined DTs with certain boundaries at the start of the interview, yet over the course of the interview mentioned additional aspects of DTs that extend their initial description.
In the following text, we present views on DTs observed from the interview data.
ยง.ยง.ยง Virtual representation of an entity
We intended to find how many interviewees agreed to the fundamental understanding of a DT being
a virtual representation of some entity.
All the interviewees mentioned that a DT is
a virtual representation of some entity, by using different terms such as "digital counterpart", "digital copy", "virtual replica", "virtual prototype" or "model". Five interviewees used terms such as "accurate", "precise" and "high fidelity" to describe that a DT should be a high-fidelity representation of an entity. According to ten interviewees
the level of fidelity is determined by the DT's purpose and application.
When further talking about DTs, interviewees discussed the type of entities the DT could virtually represent. These could be (1) a real world object with physical dimensions;
(2) a real world process or organization or even a concept without physical dimensions, such as a human resource process, a logistics process in manufacturing,
fuzzy concepts
and others.
One interviewee mentioned that in EU standardization, there is an umbrella term called "entity of interest" (EOI) to describe entities for which virtual representations are created, encompassing both types of entities mentioned above. EOI as a term matches what we call (Actual Entity) in this paper.
Eight interviewees implicitly or explicitly discussed a DT being a virtual representation of not necessarily a physical object, but of both physical and non-physical ones; indirectly referring to
a virtual representation of an .
Four interviewees did not explicitly mention whether DTs should be a virtual representation of an , but discussed their DTs being a virtual representation of a real world object with physical dimensions.
Ten interviewees also mentioned that DTs need not necessarily be a virtual representation of an existing , but could also be of an at the design stage. The current confusion in the description of DTs on whether the is part of the DT itself or not came up during the interviews.
Four interviewees expressed that it is not, since the word "digital" refers only to virtual objects and not physical objects. From the above, it can be understood that there is some level of alignment in the understanding of DT as a virtual representation of some entity which could be physical or non-physical, and which may or may not already exist.
ยง.ยง.ยง Components of a DT
Through the interviews, we wanted to understand what components interviewees considered part of DTs and two questions were aimed towards this.
We observed some interestingly varied responses, such as one academic interviewee describing the components of a discrete event simulation (DES) library as components of their DT.
Another academic interviewee discussed the different types of data specific to their DT's application as components of their DT. One industrial interviewee mentioned Computer Aided Design (CAD) models and a software component used for creating model elements as the main components of a DT. However, during the course of the interview they did implicitly communicate other components or aspects of a DT. The different components of a DT discussed during the interviews are listed below. All these components of DTs as described by interviewees have been represented in Figureย <ref>, where the numbers represent the number of interviewees from academia and industry respectively, who discussed those specific components of DTs.
* Models: Interviewees
mentioned different types of models, e.g.,ย those representing geometry, physics, behavior and interactions; design and simulation models; descriptive and predictive models; 3D models; mathematical models; mechanical models; building information models (BIMs); CAD models; and others. All interviewees agreed explicitly or implicitly
that models are an important component of DTs.
* Data: Interviewees
also discussed different types of data such as measured data from sensors; data from system, design data; historical data; reused data from the relevant product line which is in operation;
data acquired during DT operation; data from people who are part of the process; data from subject matter experts; data acquired during the entire lifecycle of a DT; and others. It can be concluded
that all interviewees agreed explicitly or implicitly
that data is an important component of DTs.
* Purpose: Thirteen interviewees expressed that a DT should have a purpose and some further mentioned that this purpose is the driving factor for DTs to be developed. On the contrary, one industrial interviewee explicitly mentioned that DTs should not have a purpose. This interviewee further discussed that DTs should not be developed with a purpose and they have a purposeless existence, which is in stark contrast to what the thirteen interviewees mentioned above. However, this interviewee clarified that once the DT is developed, it can be used for whichever purpose is needed.
The purpose mentioned by the different interviewees can be correlated to the "services" component in the 5D model of DTs proposed by Tao et al. <cit.> (discussed in Section <ref>).
* : Already discussed in Sectionย <ref>.
* Communication between and its virtual counterpart:
As part of the interview, we intended to understand the level of communication between the and its virtual counterpart.
All but one interviewee discussed the synchronization from the former to the latterโeither automated or manually.
Such synchronization implicitly conveys a unidirectional communication
from to its corresponding virtual counterpart.
Eleven interviewees discussed that a DT should have bi-directional communication between the two entities.
In addition, two interviewees also expressed that the virtual replica should not always be connected to its but only when this is needed. Five interviewees shared that the synchronization between and its virtual replica should be in real time. However, it was not explicitly discussed further by any of the interviewees what was meant by `real time' which could possibly have different interpretations in different domains.
We identified seven interviewees who mentioned that the frequency of synchronization depends on the purpose or application. One interviewee explicitly mentioned that the connections between the and its virtual counterpart cannot be considered to be part of the DT since this interviewee's perspective was that it is only an infrastructure needed to create and to operate DTs and thus, it cannot be considered to be a part of DTs.
* Other components of a DT: Apart from the components described above, we found other interesting components mentioned by interviewees which could be a part of the DT, which are discussed below.
An academic interviewee emphasized algorithms, including AI (Artificial Intelligence) based ones, which could be used for control, decision making, monitoring, or other purposes. Another academic interviewee claimed that knowledge graphs which capture and describe knowledge about DT components should also be a component of a DT. Two interviewees explicitly mentioned that humans play an important role in a DT and thus, they should also be a component in a DT. One industrial interviewee considers that the main component
in a DT at the lowest abstraction level
is time. He further elucidated that time is an important component
which can be sped up or slowed down, move backward or forward and also, diverge from a specific point in time.
ยง.ยง.ยง Relation to the 5D DT model
As explained in Sectionย <ref>, we collected and analyzed the opinions of our interviewees on the 5D DT model by Tao et al.ย <cit.> (explained in Sectionย <ref>). In this section, we present our observations from this analysis.
We were able to map some of the DT components mentioned by the interviewees, as discussed in Sectionย <ref>, to the 5D model.
This mapping is shown in Figureย <ref>, where the number of interviewees from industry and academia agreeing to the components is shown, respectively.
As shown in the figure, we identified a considerable number
of mappings between the components mentioned by the interviewees and the physical entity, virtual entity, and data components of the 5D model. As discussed earlier, the purpose of a DT mentioned by different interviewees can be correlated to the services component in this model.
As depicted in Table <ref>, 11 interviewees mentioned that they could relate to the 5D DT model to a certain extent and agreed with it, albeit with some changes. Four interviewees from industry explicitly disagreed with this model, whereas another four neither explicitly agreed nor disagreed and instead discussed their perspectives on it. It is also important to mention here that we were not able to obtain the opinion of one interviewee due to time constraints.
Three interviewees suggested that some connections in the 5D model are not needed and might be removed, such as the connection between PE and other components. They further elucidated that the connection from PE might not be needed in some cases, such as when the PE may not exist or when the PE may not be capable of communicating.
Two of those three further expressed that some of these connections need not be bi-directional, but can be uni-directionalโsuch as the connection between data and PE, data and VE, and others. Some
interviewees also mentioned that the nature of these connections was not clear enough. For example, one academic interviewee explicitly mentioned that this model should also clearly specify what flows in each direction from one component to another. According to two interviewees, humans play an important role in DTs, and they suggested including them as another dimension. Two interviewees specifically mentioned the services part of the model to be highly important, and that a service could be interconnected with services from other DTs, thus enabling service-level interaction.
We identified two interviewees who mentioned that they see data and VE combined as a single component. One of these interviewees clarified that they consider VE to represent data. One interviewee from industry emphasized using percentages in this model to represent the relative importance of each of the components. He further elucidated that the importance of each component could vary based on the DT application or based on how the DT is developed: in certain DT applications he had worked on, in his opinion the data was more important than models and this should be represented in the 5D model of that DT. We identified other interviewees who mentioned that data is an important part of the DT, while a few others mentioned that services are the most important part of a DT.
One industrial interviewee specifically indicated
that he does not see a single VE for a DT, but several VEs which could possibly be inter-connected or separated. Overall, it can be understood that while eleven interviewees broadly agree with the 5D model, they suggested several changes to it and thus do not agree with it completely. While some of these changes apply to a generic DT, some are also specific to the DT that the interviewee worked on.
ยง.ยง.ยง Discussion
Based on our findings presented in this section, there is no common understanding of DTs across the nineteen interviewees.
It is relevant to mention that this lack of common understanding has been discussed as one of the non-technical challenges in DT development by Van den Brand et al.ย <cit.> and it is highly important to overcome this.
Moreover, we aimed to understand the alignment of interviewees on the different parts that make up a DT, especially the highly relevant ones such as the relation and communication between and virtual counterpart.
It can also be observed from our findings that interviewees did not agree on all the components of a DT.
Our inference from the interview data is that there is no uniformity in the definition of DTs nor in the understanding of the components that make up a DT. Despite this disparity, some agreement existed on certain components of a DT, specifically, models, data, and the synchronization between the and its virtual counterpart.
ยง.ยง Influence of reuse on DT lifecycle (RQ2)
As indicated by Walravens et al.ย <cit.>, developing DTs is a cross-domain and resource intensive task.
We believe reuse of existing artifacts can significantly reduce the development and maintenance costs of DTs. Answering RQ2 validated this belief and improved understanding of artifact reuse in the context of DTs.
We identified fifteen out of 19 interviewees acknowledging some form of artifact reuse while working with DTs. Industrial participants mentioned reuse more frequently.
Based on the interviews, we identified two kinds of artifact reuse: data reuse and software component reuse. We further identified knowledge reuse, which we believe is worth discussing despite not being about artifacts. In this section, we discuss the aforementioned reuse scenarios, which we summarize in Tableย <ref>, and challenges we identified during our analysis.
ยง.ยง.ยง Software artifact reuse
We use this term to collectively identify the reuse of static and dynamic models and software components developed independently or extracted from one or more separate software intensive systems. Tableย <ref> shows the types of reused software artifacts we identified through our analysis.
We identified the following factors that encouraged or motivated software artifact reuse:
* Reduced resources for development: Four interviewees mentioned that reusing software artifacts leads to shorter delivery times and reduced development effort. According to them, especially the ones from industry, reuse is essential as it greatly affects time to market. We additionally identified two cases where DTs were developed based on one or more existing DTs, allowing the developers to leverage existing artifacts and reduce development effort significantly. These identified benefits confirm our assumptions on the benefits of software artifact reuse, which were based on prior publications reporting similar benefits for more traditional software systems <cit.>.
* Ease of use: According to six interviewees, ease of use and built-in support for integration provided by corresponding tooling encouraged them to reuse software artifacts. Based on our analysis, we divided these integrated tooling environments into two categories: commercial and in-house tools. While the commercially available tools are used both in industry and academia, the tools built in-house are exclusively used by the corresponding companies. Four interviewees mentioned Unityย <cit.>, a well-known game engine, as a tool they use for developing geometry and physics models. Prespective[<https://prespective-software.com/>], a Unity-based 3D design and simulation platform, was mentioned by two industrial interviewees. Furthermore, our analysis suggests that in-house tools are often developed based on requirements defined by the organizations themselves and therefore are only suitable for their specific needs.
* Transfer of knowledge captured within models: Two interviewees expressed their concern about personnel changes and loss of gained domain knowledge as a result.
In both cases, models or software artifacts are being used as a way to encapsulate, preserve, and transfer such knowledge. This suggests
that such use of software artifacts allows distributing this knowledge within the organization and increases the possibility of reuse.
ยง.ยง.ยง Data reuse
Our analysis suggests that data reuse is an integral part of DT development. Seven interviewees explicitly mentioned it,
three did so implicitly. Here we focus primarily on the explicit mentions. Unlike software artifact reuse, we found the motivation for data reuse to be mostly similar, related to the understanding of a natural or engineered system based on the data gathered from it, and modelling certain aspects of the system in one or more DTs. These DTs are later used for monitoring or enhancing the system. Although the nature, DT's applications, and data sources for such reuse vary significantly, we identified three major types of data reuse:
* Data related to :
According to our analysis, this form of reuse is often related to historical data collected from the corresponding . Two interviewees mentioned collecting historical data from across the product lifecycle including design, manufacturing and operation. This data is often used to create a more accurate virtual representation of the physical system. Depending on the followed DT concept, this representation can be purely data-based or a combination of software and data where the later is used to enhance the former. We analysed various DT concepts discussed during the interviews in Sectionย <ref>.
* Data from similar systems:
Four interviewees mentioned developing DTs reusing data gathered from similar systems, namely related product lines and similar digital shadows.
Three interviewees mentioned reusing data from a product line that is closely related to the DT under development. Our study suggests that this is particularly useful when the DT is being developed alongside a physical system that is not mature enough. One interviewee mentioned that they construct a digital shadow first to learn configuration parameters that are later used during DT development.
* Ontology: We found that both in industry and academia, the use of ontologies facilitates data reuse, especially when the data is produced and consumed in different contexts. In such cases, ontologies are used to represent knowledge and describe various data properties; a minimal illustration of this idea is sketched after this list. Three interviewees claimed that existing ontologies play an important role in their development of DTs. These interviewees are from the high-tech products and building & construction domains, which are often known to be highly multidisciplinary.
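To make this concrete, the following minimal sketch (our own illustration rather than an interviewee's setup; it assumes the Python rdflib package and an invented ex: namespace) shows how an ontology-style description attaches explicit semantics, such as the measured quantity, its unit, and its provenance, to reused data so that it remains interpretable in a different context.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/dt#")  # hypothetical namespace for illustration

    g = Graph()
    g.bind("ex", EX)

    # Describe a reused measurement with explicit semantics:
    # what was measured, in which unit, and where it came from.
    m = EX["measurement42"]
    g.add((m, RDF.type, EX.TemperatureMeasurement))
    g.add((m, EX.hasValue, Literal(73.2, datatype=XSD.double)))
    g.add((m, EX.hasUnit, Literal("degreeCelsius")))
    g.add((m, EX.collectedFrom, EX.pumpStation3))

    # A consumer in another context can query the meaning of the data instead of guessing it.
    for subject, _, unit in g.triples((None, EX.hasUnit, None)):
        print(f"{subject} is expressed in {unit}")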
§.§.§ Reuse of knowledge
We define knowledge as skills and experiences interviewees gather through education, training, and professional activities.
It can be argued whether knowledge forms an artifact, since it is intangible and hard to measure. However, we identified key topics with the potential to significantly influence DT lifecycles. One interviewee estimated that around 80% of their work involves reuse of experiences and knowledge. This estimation provides a good notion of the influence of knowledge reuse. Another interviewee emphasized preserving knowledge, especially considering that people might leave an organization.
§.§.§ Challenges in reusing artifacts
Although about 80%
of the interviewees mentioned practicing some form of artifact reuse and acknowledged positive effects of it, we often identified cases where reuse was restricted to various degrees; we list the most frequent restrictions identified by our analysis.
* Legal issues:
We found that legal measures or clauses often restrict or prevent artifacts from being reused, especially when industrial stakeholders are involved. These measures include non-disclosure agreements (NDAs), intellectual property (IP) rights, and privacy concerns. Liability concerns can also have restrictive consequences, especially when multiple organizations are involved. With four interviewees explicitly mentioning it, we find this to be the most frequent
challenge affecting both data and software reuse.
* Lack of explanation: Our study suggests that the lack of appropriate description or documentation can severely reduce the possibility of artifact reuse. We found that data reuse is more affected by this issue.
Three interviewees mentioned that various data is often collected to be used for specific purposes. Due to missing metadata, lack of explanation, and knowledge lost over time, such data becomes meaningless, rendering reuse practically impossible. We identified that poorly documented or undocumented software components suffer from similar consequences.
* Incompatibility and integration issues: This affects both software and data reuse. For the latter, this is often related to data formats being incompatible with available tooling. We also identified cases where precision and fidelity of available data restrict reuse.
Software reuse also suffers from incompatibility issues that restrict the integration of existing components into newer systems. Lack of configurability, lack of interoperability between legacy and newer systems, and interface inconsistency are some of the factors that contribute to this issue. Our study also suggests that lack of proper documentation of software components can lead to perceived lack of interoperability.
* Lack of software engineering skills: We believe that domain expertise plays a crucial role in the development of DTs, which has been indicated in several publicationsย <cit.>.
As a result, domain experts are often closely involved in the development activities. However, according to one interviewee, developing reusable software components requires advanced software engineering skills that are not common among the involved domain experts. As a result, the developed artifacts may lack reusability.
* Lack of methodology or tool: This issue was identified by four interviewees as the reason behind limited reusability of existing artifacts. One of them claimed that while developing software artifacts, future reusability is often disregarded due to the lack of an appropriate development methodology within the
organization.
Consequently, identifying reusable components and determining the degree of reusability of the identified ones becomes very difficult. We also found that most industrial DTs are developed using tools that are highly specialized and often built in-house. As a result, artifacts built using these tools are not reusable in a different context. Furthermore, one interviewee from industry mentioned that organizations often promote the uniqueness of their products, reducing the organizational mindset for reuse of existing artifacts.
* Additional effort: Our analysis suggests that reusing artifacts often requires non-trivial effort of its own. Four interviewees mentioned that it required additional development and validation effort. The necessity to adapt existing software components for a new purpose is a major reason for this, as are previously undetected defects. One interviewee mentioned that a defect within a software component reused from another vendor caused a simulation to break under certain conditions, and it took them significant effort to identify and fix the defect.
* Reuse not possible:
Although generally beneficial, we did identify situations where reuse is impossible or might even have adverse effects. One interviewee shared that not reusing allowed them to be more independent, flexible, and avoid vendor lock-in. Another interviewee emphasized that an NDA signed for a project prevented them from reusing artifacts produced in that project in another one. Furthermore, we identified the following dominant factors preventing artifact reuse: lack of appropriate tooling; artifacts developed without considering reuse; unexpected unavailability of artifacts; and exceptional system requirements. Although some of these factors are similar to the points discussed earlier in this section as reasons for restricted reusability, we also identified cases where they prevented reuse altogether.
§.§.§ Discussion
Based on the findings presented
in this section, we observe
that reusing existing artifacts can positively influence the development of DTs. However, we noticed that this is not practiced widely
in the organizations that develop and use DTs. We already discussed several reasons and challenges related to this. Although resolving some of these might not be trivial (e.g. legal issues), we believe it is possible to address issues related to inadequate tooling, lack of reuse attitude, and insufficient software engineering skills with moderate organizational effort. Furthermore, during the interviews we noticed that organizations are recognizing the potential of reuse in the context of DTs and gradually moving towards developing, maintaining, and using reusable software components. We noticed a trend of developing modular or configurable DTs, often
using component-based integrated development environments to build DTs by combining reusable components.
Reuse of existing data and software artifacts has the potential to significantly optimize the development lifecycle of a DT. However, except for some limited cases, it is not practiced widely due to challenges such as legal restrictions, inadequate tool support, and lack of information and experience.
§.§ Consistent cross-domain models (RQ3)
Inter-domain collaboration is essential for the development and maintenance of DTs. Within a cross-domain environment, we identified the maintenance of consistency among cross-domain software models as a challenge <cit.>. With RQ3 we aimed to investigate this challenge further.
We wanted to understand how consistency is defined in practice, and to identify key causes for inconsistencies as well as the tools and techniques used to manage them. Twelve interviewees mentioned that they have encountered inconsistencies or put measures in place to handle them. In the following sections we present our findings based on the analysis of the data we collected from these interviewees.
§.§.§ Types of inconsistencies
Based on our analysis, we divided the identified inconsistencies into the following five categories.
* Interface inconsistency: Hisarciklilar et al.ย <cit.> defined interface inconsistency as mismatching values, terminologies, or schemes among connected interface elements. We identified this inconsistency mostly in cases where two or more entities, often cross-domain, communicate and do not share any compatible interfaces (i.e., parameters). Our analysis suggests that this is the most frequently encountered inconsistency, with four interviewees explicitly mentioning it.
* Tool and data format inconsistency: We identified six interviewees who recognized such inconsistencies as a challenge. Our analysis shows that the development of DTs is almost always a cross-domain effort. In projects such as these, the stakeholders are from a variety of domains and use domain-specific tooling to develop cross-domain artifacts. From the interviews, we identified situations where these tools are completely or partially incompatible. As a result, artifacts developed in one tool can not be imported or used in another tool, primarily due to incompatible data formats.
* Representation inconsistency: As DTs are often complex cyber-physical systems, various diagrams (e.g., UML <cit.>, SysML <cit.>) are used to conceptually represent parts of or the complete system, usually during the design phase. In our analysis we identified cases where the actual implementation deviates from the design, which we identify as representation inconsistency. One interviewee provided an example of such inconsistency where the Simulink <cit.> implementation was done differently compared to the design done using a UML sequence diagram.
* Configuration inconsistency: From the interviews, we identified two highly dynamic software systems offering many configuration parameters that can be used to greatly change the system behavior.
Configuration inconsistency can occur when dependencies exist between configuration parameters and changes are made to them without considering the dependencies; a minimal dependency check of this kind is sketched after this list. One of the interviewees mentioned that they usually do not change their software, but update configuration parameters to adjust the system, which at times introduced inconsistencies.
* AE-VE inconsistency:
As analysed in Section <ref>, the concept of a DT often encompasses a certain degree of synchronization between the actual entity (AE) and the corresponding VE to facilitate replication of certain behavior or features. We identified two types of inconsistency in this context. The first kind concerns inconsistent VE-AE communication, which is often a special kind of interface inconsistency (discussed earlier). The other inconsistency occurs when the virtual and actual entities exhibit differences in behavior that is expected to be similar. As a result, identical operations performed on both AE and VE can provide different results, rendering the DT ineffective.
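To make the notion of configuration inconsistency more concrete, the following minimal sketch is our own illustration (with invented parameter names): it encodes dependencies between configuration parameters as explicit rules and checks an updated configuration against them before it is applied.

    # Each rule expresses a dependency between configuration parameters
    # that an update must respect; parameter names are purely illustrative.
    RULES = [
        ("sample_rate_hz must be at least twice cutoff_hz",
         lambda c: c["sample_rate_hz"] >= 2 * c["cutoff_hz"]),
        ("buffer_size must hold one second of samples",
         lambda c: c["buffer_size"] >= c["sample_rate_hz"]),
    ]

    def check_consistency(config):
        """Return the dependency rules violated by a configuration."""
        return [name for name, holds in RULES if not holds(config)]

    # A parameter update made without considering the dependencies:
    config = {"sample_rate_hz": 100, "cutoff_hz": 80, "buffer_size": 64}
    violations = check_consistency(config)
    if violations:
        print("Inconsistent configuration:", violations)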
§.§.§ Major reasons for inconsistency
Our analysis of the interviews suggests that inconsistencies can emerge for numerous reasons, which largely depend on the nature of the corresponding DT. Therefore, we believe extracting a complete list of reasons is not possible.
However, during our analysis, we identified the following four most frequent causes of inconsistencies.
* Lack of standardisation:
With five interviewees explicitly mentioning it, we identified this as one of the most frequent reasons for both model and data related inconsistencies. Two of them mentioned that they are not aware of any standardisation within their project, resulting in significant overhead in terms of communication and development efforts. We also identified cases where data was collected, stored, and exchanged using a non-standard format despite the existence of established standards. It was unclear from the interviews why existing standards were not followed.
Furthermore, one of the interviewees mentioned that developing and following a set of standards for exchanging or storing information is extremely challenging due to the sheer number of different domains involved in their DT project.
* Lack of proper collaboration:
DTs, especially the industrial ones, are typically highly multi-domain, multi-team projects. Within such environments, various tools can be used for developing artifacts, often with tool-specific semantics. Our analysis suggests that lack of proper collaboration in such diverse environments can lead to inconsistent artifacts. Two interviewees identified lack of communication among different domains or teams working on dependent components as a key reason for inconsistency.
* Insufficient tooling or methodology: During our analysis, we identified a general lack of inconsistency detection and mitigation methodologies and corresponding tooling. Such methodology or tooling either did not exist or was claimed to be ineffective or inefficient. Although we expected this shortcoming to be more widespread, surprisingly only two interviewees explicitly mentioned facing difficulties related to it.
* Reusing existing artifacts: We discussed artifact reuse, its benefits and challenges in Section <ref>. Additionally, we found that reuse can also lead to inconsistencies. Our analysis suggests that this is often a consequence of lack of appropriate testing before reuse or of the reused artifact lacking sufficient documentation. One interviewee mentioned that they had reused a software component from a previous project without importing all the related tests. As a result, they were not able to detect inconsistencies that led to catastrophic failure of the new project.
§.§.§ Inconsistency mitigation practices
During the interviews, we tried to understand how inconsistencies are being handled in practice in the context of DTs. Here we discuss the most prominent of the measures we identified during our analysis.
* Formal or informal communication: Earlier we mentioned a general lack of tooling or methodology as one of the reasons for inconsistency. In fact, we believe a majority of inconsistencies are avoided by established practices within an organization, its way of working, in-person communication, or personal knowledge. One interviewee validated this belief by mentioning the ineffectiveness of a well-defined workflow at their company and how they often needed to resort to informal in-person communication to resolve problems. An interviewee from the healthcare domain mentioned that they consult with their colleagues from the automotive and aerospace domains where, according to the interviewee, inconsistency issues are better understood.
* Use of standards: We found that the usage of various standards is key for avoiding inconsistencies and maintaining consistency. Our analysis suggests that these standards can be globally accepted or custom-built for an organization. International standards like the Functional Mock-up Interface (FMI) and Functional Mock-up Unit (FMU) were mentioned by two interviewees. One interviewee mentioned using standards defined by the European Union (EU) without mentioning specific ones. Furthermore, usage of custom-built standards
was mentioned by two interviewees.
* Use of external tool or technology: Four interviewees mentioned using external tools for handling inconsistency issues. In this context, we found that semantic technologies play a key role in the understanding of data and, in some cases, in conversion between different data formats. Two of the four interviewees emphasized the usage of ontologies and related technologies (e.g., SHACL[Shapes Constraint Language <https://www.w3.org/TR/shacl>]); a minimal illustration of such shape-based validation is sketched after this list. One interviewee mentioned semantic labeling of data and the use of graph databases (e.g., Neo4j[Neo4J - a graph data platform <https://neo4j.com>]), which allows mapping multiple ontologies and automatic data format conversion.
* Use of in-house tooling: We discussed software-related inconsistencies in the context of multi-tool environments in Section <ref>. Our analysis suggests that one way to reduce the number of inter-tool inconsistencies is to avoid multiple tools and use a single one.
One interviewee explicitly mentioned developing in-house tooling and using it for their DT development, which allowed them greater flexibility and avoided inconsistencies.
We found similar strategies used within two other organizations.
* Testing: In safety-critical domains (e.g., aerospace, healthcare), early and frequent testing or benchmarking is one strategy to identify potential problems, including inconsistencies.
* Use of predefined checklists: Two interviewees explicitly mentioned the presence of a predefined checklist or workflow for identifying inconsistency issues.
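As an illustration of the semantic-technology measures mentioned above, the following sketch is our own example (it assumes the Python rdflib and pyshacl packages and uses invented shapes and data): exchanged data is validated against a SHACL shape, so that a missing unit is flagged before the data crosses a domain boundary.

    from rdflib import Graph
    from pyshacl import validate

    # Hypothetical shape: every temperature measurement must carry a numeric value and a unit.
    shapes_ttl = """
    @prefix sh:  <http://www.w3.org/ns/shacl#> .
    @prefix ex:  <http://example.org/dt#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    ex:TemperatureShape a sh:NodeShape ;
        sh:targetClass ex:TemperatureMeasurement ;
        sh:property [ sh:path ex:hasValue ; sh:datatype xsd:double ; sh:minCount 1 ] ;
        sh:property [ sh:path ex:hasUnit  ; sh:minCount 1 ] .
    """

    # Data that forgot the unit, a typical source of cross-domain inconsistency.
    data_ttl = """
    @prefix ex:  <http://example.org/dt#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    ex:reading1 a ex:TemperatureMeasurement ;
        ex:hasValue "73.2"^^xsd:double .
    """

    conforms, _, report = validate(
        Graph().parse(data=data_ttl, format="turtle"),
        shacl_graph=Graph().parse(data=shapes_ttl, format="turtle"),
    )
    print(conforms)  # False: the missing unit is reported in 'report'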
§.§.§ Discussion
Our analysis suggests that inconsistencies are real issues in the context of DTs and directly or indirectly affected over 60% of our interviewees. The nature and source of these issues are highly diverse and depend on the actual DT implementation, the methodologies and tooling involved, and organizational and personal constraints. Consequently, we believe that it is highly challenging to produce a categorisation of these inconsistencies that is also complete.
Furthermore, we identified that inconsistency issues are often not
categorized as such. Instead, they are treated as regular issues encountered typically during system development or maintenance. Therefore, we believe there is a lack of awareness of the need for specialized methodologies or techniques for identifying and mitigating inconsistencies. We think this might have contributed to the inadequacy of related tooling as discussed earlier.
Furthermore, we identified situations where a vast majority of inconsistency issues are being avoided by simply using a single tool for development or developing large monolithic models. In addition, we think there is a large gap between academic innovation and industrial practice in the context of inconsistency management. For example, Torres et al. referred to several academically developed consistency management approaches in their systematic literature review <cit.>. However, we could not identify any commonality between this list and the approaches discussed by our interviewees, which we present in Section <ref>.
Therefore, we believe that there are opportunities for further research and development of inconsistency management tools in the context of DTs. However, we also identified that certain domains, e.g., aerospace, construction engineering, and automotive, are more aware of and mature in handling inconsistency issues. This is largely facilitated by standards or conventions accepted across related organizational entities.
In the context of DTs, inconsistency issues are common and can adversely affect development and maintenance activities. We identified appropriate communication and the usage of various standards as the most frequently practiced measures to avoid such issues. Conversely, we noticed that the absence of these measures is a major reason behind the emergence of inconsistency issues. Furthermore, we observed a general lack of tooling and methodology for effectively managing consistency. Therefore, we believe that further research and development is needed to understand these issues and to develop tools and techniques to avoid or mitigate them.
§.§ Model integration (RQ4)
RQ4 explores the topic of model integration in DTs. Model integration is the process of bringing together models to create the DT's virtual entity. These models interact to mimic a desired behavior of the actual entity. We intended to understand the approaches and design decisions the interviewees take when designing a DT.
An overview of the interviewees' approaches and design decisions is shown in Table <ref>. The second column shows our classification of the findings. The third and fourth columns show the number of interviewees who discussed a topic related to a specific class of findings. Not all interviewees discussed all the topics listed in Table <ref>; hence, within each topic classification the number of interviewees does not add up to 19 (the total number of interviewees).
§.§.§ Integration approaches
From our analysis, we observed two main integration approaches, namely multi-tool and single-tool. Fourteen interviewees explicitly mentioned the approaches used for model integration in DTs.
As shown in Table <ref>, nine interviewees use a multi-tool approach, while five use the single-tool approach.
Multi-tool approach. This approach is used when a heterogeneous modeling environment is present, in which distinct modeling tools are combined <cit.>. In this approach, each model needs to be encapsulated behind a defined interface to be able to establish communication with other models; a minimal sketch of such an encapsulation interface is given below.
The nine interviewees using the multi-tool approach agree that this approach has advantages for cross-domain collaboration because it allows different tools to be used. Other reasons mentioned for using this approach are information hiding of model details, also known as model masking, for IP (intellectual property) protection and reduction of model complexity. Here, technologies are used so that the model becomes a "black box" with only its input and output exposed, while the model details are hidden.
However, this approach has many challenges, such as data format consistency (discussed in Section <ref>), data and model semantics, and the complexity of relationships between the models.
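The following minimal sketch is our own illustration of such an encapsulation, not a scheme reported by an interviewee: a model is wrapped behind a small, tool-agnostic interface that only exposes named inputs and outputs and a step function, so that the model itself remains a black box.

    from abc import ABC, abstractmethod

    class ModelInterface(ABC):
        """Wraps a model as a black box with named inputs and outputs."""

        @abstractmethod
        def set_inputs(self, values: dict) -> None: ...

        @abstractmethod
        def step(self, dt: float) -> None: ...

        @abstractmethod
        def get_outputs(self) -> dict: ...

    class ThermalModel(ModelInterface):
        """Illustrative wrapper; internally this could call MATLAB, an FMU, or in-house code."""

        def __init__(self):
            self._temperature = 20.0
            self._heater_power = 0.0

        def set_inputs(self, values):
            self._heater_power = values.get("heater_power_w", self._heater_power)

        def step(self, dt):
            # Simplified first-order heating dynamics standing in for the real model.
            self._temperature += dt * (0.01 * self._heater_power - 0.05 * (self._temperature - 20.0))

        def get_outputs(self):
            return {"temperature_c": self._temperature}

    model = ThermalModel()
    model.set_inputs({"heater_power_w": 500.0})
    model.step(dt=1.0)
    print(model.get_outputs())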
Single-tool approach. This approach requires generating or transforming all models into a single software tool. This single tool performs the execution of all the models as a whole; hence we refer to it as the execution platform.
Our analysis showed that the selected execution platform in such cases was MATLAB. In addition, our analysis shows two strategies used by interviewees within the single-tool approach.
The first strategy is to use the same tool for both creating and executing the models; e.g., two interviewees use MATLAB as their modelling environment and execution platform.
The second strategy is to transform the original models into a format that is supported by the execution platform. Interviewees using this strategy mentioned that this requires re-work from them. An example of this practice is the use of Python as an execution platform that can support the execution of models made in Python or MATLAB. If there is a model in Modelica, then this model is transformed into MATLAB, which is a supported platform, requiring extra work from the modeller.
During the interviews, interviewees mentioned that this approach might limit cross-domain collaboration, but reduces the integration effort significantly.
§.§.§ Communication among models
As mentioned earlier in this section, an important ingredient of model integration is the communication between models. We aimed to understand how the interviewees implement the communication between models. We identified 15 interviewees who discussed this topic. Our analysis showed two key topics mentioned by interviewees: (1) important factors that influence communication design, and (2) implementation approach.
Factors influencing communication design. We identified three factors influencing model communication, namely the DT application, software dependencies, and stakeholder involvement. According to nine interviewees, the DT's application dictates the required communication frequency or the implementation approach. However, they mentioned that stakeholder involvement is crucial, because it determines the implementation feasibility by providing supporting knowledge, resources, and software. Moreover, they mentioned the importance of knowing the dependencies and requirements of the software modelling tools or platforms involved, to avoid operational failures or additional work.
Implementation approach. We identified two main approaches to implement the communication, namely the use of standardised communication protocols and the use of in-house technology.
Five interviewees shared that they use standardised communication protocols. The most frequently mentioned protocol, named by three interviewees, was OPC (Open Platform Communications). Among these five interviewees, three mentioned that they chose a specific communication protocol because their execution platforms support it.
The second implementation approach is the use of in-house technology for model communication. This approach was mentioned by two interviewees.
Both interviewees described their technology as an entity (e.g., a communication layer) that centralizes the data exchange among the models; this data can be accessed by any model, and a model only needs to specify which data to extract. We consider this technology to be similar to a publish-and-subscribe pattern for data exchange; a minimal sketch of such a layer is given below.
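The sketch below is our own minimal illustration of such a centralized data-exchange layer in the publish-and-subscribe style (it is not the interviewees' actual implementation): models publish named values to the layer, and other models subscribe only to the names they need.

    from collections import defaultdict

    class CommunicationLayer:
        """Centralized data exchange: models publish values; others subscribe by name."""

        def __init__(self):
            self._subscribers = defaultdict(list)
            self._latest = {}

        def subscribe(self, name, callback):
            self._subscribers[name].append(callback)

        def publish(self, name, value):
            self._latest[name] = value
            for callback in self._subscribers[name]:
                callback(value)

        def latest(self, name):
            return self._latest.get(name)

    bus = CommunicationLayer()

    # A controller model only declares which data it wants to extract.
    bus.subscribe("temperature_c", lambda v: print(f"controller received {v:.1f} C"))

    # A thermal model publishes its output without knowing who consumes it.
    bus.publish("temperature_c", 73.2)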
In conclusion, our analysis shows that the most popular
implementation approach is the use of communication protocols, used by five out of the six interviewees that discussed this topic.
§.§.§ Technology used
During the interviews, we intended to understand the types of technologies used for model integration. We found that fourteen interviewees described different technologies that are used for this purpose. Table <ref> shows a summary of this subsection; the technologies used varied greatly, but they can be clustered into four groups: Functional Mock-up Interface (FMI), DSL, own-design, and tool-provided technologies.
* FMI: Three interviewees specifically mentioned the use of FMI technology to integrate their models. This technology has been used in DTs <cit.> and is supported by over 170 modelling tools[<https://fmi-standard.org/>]; a minimal sketch of loading and simulating an FMU is given after this list. According to three interviewees from industry, two reasons to use this technology are its maturity and its compatibility with various modelling tools.
* DSL: Three interviewees mentioned the use of DSLs for model encapsulation and for establishing communication between these models.
Interviewees also use this technology for other aspects such as DT architecture, which is discussed in Sectionย <ref>,
and to unify data semantics among the models.
* Own-design: Four industrial interviewees indicated that they developed their own technology to integrate models in their DTs. Their technology is based on developing an interface for each modelling tool they have used; e.g., if the interviewees have models in the MATLAB and ANSYS modelling tools, they design an interface for each of them. According to these interviewees, this method gives them flexibility, and it can be expanded according to their needs. However, they admit that it requires effort and time every time a new modelling tool is introduced.
* Tool provided: Another four interviewees mentioned that they use what is supported by their execution platform. As with the monolithic approach explained in Section <ref>, these interviewees have two choices: either to transform incompatible models or to develop them in a format supported by their execution platform. According to these interviewees, the main reason for choosing this technology is their experience with the execution platform.
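To illustrate what FMI-based integration looks like in practice, the sketch below is our own example (it assumes the Python FMPy package and a hypothetical pump.fmu exported from some FMI-compatible modelling tool): the FMU is treated as a black box whose interface is read from its model description and which is then simulated.

    from fmpy import read_model_description, simulate_fmu

    fmu = "pump.fmu"  # hypothetical FMU exported from an FMI-compatible modelling tool

    # The model description exposes the interface (variable names, types) without revealing internals.
    description = read_model_description(fmu)
    print([v.name for v in description.modelVariables])

    # Simulate the FMU as a black box and read back its outputs.
    result = simulate_fmu(
        fmu,
        start_time=0.0,
        stop_time=10.0,
        start_values={"inlet_pressure": 2.0e5},  # assumed variable name
    )
    print(result["time"][-1])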
§.§.§ Tooling type
The interviewees mentioned two distinct uses of
tooling.
First, tools to develop models for their DTs, for which they all mentioned using commercial software such as MATLAB. Second, tools to execute all the models, as discussed in Sectionย <ref>.
We divided the identified execution platforms into two categories:
developed in-house and commercial. Tableย <ref> shows a preference for the development of in-house tooling among the interviewees, particularly industrial interviewees. One of the interviewees shared that they used a combination of in-house and commercial tooling, which was therefore also considered in the statistics shown in Tableย <ref>. Six interviewees did not mention the tooling used for DT integration.
* In-house: The data collected
suggests that interviewees from industry prefer to develop their own tools for DT development. The main driver to do so
seems to be the DT's application and its domain requirements. Two interviewees mentioned the need for specific execution and modelling requirements for their event-based DTs, driven by the DT application needs. Another two based their tooling on visual models, which required specific communication technologies and the integration with specific software tooling from their stakeholders. The final four interviewees mentioned that they use modelling tools (commercial and in-house) frequently used in their own domain and thus required to work together. Hence, they decided to create their own tools.
To summarize, industrial interviewees seem to prefer developing their own tools to realize their specific integration requirements.
* Commercial: The use of commercial tools varies between modelling (e.g., MATLAB), CI/CD (Continuous Integration/Continuous Delivery), cloud frameworks, and system design tools. According to the interviewees, the main reason for their use is the efficiency of component integration. However, commercial tools have limited support for external software. As a consequence, these interviewees are forced to transform their models to a supported format; this might mean re-work or limit collaboration. One interviewee mentioned combining commercial and in-house tooling as helping the collaboration between domains, but noted the challenge of the tools' semantic uniformity.
§.§.§ Challenges
Seven interviewees shared the challenges they face when integrating models for DTs. Based on our analysis, we decided to classify these challenges into three categories, namely challenges in model heterogeneity, data heterogeneity, and complexity.
Model Heterogeneity. These challenges are related to the types of models that need to be integrated into the DT. We identified two challenges related to models: integration of legacy code models and integration of models from different software platforms. Three interviewees mentioned that these challenges seem to be particularly difficult and require further research to address them.
Data Heterogeneity. The data heterogeneity challenges refer to data format and semantics; both challenges are present due to cross-domain collaboration. An example of a semantics challenge is when two terms referring to the same concept are used in different domains, such as "pressure drop" and "pressure gradient". Using multiple terms can generate confusion, which might delay development. According to our analysis, interviewees seem to find ways to address this challenge. For the data format challenge, a solution is the development of a communication layer to harmonize data formats between models. For the semantics challenge, an interviewee uses semantic web technology to standardize the semantics.
Complexity. The complexity challenges
are related to models and the DT as a whole. Two interviewees defined complexity of a model as the level of fidelity. In addition, they also stated that high fidelity does not necessarily translate to better models.
Thus, these interviewees suggest considering the purpose of the DT as the key design factor for defining the fidelity of the models.
Another factor related to model complexity is model constraints, such as the numerical constraints of a model. Two interviewees mentioned that understanding a model's limitations is critical because model constraint issues can be confused with integration issues.
An example is when a model is constrained to positive numbers as input and is subsequently integrated with another model which can yield negative output values. If the constraints of the former model are not known by the person doing the integration, then a test might yield an error, particularly if the system is tested in out-of-bound conditions of the former model. This error might be confused with a software integration error, rather than recognized as a limitation of the system. A minimal input guard illustrating this situation is sketched below.
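The following minimal guard is our own illustration of this situation: by checking a model's documented input domain at its interface, an out-of-bound value raises an explicit constraint error instead of surfacing later as a seemingly unrelated integration failure.

    import math

    class ModelConstraintError(ValueError):
        """Signals a violated model constraint, as opposed to an integration defect."""

    def flow_model(pressure_drop):
        # Documented constraint of this model: pressure_drop must be non-negative.
        if pressure_drop < 0:
            raise ModelConstraintError(
                f"pressure_drop={pressure_drop} is outside the model's valid domain (>= 0)"
            )
        return math.sqrt(pressure_drop)  # simplified stand-in for the real model

    def upstream_model():
        return -0.3  # an upstream model may legitimately yield negative values

    try:
        flow_model(upstream_model())
    except ModelConstraintError as err:
        print("Constraint violation, not an integration bug:", err)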
The second complexity challenge discussed is related to the system as a whole. Two interviewees stated that a DT can be composed of several components, which increases the complexity and hence the difficulty of understanding the relations between the components. In addition, understanding those relations becomes critical for solving integration issues.
§.§.§ Discussion
Based on our analysis the multi-tool approach for integration seems to be more popular among the interviewees. Our opinion is that the multi-tool approach offers better maintainability and cross-domain cooperation, due to the separation of entities. On the other hand, interviewees agreed that this approach requires more effort and knowledge of software engineering. Among the interviewees, the most popular approach to model communication is using communication protocols such as OPC.
According to our analysis, the selection of communication technology seems to be influenced largely by the experience of the developer. Across the interviews, we found diverse technologies used for integration. However, two technologies were mentioned by three interviewees each: DSLs and FMI.
Regarding tooling for integration, industrial interviewees seem to prefer in-house tooling (seven out of nine industrial interviewees). Our analysis of the challenges suggests that tooling and technologies to facilitate cross-platform integration are required, in addition to the consistency management methods discussed in Section <ref>.
The preferred approach for the integration of models is a multi-tool approach, which requires interface development. The preferred technology for such interfaces seems to be the use of standardized communication protocols.
Although there is no clear preference for integration technology, two technologies seem to be frequently used: FMI and DSLs. Finally, there are three main integration challenges. First, model heterogeneity, because models can be of different types or developed on different platforms. Second, data heterogeneity, which relates to differences in data types and semantics. Third, the DT's complexity, which relates to two issues: the fidelity of the models and the number of components a DT can comprise.
§.§ Model orchestration (RQ5)
We define DT model orchestration as the definition of the communication actions and execution sequence of the models <cit.>. To achieve this, the orchestration should perform interface evaluation activities such as data compatibility checks and indicate the beginning and end of a model's execution <cit.>.
With RQ5 we aimed to understand interviewees' perceptions and practices related to such orchestration in DTs. Based on our analysis, we identified five main topics of discussion related to model orchestration, namely understanding, implementation, technology, tools, and challenges. An overview of those topics is shown in Table <ref>'s first column. The second column shows our classification of the findings to facilitate reading the results. The third column shows the number of interviewees who discussed an item from a specific class.
§.§.§ Understanding
In this section, we cover the interviewees' discussion on model orchestration understanding. We divided the discussion into two topics: interviewees' explanation of what orchestration is, and its components.
Definition. Eleven
interviewees shared their definitions of model orchestration. They all agree that orchestration is the scheduling of model execution
in their DTs. Only four of them specifically mentioned that the method of data exchange is part of the orchestration.
In addition, these eleven
interviewees also expressed their opinion on the importance of orchestration. From that we concluded that model orchestration is highly important, as three interviewees explicitly expressed it and another six implicitly did so. Yet two interviewees argued that orchestration is not needed in DTs, because the complexity of their current DTs is not high.
Moreover, one interviewee expressed that the orchestration, if defined in a formal mathematical manner, can be used to reason about a physical system, not only to define the execution of the models.
Components. Five interviewees specifically shared the components they consider necessary for designing the orchestration of models. All other interviewees mentioned that their orchestration implementation depends on purpose and domain, and thus they did not define specific components for orchestration. Table <ref> shows the components mentioned by the five aforementioned interviewees and the number of mentions for each component.
Concerning the trigger, eight interviewees explicitly stated that it is a key component of orchestration. Nevertheless, the type of trigger depends on the DT's application and domain. We observed two distinctive trigger definitions as a function of the DT's application. Two interviewees working on control applications stated that the orchestration should be made based on a time schedule, where the data exchange between models and the execution of each model should be synchronized based on a global clock. Another interviewee,
working on event-based applications, mentioned that the definition of the trigger for each event is the most important factor to schedule each model execution step. The remaining five interviewees explained that the trigger for model execution depends on DT's application and domain.
Regarding the scheduling approach, interviewees mentioned
two types, namely sequential scheduling and concurrent execution. As for the data exchange method, interviewees defined it as the scheduling of data exchange between models, e.g., First In, First Out (FIFO).
Our analysis shows two different roles of time in orchestration. The first role is as a trigger for model execution in control applications, known as time scheduling. The second role is as an event record in event-based applications, using time stamps for each event; this is also listed as an important component in Table <ref>.
In conclusion, our analysis shows a general consensus on orchestration as all activities ensuring the correct scheduling of model execution. The majority of the interviewees consider orchestration important for the development of DTs. Moreover, these interviewees agree that the orchestration design requires defining the scheduling and the method for data exchange, in addition to defining a global time for the DT application and labeling produced data with time-stamps; a minimal sketch combining these components is given below. Other components of the orchestration design seem to depend on the DT's applications and domain.
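The sketch below is our own minimal illustration of how these components can fit together (it is not an orchestrator described by an interviewee): a global clock triggers the models in a fixed sequence, and exchanged data passes through a FIFO queue stamped with the global time.

    from collections import deque

    class Orchestrator:
        """Time-triggered orchestration: a global clock drives a fixed execution sequence."""

        def __init__(self, step_size):
            self.step_size = step_size
            self.global_time = 0.0
            self.models = []      # executed sequentially in registration order
            self.queue = deque()  # FIFO data exchange between models

        def register(self, model):
            self.models.append(model)

        def step(self):
            for model in self.models:
                outputs = model.step(self.global_time, self.queue)
                for name, value in outputs.items():
                    # Every exchanged value carries a time stamp from the global clock.
                    self.queue.append((self.global_time, name, value))
            self.global_time += self.step_size

    class SensorModel:
        def step(self, t, queue):
            return {"temperature_c": 20.0 + 0.1 * t}

    class ControllerModel:
        def step(self, t, queue):
            readings = [v for _, name, v in queue if name == "temperature_c"]
            heater = 100.0 if readings and readings[-1] < 22.0 else 0.0
            return {"heater_power_w": heater}

    orchestrator = Orchestrator(step_size=1.0)
    orchestrator.register(SensorModel())
    orchestrator.register(ControllerModel())
    for _ in range(3):
        orchestrator.step()
    print(list(orchestrator.queue)[:4])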
§.§.§ Implementation approach
This subsection discusses the implementation approaches shared by the interviewees for model orchestration.
Thirteen interviewees shared their specific implementation method.
Six interviewees stated that they use a pragmatic approach, while the other seven shared specific implementation approaches, depending on their domain.
Pragmatic. Our analysis suggests that the pragmatic approach aims to design the scheduling of models by reproducing the behavior, e.g., if two tasks occur simultaneously, then concurrent execution is used. According to the interviewees, this approach requires iterative testing, design, and implementation.
Our analysis shows that interviewees using the pragmatic approach design the orchestration by directly
writing code to define the scheduling of models.
DT application specific. We found seven interviewees who stated that the DT application determines the orchestration approach. During our interviews, we collected six different approaches that are shown in Tableย <ref>. Each approach defines how to implement the models' scheduling. The only approach used by more than one interviewee, with control applications, is time-scheduling, in which time triggers execution for each model. In addition to the DT application, the orchestration approach seems to be related to the knowledge
and domain of the interviewee. There are two examples shown in Table <ref>. The first example is related to the DT application of design & application, for which two orchestration approaches were used. The first is based on defined rules that activate the models. The second uses concurrent execution of all the models.
According to these two interviewees, they chose their orchestration approach because of their knowledge of software and co-simulation, respectively. The second example concerns the DT application of analysis, where the approaches for orchestration are a standard workflow and event triggers. These two interviewees explicitly chose their approach based on their domain.
In conclusion, the interviews indicate that the approaches are a function of the DT's application, and of interviewee knowledge and domain. In addition, around half of the interviewees seem to design the orchestration by attempting to pragmatically reproduce the behavior.
§.§.§ Technology
This subsection explains the technologies for orchestration that interviewees discussed during the interview. Ten interviewees
shared the specific technologies
they used. We have classified these into three categories:
model-based, own design, and tool provided.
* Model-based. Five interviewees stated their preference for using model-based technologies to design the orchestration.
The two technologies described by these interviewees were DSLs and ontologies. Four interviewees use DSLs to design the scheduling of the models. Three of them defined their own DSL, while another uses SysML. One of them also uses their DSL for system verification. Another interviewee uses an ontology to link data between models and thereby orchestrate data exchange.
* Own design. Three industrial interviewees explained that they designed their own technology for orchestration.
The technology is based on their expertise and domain. None of them explained their technology in detail, but rather shared how it works at a higher abstraction level.
We identified two distinct technologies based on the trigger component explained in Subsection <ref>: first, technology that supports time as a trigger; second, technology that supports events as triggers.
* Tool provided. Two interviewees explained that the orchestration is performed by their execution platform, thus they do not know what technology is used for the orchestration. Their execution platform is a modeling tool like MATLAB, Anylogic[<https://www.anylogic.com/>], and SymPy[<https://www.sympy.org/en/index.html>].
In conclusion, our analysis suggests that the most popular technology is model-based, particularly DSLs.
We observe that all orchestration technologies focus on the scheduling of the models, but not on how to exchange data among them.
§.§.§ Tool type
In this subsection, we explain how the interviewees select the tools to develop a DT. Of the 19 interviewees, 13 shared the tools they use, in particular their execution platform, which performs the orchestration in a DT. Six interviewees from industry developed their own tool, while seven (five from academia and two from industry) use a commercial tool. One interviewee appears in both categories since he uses a combination of in-house and commercial tools.
* In-house. Six interviewees explained that they have developed their own execution platform to schedule model execution. Our analysis suggests that the main reason for doing so is to minimize the integration effort of deploying their DTs in existing information systems. For example, one interviewee mentioned that he developed his orchestration tool using C# because their company used C# in their information system. However, from our analysis, we also observed that these interviewees aim to satisfy specific requirements for their DT that demand distinct execution functionalities. An example from one interviewee is the need to control the global time of execution, which he described as being able to execute a model from the present to the future, or from the present to the past. Based on our findings, we observe two main reasons for in-house tool development: (1) to minimize deployment and integration effort; and (2) to realize specific execution requirements.
* Commercial. We identified three types of commercial tools used as execution platforms. The first is to use modelling tools to orchestrate the models. This type of tool is used by two interviewees who also use the single-tool approach for the integration of models, as explained in Section <ref>. The second type is the use of system design tools such as IBM Rhapsody <cit.> and HEEDS[<https://www.plm.automation.siemens.com/global/en/products/simcenter/simcenter-heeds.html>], which is used by two interviewees. These tools facilitate the use of external software as long as it is supported by the vendor. The third type encompasses tools that support a DSL to sequence entities for execution; this tool type is used by two interviewees. The tools mentioned by interviewees are PDDL (Planning Domain Definition Language) and Dezyne from Verum[<https://www.verum.com/DiscoverDezyne>]. The former also supports formal verification.
Based on our findings, we observed that the use of commercial tools is slightly preferred over the development of in-house tools. Interviewees did not specifically mention why they preferred commercial tools, but three mentioned some reasons: previous tool knowledge, external software support, and facilitating DT system integration. The last reason was indicated by the interviewees who use a modeling tool for orchestration because they considered it easier to transform all models into a single modeling tool than orchestrate cross-platform models.
§.§.§ Challenges
Through the interviews, different challenges were mentioned by interviewees. We have classified them into three groups: model fidelity, model understanding, and interoperability. Of the 19 interviewees, 10 (53%) mentioned such challenges.
Model fidelity. The model fidelity challenge refers to performance issues caused by the accuracy of models. We found three interviewees who discussed this challenge. The fidelity of a model is defined by interviewees as the level of accuracy between the actual entity and its model. This challenge particularly concerns conflicting requirements between real-time execution and high-fidelity models. The interviewees mentioned that solutions to this challenge, such as increased computational power, are not always available or could increase execution complexity by adding distributed execution.
Model understanding. Model understanding encompasses two types of issues. The first is related to an unclear understanding of the function of the model in the system, which can lead to misuse of the model. An example is the numerical issues mentioned in Subsection <ref>. The second issue is a lack of understanding of model relationships, e.g., unit consistency and input/output data structure formats, as mentioned in Section <ref>.
Interoperability. We identified three main challenges in this category. The first challenge, discussed by three interviewees, is related to the cross-platform nature of models for DTs, which yields semantic challenges. Another challenge is the design of model scheduling, discussed by two interviewees, particularly when the complexity of the system increases. They mentioned that the scheduling of models should be designed based on the desired purpose; hence, a different purpose requires a different orchestration. Finally, another interviewee mentioned that the biggest challenge is related to the interaction of different types of models, e.g., combining continuous- and discrete-time models, due to their different nature and solver strategies.
§.§.§ Discussion
Our analysis suggests that the majority agrees that orchestration means correctly scheduling the execution of models. In addition, there are five key components for implementing the orchestration: a trigger for model execution, a scheduling approach, a data exchange method, a global time, and time-stamps, as shown in Table <ref>.
During our analysis, we observed various scheduling approaches which are highly influenced by the DT's application and the developers' knowledge. We believe that to facilitate orchestration design, more research should be done to create a general approach.
Technology selection for orchestration seems to favor model-based approaches, with DSLs as the most popular.
Our findings suggest that influencing factors for tool selection are the interviewee's domain and previous knowledge of specific tools.
We believe that the challenges related to model fidelity and model understanding can be tackled by clearly defining a DT's purpose and developing or modifying the models accordingly. Regarding the interoperability challenge, we believe that research on tools to facilitate interoperability, particularly cross-platform and cross-model-type interoperability, can tackle this challenge.
The main task of orchestration is to correctly schedule the execution of models. In addition, the execution trigger is a key component of orchestration. The approach for defining the trigger is highly dependent on the domain and the DT application.
Moreover, the orchestration design seems to require much domain knowledge and is highly influenced by the DT's application and the developer's knowledge. Technology and tool selection are also highly influenced by the DT's application. Further research is needed on tools to facilitate the interoperability of cross-platform and cross-nature model types.
§.§ Validation and verification techniques and tools (RQ6)
RQ6 was aimed at understanding what specific techniques and tools are used to verify and validate DTs and their overall dynamic behavior. Here we refer to dynamic behavior as the behavior observed during the collective execution of the models and other components in the VE.
It was observed that when answering the related interview questions, interviewees tended to use "system" interchangeably to describe either the DT, the VE, or the actual entity (AE).
In the rest of this subsection, we discuss the techniques used for verification and validation of systems, as brought forward in the interviews.
§.§.§ Importance of validation
We believe validation of DTs to be highly important, as it reduces the possibility of errors in their functioning and in their behavior. The importance or need for validation was explicitly and implicitly addressed during the interviews, which are presented below.
An industrial interviewee explicitly mentioned the impossibility of designing systems free of errors and thus, the need for validation.
The need to validate a DT in order to make it reliable was described by one interviewee as relating to reflecting the actual entity with sufficient fidelity, trusting the calculations, and, by necessity, considering boundary conditions. He further described that the DT needs to be open to changes and subsequent retraining, recalibration, and revalidation.
Furthermore, our analysis suggests that validation is required
in cases where a highly accurate DT is being used.
Such highly accurate DTs of critical modules are then reused for different purposes across product lines, hence, across multiple DTs.
§.§.§ Challenges in validation
Some of the general validation challenges of DTs as mentioned during the interviews are discussed here.
Two industrial interviewees expressed that validating a DT is much harder than validating a real system and it is infeasible to validate every aspect of a DT due to its numerous possibilities and degrees of freedom, myriad of parameters and corresponding calculations.
Our study suggests that in some
domains, such as the space domain,
testing is more resource- and time-intensive than development. An academic interviewee speculated validating a DT to be challenging because the composition of multiple models and the resulting emergent behavior complicate matters. Measuring the quality of DTs has been voiced as a concern in the interviews, considering the lack of a standard methodology to do so.
§.§.§ Verification and Validation techniques
The different verification and validation techniques and strategies put forward by interviewees have been depicted in Figure <ref>.
The observations from the interview may not necessarily encompass all aspects of DTs which need to be validated, but only those the interviewees stumbled upon in their DT or which they consider of highest importance from their standpoint or from literature.
According to one interviewee, validation of a DT could be done by understanding how well it has served its purpose, e.g., optimization, decision making, or predictive maintenance. Another interviewee shared that validation is part of the process of updating models based on the continuous data from the actual entity. It has been observed that validation of DTs is highly dependent on their purpose, the DT's application domain, and the types of models and data used in the DT. For example, an interviewee from industry mentioned that when DTs are created as visualizations for marketing, validation is not required. However, he also mentioned that when high fidelity, consistent behavior, and reliability in DTs are required, validation is crucial. Interviewees discussed various techniques for verification and validation based on different cases; we next present our findings on these.
* Comparing AE and VE behavior:
This technique concerns behavior comparison between the VE and the AE, with the aim of checking for differences. We identified thirteen interviewees who discussed this informal validation technique. This is done in different ways and at different abstraction levels.
One way is using observational tests at a high level, where the behaviors of the AE and the VE are observed together using 3D visualization and checked for synchronicity and differences. At times, specific inputs or measurements from the AE are given to both the AE and the VE, and behavior matching is checked.
In the case of DTs used for predictive maintenance, validation is done by initially observing the VE-based predictions and later observing and comparing the output of the AE to those predictions to check their accuracy.
Deeper observational tests are done by creating visual representations, such as graphs or 3D visuals, of the behavior of both AE and VE and superimposing them to observe the extent of overlap and the differences. In other cases, AE and VE behaviors are translated into events and actions in a Gantt chart <cit.>.
The timing and sequence of actions are compared between AE and VE to check if there are any differences. Furthermore, in these cases, the system dependencies of both AE and VE are also compared, by creating respective dependency graphs and comparing these. Validation is a challenge when there is a combination of continuous and discrete behaviors in AE and VE. In such cases, continuous signals are transformed into discrete ones and then the behaviors of AE and VE are compared to check for equivalence. In some cases, to compare distributions statistically, the amount of deviation is quantified using Kullback-Leibler divergence tests <cit.>; a minimal sketch of such a comparison is given after this list.
There are some challenges with this type of validation. Being dependent on measured data from the AE, it is unreliable according to two interviewees, due to incorrect data stemming from measurement errors, faulty equipment, or incorrect interpretations. Moreover, we speculate that there could be other issues in a DT, such as consistency issues at runtime, which may not be discovered by the aforementioned methods.
* Formal methods & tools:
We identified two interviewees from industry and one interviewee from academia who mentioned using formal methods in DT development. Formal verification techniques have been used to validate the behavior of DTs with the help of tools such as Verum's Dezyne <cit.> and Coco[<https://cocotec.io/>].
An interviewee from industry mentioned that DSLs have been used to specify the behavior of a system and to transform such specifications into timed automata models in UPPAAL <cit.>, allowing model checking. However, model checking is not very scalable, considering state space explosion <cit.>. He discussed that in order to address this challenge, recurring behavioral patterns were identified and validated using model checking, rather than the entire system. In this way, some level of correctness guarantee was provided.
Another interviewee mentioned that formal methods have been used for consistent execution of models in DTs: formal semantics for such execution were defined and were helpful to understand differences in execution between models. He also shared that
for their model interfaces, they formally proved that the components adhere to the interfaces to avoid interface violations caused by component changes.
He mentioned that this ensured consistent behavior when integrating components. He further discussed that in such cases, model-based testing is also done, to ensure that relationships between provided and required interfaces are not violated.
As witnessed by the above, three interviewees mentioned formal techniques for validating DTs; no others did. We speculate that the lower adoption of this method could be attributed to scalability issues.
As mentioned before, formal techniques such as model checking are not scalable, due to state space explosion problems <cit.>. Thus, formal methods have disadvantages related to scalability and broad applicability across domains.
* Testing and corresponding tools:
DTs also often undergo testing across their entire lifecycle, in order to check adherence to requirements of a DT and its components. At times, some interviewees used the term 'testing' to discuss two items, namely, using scripts to test the system and comparing the AE and VE behavior using observational tests.
Due to the lack of clarity in this term being used for different approaches, we did not perform a quantitative analysis of interviewee responses for testing.
As mentioned earlier, in some domains such as the space domain, testing is highly time and resource intensive. In such cases, careful consideration is needed on when and what aspects to test, based on the DT and its context.
Moreover, in these cases, testing effort is then balanced with effectiveness. For instance, after resolving an integration issue, only local tests are done. On the other hand, full regression testing would be executed when replacing an entire sub-component.
An observation worth mentioning here is that this is not only specific to testing DTs, but generally used in the context of testing software systems.
Different types of tests have been mentioned by interviewees such as model-based testing, integration testing, and unit testing. In some cases, static analysis is also performed to detect coding errors,
thus, helping gain confidence about the system.
Some tools mentioned by interviewees are Axini's[<https://www.axini.com/en/>], Matlab <cit.>, and Unity <cit.>, which are used for creating and testing models on the fly.
As a related challenge, one interviewee explicitly mentioned that, for DTs in the space domain, testing requires more effort than development, estimating a factor of three to four difference. As mentioned earlier, testing is a very resource- and time-intensive activity, requiring the right facilities and people to be available.
ยง.ยง.ยง Strategies for DT validation and to facilitate validation
We list below the strategies which interviewees discussed for validating DTs and for facilitating such validation.
* Validation after model reduction:
Detailed multi-physics models are complex in nature and might not allow real-time execution, hence, behavior computation at global scale is not possible. The complexity of such models is therefore reduced to obtain simplified models which work in real time (using neural networks, for instance). However, the complete detailed methodology for model reduction in this context was not discussed during the interview. This model reduction helps in facilitating the VE's behavior validation.
* Early validation of assumptions in the case of uncertainties:
We identified two interviewees who
discussed using assumptions about the VE's intended behavior during design, related to uncertainties, unforeseeable events, or unpredictable environments.
Such assumptions need to be validated as early as possible, with one interviewee suggesting a shorter design loop. The behavioral assumptions could be validated against the AE, if possible. However, in domains such as the space domain, where this is not possible, and where no relevant stakeholders are available for validation either, the interviewee mentioned that an assessment of the risks of unforeseeable events and uncertainties is done,
in order to reduce the possibilities of failure to acceptable levels.
* Validation by
increasing complexity:
We found three interviewees who advocated bottom up DT validation, gradually increasing complexity. Even when comparing the behavior between and VE, it could be started with simple experiments, followed by more complex ones. The models in a DT could be validated initially and then, the integration of models could be validated.
A modular approach can also be adopted where instead of validating the entire DT at once, critical parts of the system are validated initially, followed by other parts and then the integration of all parts.
* Validation by operating DT at out-of-bound conditions:
We identified one industrial interviewee and one academic interviewee who
mentioned validating a DT by operating it at out-of-bound conditions. Such pushing beyond design space helps in determining whether the scientific principles based on which the DT was designed, still hold. This technique is used to test functional issues in DTs and especially, whether the design of a DT encompasses more than just the data from the field, and to check for issues in cases of extrapolation.
This technique also helps to find defects and to test whether behavior
associated with every DT parameter is well defined, e.g., when operating at out-of-bound conditions whether it gives the right error messages in unrealistic scenarios.
This method of testing helps to increase confidence and reliability of DTs. According to one interviewee, this technique has also been used as a test of quality of DTs in the print industry.
* Continuous validation of DTs:
Three interviewees discussed continuous validation of DTs. They mentioned that models in a DT undergo updates due to data continuously communicated from the AE in the field, feedback from subject matter experts or field service engineers, or bug fixes. When such updates occur, benchmarks are run continuously to validate the DT and ensure that the same overall behavior is exhibited by the DT before and after updating.
One interviewee also advocated validation after reuse of artifacts during DT development.
* Other cases of DT validation: Validating a DT by using another DT, by simulating the behavior of the DT before deploying, was also discussed by an interviewee, referring to Ahlgren et al. <cit.>.
In the chemical domain, a DT is validated by comparing it with the design information in terms of mass and heat balance equations. One interviewee from academia envisioned that over time validation of their
DTs would be done by having a human-in-the-loop.
ยง.ยง.ยง Not validating the DT
Eighteen interviewees discussed validating DTs by using the above mentioned methods,
with just one interviewee indicating not validating their DT. He shared that there are currently no tests in place to check if the DT is functioning as intended. Moreover, any issues in the DT behavior can only be observed when the simulation is running. He elucidated that since he is already aware of how the DT should behave and deviation in behavior can be observed during the execution, no validation is performed on his DT.
ยง.ยง.ยง Discussion
Based on the observations presented above, all but one interviewee currently perform some form of validation of the complete DT or parts of it. In fact, we identified cases where it is necessary to validate the DT continuously as it undergoes changes across its entire lifecycle.
From this we can infer that validation of DTs is highly important and it is performed widely across the engineering spectrum in both academia and industry.
Furthermore, we also identified several challenges involved in validating a DT. One major challenge is that most validation techniques can only cover certain aspects of a DT
and not all. Thus, our study suggests a multi-faceted approach, combining multiple techniques, is required to validate the different aspects of a DT.
We identified 13 out of 19 interviewees who are currently validating their DT by comparing the behavior of the AE and the VE. In addition, we found three interviewees who are currently using formal methods to validate their DTs. Moreover, testing has also been used as a technique for validating DTs. We also found one interviewee who does not validate their DT currently. Our analysis suggests that the choice of validation method depends on the DT's purpose, domain, and application; and requires a multi-faceted approach, possibly combining multiple aforementioned techniques.
ยง.ยง Properties for validation (RQ7)
The goal of this research question is to understand which validation properties are considered important and need to be validated in the context of DTs. During the interviews, ten interviewees explicitly mentioned one or more such properties in relation to their respective DTs. We discuss these properties and associated challenges based on the interview data analysis.
ยง.ยง.ยง Introduction and challenges
We intended to understand which DT aspects the properties for validation should cover. Two interviewees provided a high level generic overview on this. One industrial interviewee mentioned that the properties to be validated in a DT lie on many levels. Another interviewee expressed that the properties should enable the observation of critical things which might go wrong in DTs.
During the interviews, the challenges with validation properties in DTs were discussed. A challenge mentioned by two interviewees from academia,
was how to measure the quality of DTs and which properties could be used for this.
They further mentioned the challenge of quantifying the properties which could be used as a measure of a DT's quality. This challenge also entailed how these properties could be defined in order for them to be computable.
We intended to identify such properties which need to be validated in a DT.
ยง.ยง.ยง Properties for validation of DTs
Different types of properties to be validated in a DT were discussed by the interviewees. Interviewees used the terms "properties" and "parameters" while discussing this topic. We classify the properties at a high level into (1) behavioral properties, and (2) qualitative properties. We discuss these classes below.
Behavioral properties:
From the validation method based on comparing AE and VE behavior, discussed in Section <ref>, it can be understood that VE fidelity is an important property that interviewees consider. In addition, five interviewees explicitly mentioned that the time difference in execution between AE and VE is an additional property for which validation is needed.
We identified six interviewees from academia and nine from industry who conveyed explicitly or implicitly that the properties to validate depend on DT's purpose, application, or domain. These properties related to the domain, DT's application or purpose will be discussed below. In addition, properties related to dynamic consistency of DTs will also be discussed below.
* Temporal properties related to domain, DT's application or purpose:
Several temporal properties were discussed by the interviewees. An interviewee from the space domain emphasized the importance of timing requirements in this domain and thus, of validating these properties in the DT. One property discussed was the availability of sufficient margin in timing budgets to meet software deadlines.
He further mentioned properties specific to space missions where multiple computers are used to avoid failures which could lead to loss of onboard human life. These properties are whether the secondary computer switches and takes over control when the primary computer fails within the time margin available; and the retrieval of
stored data during the mission.
One academic interviewee mentioned query execution time as an important property for connected and autonomous vehicles. He further expressed that in semantic web use for DTs, low latency is an important property. Another academic interviewee shared that the maximum time taken for the DT to perform an action is an important property to consider. One academic interviewee mentioned the property of communication time between nodes, and an industrial one mentioned validating certain parameters in the communication layer, though which properties this pertained to was not made explicit. One interviewee from industry also expressed that timeliness is an important property to be considered in a DT. Real time properties, in the form of activation time and software deadlines, were discussed by two interviewees from industry.
* Other properties related to domain, DT's application or purpose:
Other properties specific to domains were also discussed by interviewees. In the case of the lighting domain, e.g., a DT comprising several lamps in a room, these can be simple properties such as whether both lamps are only ON or OFF together.
One interviewee from academia discussed properties for temperature control in a building such as how much data is required and needs to be stored for decision making, and how much redundant information is present in the stored information.
* Properties related to dynamic consistency:
Some of the functional properties to be validated in DTs which were discussed are deadlocks and bottlenecks. One interviewee mentioned different types of deadlocks such as various software deadlocks:
circular reference deadlocks; deadlocks related to data permutation; behavioral deadlocks resulting from the interplay of different behaviorsโfor instance, interplay of behavior related to kinematics, geometry, and time; and other types of deadlocks. Several temporal properties were also discussed, related to dynamic consistency issues, such as latency in communication between modelling tools; round-trip time and properties on how swiftly a tool sends and receives messages, and response times.
Qualitative properties:
Some qualitative properties were also specified by an academic interviewee for a conglomerate of DTs such as modularity and composability.
ยง.ยง.ยง Discussion
From our findings, we noticed that some interviewees expressed their concerns about how to measure the quality of a DT and how to quantify the relevant properties. We also observed that works such as Dalibor et al. <cit.> also discuss this concern on quality assurance and requirements for DTs. An academic interviewee mentioned using acceptance tests based on the Kano model <cit.> to measure the quality of requirements in a DT. However, the effectiveness of this model and its applicability for DTs across different domains was not discussed.
Our interviewees, from a range of domains, uniformly agreed that the properties for DT validation depend on the DT's domain, purpose, or application.
Fifteen interviewees discussed that DT properties to be validated depend on a DT's domain, purpose, or application. In addition, behavioral and qualitative properties have been discussed in the interviews. Specific functional and temporal properties are of interest to some of these interviewees as being key to address dynamic consistency issues in a DT.
ยง.ยง Future vision of Digital Twins (RQ8)
One interview question was aimed to understand intervieweesโ perspective on the future evolution of DTs. During our analysis, we identified interesting and significant outcomes from this discussion. In order to accommodate and coherently present these results, we formulated RQ8. Please note that unlike all other RQs, this RQ was defined after conducting the interviews.
ยง.ยง.ยง DT as a tradeable asset
Two interviewees from industry expressed that DTs could evolve in the future to become a tradeable asset which would be made available alongside the corresponding AE: when an AE is traded between two parties, its corresponding DT could also be traded or provided access to. Furthermore, at times, organizations outsource their projects to a third party for developing the AE. During its development, the third parties might possibly create a DT of that AE for improving the AE's design. In such cases, the organization outsourcing its project would not only expect the third party to develop the AE, but would also want access to or ownership of the corresponding DT.
This transferring or sharing of DTs across the entire lifecycle has been predicted by these two interviewees to be a trend in the future of DTs.
This could be helpful in two ways. Firstly, third-party organizations working with this AE, for instance for maintenance, operations, or other collaborations, might require use of the corresponding DT in order to provide their services efficiently and effectively. These third parties could then be provided access to the DT, since the organization owning the AE also owns or has access to the corresponding DT.
Secondly, when organizations gain ownership of or access to DTs along with the relevant data, they have the option to experiment with the DT to see how they can make the best use of the newly traded AE for their organization.
This experimentation with DTs in a way helps to realize the complete capabilities of an AE and thus to make the best use of this AE.
One of the aforementioned two
interviewees further specified that the ownership of data in a DT becomes a challenge when providing services based on such data. This interviewee speculates that when a service is provided to one party, using data owned by another party, then the data could be used to generate revenue
and thus also become a tradeable asset. However, the other interviewee mentioned
that shared ownership of a DT also brings in an additional complexity in terms of reliability of the shared DT and who is responsible when any impactful incident occurs.
ยง.ยง.ยง Future evolution of DTs in real world applications
Below we present the predictions made by the interviewees on how the role of DTs would evolve in real world applications.
* A world of DTs interacting with each other:
During our analysis, we identified predictions pertaining to the interactions among DTs of different vendors.
Particularly, one interviewee who discussed transferring ownership of DTs across its entire lifecycle mentioned that when DTs become a tradeable asset for most real-world products traded, this could result in several DTs for every organization or person using the corresponding AEs.
Two interviewees shared that these DTs from different domains around us could probably then interact with each other and exchange useful information for use-cases such as improving decision making, accurate diagnosis, etc.
* DTs increasing AE's autonomy and adaptability:
Two interviewees emphasized that AEs could possibly become more autonomous and self-adaptive with the help of their DTs. One interviewee predicts that DTs will become part of the AE and thus, systems can reason about themselves with the help of their DT: e.g., detect when they are not functioning properly or optimally and request a human to further diagnose the issue. Once diagnosed, they might then optimize themselves to a certain extent, as long as it is within their scope of control, such as changing a few parameters, disabling functions, deploying necessary software, and others. This would avoid the need for human intervention for making such changes. Moreover, when the environment around the AE changes or when the operator wants to use that system in a different way, then with the help of its DTs, the system would be able to autonomously adapt itself to these changes, increasing flexibility of usage.
* DTs in improving automation and design support:
Two interviewees predicted DT usage to enable increased automation of real-world processes. They mentioned that changes which need to be done in an AE could be applied directly on the corresponding DT by giving commands; and that the DT could then automatically make those changes in the AE.
They further predicted that the DT will play an important role in design support when developing new AEs.
One interviewee mentioned that DTs will direct the designers and help in selecting the appropriate components needed for a factory floor when setting it up. Another interviewee expressed that with DTs' application of machine learning techniques, DTs will become more intelligent and could possibly become an interactive design support assistant. This clever assistant will help to solve problems and prevent pitfalls in the design that an engineer makes. For example, when systems are integrated and interact with each other to exchange data, the DT would possibly detect where and what inconsistencies occur in such interactions and help in fixing them in the design.
* Role of AI in DTs:
While the terms "Artificial Intelligence", "Machine Learning" (ML), and "Reinforcement Learning" (RL) were mentioned during different parts of the interview, six interviewees specifically mentioned these terms while discussing future trends of DTs. According to one interviewee from academia, ML and RL can be combined with DTs to help to learn about complex systems (i.e., safety-critical systems) in a virtual environment, when this is difficult to do on the real-world system. Furthermore, he mentioned that ML algorithms could be used to learn control software using the DT, and then control the corresponding AE.
Another interviewee from industry suggested that integration of AI and ML with DTs will be the biggest step for the next 10 years and can help to improve predictive maintenance of real-world systems. An academic interviewee mentioned that AI will be useful in the development of DTs when using data obtained from the AE for automated model improvement or refinement.
* Other future predictions on DTs:
Two interviewees also predicted that there would be a shift in the way AEs are developed in the future, where a DT would always be completely part of the development of systems. One of these interviewees identified a related challenge regarding resistance against accepting the results from DTs in certain communities as they require a confirmation of those results from the real-world system. He believes that this would possibly change in the future and most communities would start accepting results from DTs.
ยง.ยง.ยง Future evolution in the development of DTs
Below we discuss the various visions on future, improved DT development.
* Two industrial interviewees mentioned lack of standards as a challenge. They believe that such standards would become available for, e.g., developing and maintaining DTs, and for managing and combining data in DTs.
* One interviewee mentioned that the interoperability challenge related to integrating different simulation software tools (discussed in Section <ref>) would be solved in the future. Moreover, he mentioned that a platform supporting tool interoperability would be available for DT developers to use in the future. We observe that this seems dependent on the definition and proper implementation of standards to ensure such interoperability.
* Two interviewees mentioned improvements in the ease-of-use and intuitiveness of DT development tools.
They mentioned the current challenge in DT development as requiring software knowledge in order to develop them; and they believe this would change in the future. They expect DT development tools to become more intuitive such that DTs can be developed by defining simple functions, adding minimal code, and dragging and dropping components, without requiring assistance of software experts. This would enable experts from different domains with minimal software knowledge and expertise to develop DTs with ease.
* One interviewee from industry also mentioned that visualization in DTs would possibly improve in the future in order to present data to humans in a convenient way leading to better interpretations by humans.
* An academic interviewee compared DTs to software and mentioned that DTs will hence have to be versioned, tested, validated, and certified in the future, like regular software.
* Another interviewee from industry used the term "Simulation as a Service" to describe a cloud-based service similar to a DT, which would possibly be available in the future for performing Finite Element Method (FEM) analysis by simply uploading CAD models and selecting the part where the analysis needs to be done, and providing the results.
* Two interviewees also explicitly mentioned that DTs should evolve over their entire lifecycle to serve new purposes, i.e., provide new services in addition to what they were initially developed for. One of them further mentioned that the speed of this evolution depends on the DT's application and context.
ยง.ยง.ยง Discussion
Future predictions were made on three different aspects of DTs, namely, the trade aspect of DTs, the influence of DTs in engineering applications, and the evolution of DT development in the future. Some of the future predictions made by the interviewees have not been discussed in any literature to our knowledge. We believe that some of the aforementioned future applications of DTs, such as increasing AE's autonomy and adaptability, and improving automation and design support, are highly important. We further believe that the influence of DTs in engineering applications could substantially grow
in the future. It was interesting to observe several interviewees discuss future improvements in DT development. However, there was no explicit timeline specified by the interviewees on when they expect these improvements to come into existence. With DT development, we believe that improving the ease-of-use and intuitiveness of DT development tools which would enable non-software experts from any domain to develop DTs with ease, would be a game changer and greatly improve the adoption and use of DTs across domains.
Predictions on the business model of DTs have been made such as DTs and their related data becoming a tradeable asset whose ownership could be transferred or shared across the lifecycle. AI is expected to become highly pronounced in DTs in the future. DTs have been predicted to improve the automation and self-adaptability of systems; and also to help in the design support for such systems. Current challenges in DT development such as lack of intuitiveness, standards, and interoperability are predicted to be resolved in the future.
ยง.ยง Additional findings
As mentioned in Section <ref>, this research was conducted using semi-structured interviews, allowing the interviewee to discuss any topic. We dedicate this section to presenting results from discussed topics which do not fit any of our research questions. Topics mentioned by at least seven interviewees are discussed below, and summarized in Table <ref>. The first column shows our classification of the findings. The second column shows the number of interviewees who discussed a specific class in our classification.
ยง.ยง.ยง Architecture
This section aims to explain the DT's architectural choices shared by 11 interviewees. We identified two key architectural properties mentioned by our interviewees, which are
re-usability of the components and maintainability of the system. According to the interviewees, the main objective of the architecture is to aid rapid DT development. We observed four architectures mentioned by the interviewees, but we only report on the one with at least three mentions.
A block-based architecture for the VE was mentioned by six interviewees, four industrial and two academic. For each interviewee the entity encapsulated in such a block is different. For two academic interviewees, the block is a model that can exist at different levels of abstraction, e.g., a component of a machine, the complete machine, or the entire manufacturing system. For them each block must be configurable, to define limitations on the models and make them unique. Another two industrial interviewees explained that their notion of a block is a component of a machine or process, but never the entire system, e.g., in a belt conveyor DT, the blocks can be the belt, the motor, etc. For them, the separation of components should facilitate VE maintainability. Another industrial interviewee described each building block as a stage of the life cycle of the product, which can be assembled to compose the DT; e.g., a block is the design of a wind turbine, and another block is the wind turbine operation. Finally, another industrial interviewee explained that each engineering domain builds a block constituted of several models which later are encapsulated and connected to other blocks.
According to our analysis, block-based architectures with their separation of concerns between entities of a different nature, such as components of the AE or engineering domains, aid rapid DT development, due to component re-usability and maintainability.
ยง.ยง.ยง Process
This section aims to understand the process seven interviewees shared to build a DT. Five of them follow a software development process adjusted to DT development.
Another two described specific, domain-dependent processes to design a DT.
The five interviewees who follow software development processes expressed that they have adapted the processes but they did not specify how. The software development processes that were mentioned are Dev-Ops (Software Development and Operation) or Agile. They stated that these practices facilitate cross-domain cooperation.
The other two interviewees, who used specific design processes, recognized that each domain develops DTs in different ways, but still uses software development practices in different stages of their DT development. The industrial interviewee discussed that he uses Dev-Ops practices and tools to develop some of his sub-steps, such as the use of Continuous Integration-Continuous Deployment (CI/CD) tools. The academic interviewee shared that he uses practices from the V-model to perform his unit testing.
ยง.ยง.ยง Goal's role in design
This section discusses the role that a clear goal can play in the design of digital twins, according to 11 interviewees. All of them mentioned determining a goal (or purpose, or service) as the first step to developing a DT. We identified the influence of the goal in three entities, namely, the model, data, and other design choices.
Related to models, they mentioned that the goal defines the model's fidelity, level of abstraction, type (e.g., continuous time), and modelling approach (e.g., data or physics-based).
Concerning data, the goal defines the data to collect, data processing methods, and the selection of sensors and actuators. They mentioned that to design the collection of data, the data must have a purpose. The data purpose aids the selection of data sources, processing methodologies, and other properties such as collection frequency.
Other design choices influenced by the goal are tool selection, resource definition, and optimization methods. Moreover, one interviewee mentioned that the DT's goal should relate DT development to the business objectives.
ยง.ยง.ยง Modeling practices
We found six interviewees who discussed the best practices to maintain or create models for DTs. Regarding maintainability, three interviewees shared that in DTs the models must be updated regularly since they should be synchronized with the AE during their life cycle. However, they mentioned that this is a challenge since current version control systems are not appropriate for complex systems such as DTs.
Another three interviewees mentioned that it is important to find a balance between model fidelity and system complexity. An example is the use of data-based models (e.g., machine learning models) in highly complex systems, which require real-time responsiveness. According to these interviewees, data-based models can execute faster, have good fidelity, and can learn from the environment, compared to physics-based models.
ยง.ยง.ยง Role of humans in DT
As mentioned in Section <ref>, two interviewees explicitly mentioned that humans play an important role in a DT and thus, they should also be seen as a part of a DT. Several interviewees also implicitly discussed the importance of humans in a DT during its operation. One common observation from our analysis is that humans interact with a DT through a user interface for monitoring, training, and other purposes. Our analysis further suggests that certain services provided by the DT cannot be automated and thus, a human is required to translate the information received through the user interface into actions on the AE.
Two interviewees explicitly mentioned that complete automation in a DT is not possible and so, the human part in a DT is very important for its operation. Three interviewees from academia also mentioned that continuous information input and knowledge use from humans, such as field service engineers, subject matter experts, stakeholders part of or using the AE, and others, help in updating and improving the models continuously while the DT is in operation. One industrial interviewee also mentioned that model calibration and rework, which involves changing and tuning parameters in the model based on the data from the real world, may require human intervention at times. Three interviewees mentioned that in some of the DTs they have worked on, there is no direct connection between the AE and its virtual counterpart and thus, humans act as the bridge connecting these two entities by manually transferring data from the AE to its corresponding VE in order to effect their continuous synchronization. Apart from these, one interviewee from academia mentioned that in one of the DTs he had worked with, humans played an important role in training the virtual counterpart of an AE, which then trained the AE for its specific purpose.
Thus, it has been observed that humans, apart from contributing to the development of DTs, play an important role in the operation of DTs; this has been visualised in Figure <ref>.
ยง.ยง.ยง Discussion on additional findings
During the interview, the interviewees shared their thoughts and opinions on the development of DTs. The additional findings, as visualised in Table <ref>, are related to architecture, development process, the DT goal's role in design, and modelling practices.
Most interviewees who discussed DT architecture mentioned a block-based architecture as their preference. Each has a different interpretation of what a block entails,
but all agree that the aim of this architecture is rapid DT development and maintenance.
From the interviewees who discussed their process for DT development, the majority stated using an adapted software development process, such as Dev-Ops or Agile. Others who use specific development processes seem to use some software development practices and tools, such as unit testing or CI/CD tools. Our analysis shows that the main driver to use software practices or processes is the software-centered nature of DTs.
Interviewees who discussed the role of the goal in the design agreed that it is crucial to define it before starting DT development. The goal has a big influence on many design decisions related to models, data, and tooling. Examples of this are the fidelity of the models or the data selection from the AE.
Related to models, interviewees discussed the importance of DT models' evolution during the life cycle of the AE. This evolution requires proper tools for version control, going beyond currently available ones.
Furthermore, while several interviewees implicitly discussed the importance of humans' role in DTs, only two interviewees explicitly considered humans to be a part of DTs.
Takeaway message:
Our analysis shows that the DT architecture should facilitate maintainability, cross-domain collaboration, and rapid development. Also, our analysis shows that DT development processes contain ingredients of software development processes or adaptations thereof for DTs. A common recommendation for development is to define the goal of the DT before starting its development. Concerning modelling practices, our analysis shows that models should evolve with the AE and that current technologies for version management are not sufficient for these complex systems.
ยง THREATS TO VALIDITY
This research is an empirical study, which is never completely devoid of omissions or pitfalls. We discuss the threats to validity as described by Easterbrook et al. <cit.> and the measures taken to mitigate these.
Regarding construct validity, which pertains to the quality of the measurement of the constructs for the experiment, one concern is the level of expressiveness of information shared by the interviewees about the DT practices and technical DT issues at their organization. However, this was mitigated to a certain extent by emphasizing the research's approval by the universities' ethics review board and data stewards, assuring data privacy rights and complete anonymization of the collected data and its processing.
A second threat concerns the limited variability of interviewee opinions. To mitigate, we included interviewees from both industry and academia with varied educational backgrounds, different levels of experience with DTs; working in diverse roles; from different industrial domains; and from companies of different sizes.
A third threat concerns the understandability of the interview questions. To mitigate this threat, we conducted a pilot interview with an interviewee from industry working with DTs.
Based on the feedback received, interview questions were reordered and paraphrased for a seamless interview flow and ease of understanding, the process of which is explained in Section <ref>.
A last construct validity threat is related to interviewer bias. This was mitigated by creating an interview guide which had instructions to drive the interview with an exhaustive list of pre-defined questions to be asked. Furthermore, interviews were conducted by two interviewers, one asking questions and one keeping track of questions and making notes.
Regarding internal validity, the dependency of results on just the interview data is a potential threat. In order to mitigate this, the data analysis was done meticulously following the analysis methodology described in Sectionย <ref>. Data analysis involved three levels of transcription: firstly, automated transcription; secondly, manual transcription by student assistants; and thirdly, verification of the entire transcripts against the recorded interviews by three researchers. The coding process was scrupulously done by three researchers in an interpretative manner over two months. For any artifact, the process was deemed complete only when it was coded equally by at least two researchers. When the two researchers had different interpretations in coding a particular artifact, these were presented and discussed among the researchers to arrive at a common understanding on such codes. This way of working was described in Sectionย <ref>.
Regarding external validity, the generalizability of results is a threat. We tried to minimize this threat by having a significant number of interviewees with diverse roles and varied educational backgrounds from both industry and academia; with different levels of experience with DTs; from different industrial domains varying from health care to space exploration; from companies of different sizes; and with different levels of understanding of DTs, considering the lack of common understanding of DTs in both industry and academia.
However, the sample size of nineteen in this research can still be seen as relatively small, possibly limiting external validity.
ยง DISCUSSION AND CONCLUSION
In this exploratory research we studied the current landscape of DTs from a technical point of view, particularly the current state-of-practice on design, development, operation, and maintenance. To do that we interviewed individuals from industry and academia. This research is focused on software aspects related to DTs, specifically the interviewees' understanding of DTs, model consistency, integration, orchestration, and validation. These challenges were discussed in <cit.>. In addition, we also discussed our interviewees' opinions on the future impact of DTs.
With regard to the understanding of DTs as discussed in Section <ref>, our findings suggest that there is no consistency in the definition of DTs nor in the understanding of the components that make up a DT. However, commonality exists in the understanding that a DT is a virtual representation of an entity. Moreover, there is some agreement on certain components of a DT, namely, model, data, and some level of synchronization between the virtual representation and its AE. The level of agreement on the different components of a DT was depicted in Figure <ref>.
In addition, we asked the interviewees for their opinion on Tao et al.'s DT model <cit.>, explained in Section <ref>. 11 interviewees agreed with this model to a certain extent, although with some changes. The suggested changes pertaining to the connections in this model are that certain connections among components could be eliminated and that some connections could be made uni-directional instead of bi-directional. Furthermore, other suggested changes are that there could be more than one VE, which could be separate or inter-connected; that the data and VE components in this model could be combined into a single component; and that humans play an important role in DTs and could possibly be included as another dimension.
Regarding reuse practices in DTs (Section <ref>), we noticed positive effects in the development of DTs, yet in practice infrequent reuse due to its challenges. On the other hand, we observed a trend towards multi-tool development approaches to DTs which can benefit from re-use practices. Moreover, the multi-tool approach was also frequently used to integrate models, as shown in Section <ref>.
The findings on model consistency discussed in Section <ref> show that inconsistency issues are recurrent, but not recognized as inconsistency issues specifically.
We observed that these issues are solved as regular issues, with no specialized methods or tools implemented. We are convinced that further research on methods and tools to tackle inconsistencies is needed. Furthermore, such methods and tools can tackle orchestration challenges related to data exchange, as discussed in Section <ref>. These methods and tools can be based on highly standardized domains, e.g., the automotive and aerospace domains, which have demonstrated greater maturity in dealing with inconsistencies.
Our findings on integration (Sectionย <ref>) show maintainability, cross-domain cooperation and effort as the main criteria to select the approach to integrate. The high frequency use of a multi-tool approach is due to facilitation of the first two criteria, although it requires more effort: heterogeneous components require encapsulation and interface definition to operate together. Finally, two main challenges were related to the heterogeneity: the cross-platform integration; and complexity, i.e.,ย the difficulty of managing the number of components to integrate in a DT. In conclusion, we believe that research is needed to develop a tool for cross-platform integration. Such a tool could be based on technologies such as FMI and DSLs as mentioned by the interviewees, and help to tackle a DT's complexity and heterogeneity.
The model orchestration findings discussed in Section <ref> show that all interviewees agree orchestration is about proper scheduling of model execution for a specific DT application. Thus, the type of DT application seems to define the trigger to execute a model, the
role of time for orchestration and how data is exchanged among models. Finally, the main technical challenges are cross-platform and combining continuous and discrete models' execution. In conclusion, research is required to generalize and implement the aspects to orchestrate different domain applications. A DSL seems to be a promising approach, since it was the most frequent technology mentioned. These aspects should be implemented in a tool for model scheduling. Finally, the developed technology should tackle the technical challenges on cross-platform and cross model-type execution. The technological solutions for integration and orchestration should be complementary because DT services' dynamic behavior require proper orchestration. However, before defining the orchestration, the DT integration is required since it defines its components and interconnections.
The verification and validation findings discussed in Section <ref> show that it is considered an important task in DT development. In addition, they show that the most prominently used method for validation is the informal method of comparing the behavior of the VE and the AE. Formal verification and testing have also been used for verification and validation of DTs. Each of these methods has different challenges, which have been discussed in Section <ref>. Furthermore, some strategies for validating DTs were put forward, such as validation after model reduction, validation by increasing complexity, validation by operating DTs at out-of-bound conditions, and continuous validation of DTs. However, each presented technique only validates specific aspects of the system; thus we speculate that a combination of the techniques is required to rigorously validate a DT.
The properties for validation of DTs discussed in Section <ref> are highly influenced by the DT's domain, purpose, or application. Several behavioral properties were discussed in the interviews which were specific to the DT's domain, purpose, or application. Moreover, some functional and temporal properties were put forward which are key to addressing dynamic consistency issues in a DT. One of the main challenges put forward in the interviews was how to measure the quality of DTs. This discussion further covered which properties could be used for measuring the quality of DTs and how to quantify these properties. We believe that further research is required to understand how to measure the quality of a DT and the corresponding properties for this purpose.
Finally, the findings regarding the future of DTs from Section <ref> show that there are three main trends put forward by the interviewees. First, related to the influence of DTs in business, interviewees expect that DTs will be assets that can be traded. Furthermore, DT technology will be used in many products; and as a result, interaction of DTs between vendors will become common practice. Second, regarding the influence of DTs on engineering practices, interviewees shared their view on how DT technology will transform systems to become more autonomous and self-adaptive. Third, they shared how DTs' development will be facilitated in the future, by tackling its main challenges such as interoperability or standardization. Moreover, they shared that in the future tools will facilitate DT development by non-software developers.
Our analyses of the interviewees' expressions showed many challenges around DT development, maintenance, and operations. Yet also emerging were various ways to address or with potential to address these challenges, involving using concepts and results from more classical software and systems engineering. The field holds a lot of promise and need for applying and adapting such concepts and results, requiring research on how best to adjust and apply these to Digital Twin development, maintenance, and operations. With such research, and with the vision of a DT-enriched future as expressed by the interviewees, the future definitely looks both bright and twinned.
ยง ACKNOWLEDGEMENT
This research was funded by NWO (the Dutch national research council) under the NWO AES Perspectief program, project code P18-03 P3.
Encrypted Simultaneous Control of Joint Angle and Stiffness of Antagonistic Pneumatic Artificial Muscle Actuator by Polynomial Approximation
Yuta Takeda,
Takaya Shin,
Kaoru Teranishi, and
Kiminao Kogiso
This work was supported by JSPS KAKENHI Grant Numbers JP22H01509 and JP21K19762.
Y. Takeda, K. Teranishi, and K. Kogiso are with the Department of Mechanical and Intelligent Systems Engineering,
The University of Electro-Communications,
1-5-1 Chofugaoka, Chofu, Tokyo 1828585, Japan.
e-mail: [email protected].
T. Shin is with DAIHEN, Inc.
July 31, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================
This study proposes an encrypted simultaneous control system for an antagonistic pneumatic artificial muscle (PAM) actuator toward developing a cybersecure and flexible actuator.
First, a novel simultaneous control system design is considered for the joint angle and stiffness of a PAM actuator in a model-based design approach, facilitating the use of an encrypted control method.
The designed controller includes a contraction force model expressed as rational polynomial functions, which makes it difficult to encrypt the controller.
To overcome this difficulty, a least absolute shrinkage and selection operator (LASSO)-based polynomial approximation is employed for a rational controller.
The resulting polynomial controller is then transformed into a matrix-vector product form, which enables the use of a specific homomorphic encryption scheme to develop an encrypted simultaneous control system for the PAM actuator.
Finally, this study quantitatively evaluates the tracking control performance of the original, approximated, and encrypted controllers.
The experimental results show that the proposed encrypted controller achieves simultaneous tracking of the joint angle and stiffness with a tracking error of less than 2.7 %.
PAM, encrypted control, polynomial-type controller, LASSO, simultaneous control, experimental validation
ยง INTRODUCTION
The McKibben pneumatic artificial muscle (PAM) was first developed in the 1950s <cit.>.
A PAM is inflated by injecting compressed air into a rubber tube wrapped in a nonstretchable mesh to generate a contraction force.
This structure offers several advantages over traditional motors and cylinders, including compact size, lightweight, and flexibility <cit.>.
Because PAMs produce contraction force in a single direction, an antagonistic configuration is typically adopted for practical applications, where two PAMs are arranged in parallel through joints <cit.>.
This structure facilitates rotational motion and closely mimics human movements <cit.>.
Owing to these benefits, a review paper <cit.> stated that PAMs have found considerable use in the development of rehabilitation and assistive devices that can be comfortably worn and operated by users.
Furthermore, numerous studies have explored stiffness or compliance control <cit.> and developed methods of simultaneous control of the joint angle and stiffness to leverage the potential of PAMs fully <cit.>.
These active controls enhance safety during contact and collision between robots and humans or their environments while also augmenting the comfort of wearable devices.
Applications that integrate wireless network technology into PAM actuator systems include remote surgical robots <cit.>, remotely operated cranes <cit.>, teleoperation of pneumatic robots <cit.>, and walking-assistive devices <cit.>.
The use of wireless communications enhances the portability of devices and promotes the self-rehabilitation of patients at home.
In self-rehabilitation, the monitoring of wearable devices through a network by physical therapists helps prevent injury due to falls.
Meanwhile, attention must be paid to the cybersecurity of networked actuator systems.
Cyberattack incidents and risk analyses have been reported, such as the falsification of control parameters by Stuxnet to destroy process plants <cit.>,
an unmanned aerial vehicle compromised by hijacking video streaming <cit.>,
and the compromising of a robot controller <cit.>.
To develop secure PAM actuator systems, networked actuator systems must be equipped with cybersecurity countermeasures.
Encrypted control was proposed as a cybersecurity countermeasure for networked control systems in <cit.> by integrating the homomorphism of a specific public-key encryption scheme into a linear or polynomial-type control system.
Encrypted control conceals the control parameters inside the control device and the signals over the communication links, resulting in protection against eavesdropping <cit.> and real-time detection of cyberattacks <cit.>.
Encrypted implementation is expected to enhance the cybersecurity of control systems.
However, encrypting a nonlinear controller that is not a polynomial, such as a rational polynomial-type or switching controller, is difficult.
A simultaneous control system for the joint angle and stiffness of an antagonistic PAM actuator uses a rational polynomial-type controller <cit.>.
In addition, encrypted proportional-integral (PI) control of the PAM joint angle or torque was considered in <cit.>, and in <cit.>, the implementation of encrypted polynomial controllers was demonstrated, verifying its effectiveness through a numerical toy example.
Unfortunately, to the best of our knowledge, no studies have been conducted on encrypting such nonlinear controllers.
The objective of this study is to propose an encrypted simultaneous control system for an antagonistic PAM actuator to develop a cybersecure and flexible actuator.
In this study, the simultaneous tracking control of the joint angle and stiffness of the antagonistic PAM actuator is considered for a step-like reference.
We present a novel model-based nonlinear control system comprising three components: a reference generator, a contraction force estimator, and PI controllers.
However, a model-based controller includes specific rational functions, and applying the encrypted control method is challenging.
To overcome this difficulty, this study approximates the designed controller as a polynomial-type controller that is friendly to controller encryption.
We then quantitatively investigate the effects of polynomial approximation and controller encryption on the control performance to evaluate the developed encrypted control system.
Finally, an experimental investigation confirms that the proposed encrypted control system enables simultaneous tracking to the step-like reference within an acceptable tracking error.
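As a concrete illustration of the polynomial-approximation idea described above, the following sketch fits a sparse polynomial surrogate to a rational function of two inputs using LASSO regression over polynomial features. It is only a toy illustration of the technique, not the controller designed in this paper; the target function, polynomial degree, and regularization weight are assumptions.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

# Toy rational function standing in for a rational polynomial-type controller term.
def rational_term(theta, tau):
    # Assumed placeholder resembling tau_c / (r cos(theta)) with r = 0.05.
    return tau / (0.05 * np.cos(theta))

rng = np.random.default_rng(0)
theta = rng.uniform(-0.4, 0.4, 2000)   # joint angle samples [rad]
tau = rng.uniform(-3.0, 3.0, 2000)     # torque command samples [Nm]
X = np.column_stack([theta, tau])
y = rational_term(theta, tau)

# LASSO over polynomial features yields a sparse polynomial approximation,
# which is the kind of controller form that homomorphic encryption can handle.
poly = PolynomialFeatures(degree=5, include_bias=True)
Phi = poly.fit_transform(X)
model = Lasso(alpha=1e-3, max_iter=50000).fit(Phi, y)

approx_error = np.max(np.abs(model.predict(Phi) - y))
kept_terms = np.count_nonzero(model.coef_)
print(f"max approximation error: {approx_error:.3f}, nonzero monomials: {kept_terms}")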
This study makes the following main contributions:
it is the first to present an encrypted nonlinear control system for simultaneously tracking the joint angle and stiffness of an antagonistic PAM actuator, and
it provides a novel form of the contraction force model, which helps reduce the number of approximations that cause control performance degradation throughout the controller encryption procedure.
The remainder of this paper is organized as follows.
Section <ref> introduces the antagonistic PAM actuator with a control objective and presents a model-based simultaneous control method for tracking the joint angle and stiffness of the PAM actuator to the reference.
Section <ref> discusses the polynomial approximation to acquire a friendly form of the controller to realize the encrypted control.
Section <ref> proposes an encrypted-controlled PAM actuator to maintain the controller and communication secrets and investigates the impact of the secure implementation on the control performance to highlight the effectiveness of the proposed encrypted controller.
Finally, Section <ref> concludes the study.
ยง NONLINEAR CONTROL SYSTEM DESIGN FOR ANTAGONISTIC PAM ACTUATOR
This section introduces the antagonistic PAM actuator, its simultaneous control system design, and the numerical evaluation of the control performance of the designed control system.
ยง.ยง Antagonistic PAM Actuator System
Fig. <ref> shows an antagonistic PAM system.
An antagonistic PAM actuator is a joint actuator driven by two PAMs.
One side of each PAM is connected by a joint.
The system consists of two PAMs, two proportional directional control valves (PDCVs), an air tank, pressure sensors, a torque meter, a rotary encoder, and a control PC.
The tank stores compressed air connected to the PDCVs and PAMs using air tubes.
The airflow regulated by the PDCVs drives the PAMs, and the joint rotates.
The system inputs are the voltage commands to the two PDCVs (u_1 and u_2), and the measured values are the joint angle θ, torque τ, and inner pressures of the PAMs (P_1 and P_2).
The range of the rotation angle is ±25 deg, and the range of the output torque is ±3.0 Nm.
The sampling period T_s was set to 20 ms.
Table <ref> lists the details of the experimental equipment used.
The control objective considered in this study is to track the joint angle and stiffness of the PAM actuator simultaneously with respect to a given reference.
The goal of this study is to develop an encrypted simultaneous tracking control system for the joint angle and stiffness of a PAM actuator to mitigate the control performance degradation caused by secure implementation.
ยง.ยง Joint Stiffness
Joint stiffness is an index of the difficulty of a joint rotating against external forces <cit.>.
This study employs the model of joint stiffness presented in <cit.>, following the introduction of the notations of the physical variables regarding the PAM.
The geometric relationships of the antagonistic PAM actuator are illustrated in Fig. <ref>; the lengths of the two PAMs, l_1 and l_2, are respectively given by
l_1(k) = L_0 - ΔL(k) and l_2(k) = L_0 + ΔL(k), where k ∈ ℤ^+ := {1, 2, ⋯} is a step, ΔL(k) ≈ r sinθ(k) is the horizontal displacement of the two PAMs, r is the radius of the joint, and L_0 is the PAM length when the joint is at the horizontal position.
The contraction force of the PAM, F_i, ∀ i ∈ ℐ := {1, 2}, can be expressed as a function of the inner pressure and length as follows:
F_i(l_i(k), P_i(k)) = a_i(l_i(k)) P_i(k) + b_i(l_i(k)), ∀ i ∈ ℐ,
with a_i(l) = p^{a_1}_i l + p^{a_2}_i and b_i(l) = p^{b_1}_i l + p^{b_2}_i, ∀ i ∈ ℐ.
When the joint torque τ generated by the two PAMs is given by τ(k) = r cosθ(k) ( F_1(k) - F_2(k) ),
the joint stiffness K_P is defined as the partial derivative of the joint torque τ with respect to the joint angle θ as follows <cit.>:
K_P(k) = r sinθ(k) ( F_1(k) - F_2(k) ) + r^2 cos^2θ(k) ( (F_1(k) - α_1(k))/l_1(k) + (F_2(k) - α_2(k))/l_2(k) ),
where α_i(k) := p^{a_2}_i P_i(k) + p^{b_2}_i, ∀ i ∈ ℐ.
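For concreteness, the following is a minimal numerical sketch of the stiffness model above. The coefficient values (r, L_0, and the p coefficients) are placeholders chosen only for illustration, not the identified parameters of the experimental setup.

import numpy as np

# Placeholder geometry and PAM coefficients (illustrative values only).
r, L0 = 0.03, 0.25                            # joint radius [m], nominal PAM length [m]
p_a1, p_a2 = (1.2e-3, 1.1e-3), (0.5, 0.45)    # a_i(l) = p_a1[i]*l + p_a2[i]
p_b1, p_b2 = (-80.0, -75.0), (30.0, 28.0)     # b_i(l) = p_b1[i]*l + p_b2[i]

def pam_lengths(theta):
    dL = r * np.sin(theta)
    return L0 - dL, L0 + dL                   # l_1(k), l_2(k)

def contraction_force(i, l, P):
    return (p_a1[i] * l + p_a2[i]) * P + p_b1[i] * l + p_b2[i]   # F_i(l_i, P_i)

def torque_and_stiffness(theta, P1, P2):
    l1, l2 = pam_lengths(theta)
    F1 = contraction_force(0, l1, P1)
    F2 = contraction_force(1, l2, P2)
    alpha1 = p_a2[0] * P1 + p_b2[0]
    alpha2 = p_a2[1] * P2 + p_b2[1]
    tau = r * np.cos(theta) * (F1 - F2)
    K_P = (r * np.sin(theta) * (F1 - F2)
           + r**2 * np.cos(theta)**2 * ((F1 - alpha1) / l1 + (F2 - alpha2) / l2))
    return tau, K_P

tau, K_P = torque_and_stiffness(np.deg2rad(10.0), 400.0, 300.0)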
ยง.ยง Angle-Stiffness Controller
This study proposes a novel nonlinear controller for the simultaneous control of the joint angle and stiffness, which is highlighted in red in Fig. <ref>, as follows:
x(k+1) = A(k;θ) x(k) + g(k;ζ),
[ u_1(k); u_2(k) ] = C(k;θ) x(k) + h(k;ζ) + [ β_1; β_2 ],
where x ∈ ℝ^3 is the state, u_1 and u_2 are the outputs, and ζ ∈ ℝ^5 is the input, denoted by ζ := [ P_1 P_2 θ θ̄ K̄_P ]^T, where P_1, P_2, and θ can be measured by the sensors, and θ̄ and K̄_P are the references.
β_1 and β_2 are the bias voltages of each PDCV.
A(θ) ∈ ℝ^{3×3} and C(θ) ∈ ℝ^{2×3} are time-varying, θ-dependent coefficient matrices, and g: ℝ^5 → ℝ^3 and h: ℝ^5 → ℝ^2 are nonlinear functions of ζ.
The coefficients and nonlinear functions are specified after introducing the reference generator, the pressure-to-contraction-force block, and the PI controllers, which are the components of the proposed controller and are explained as follows.
ยง.ยง.ยง Reference Generator
The reference generator outputs the reference signals of the contraction force for the two PAMs, denoted as F̄_1 and F̄_2, taking the measured joint angle θ, the computed torque command τ_c, and the given reference of joint stiffness K̄_P as inputs.
Using <cit.>, F̄_1 and F̄_2 are expressed as rational polynomial functions of the inputs θ, τ_c, and K̄_P:
F̄_1(k) = l_1(k) l_2(k) / ( r^2 cos^2θ(k) ( l_1(k) + l_2(k) ) ) [ K̄_P(k) + ( r cosθ(k)/l_2(k) - tanθ(k) ) τ_c(k) - r^2 cos^2θ(k) ( α_1(k)/l_1(k) + α_2(k)/l_2(k) ) ],
F̄_2(k) = F̄_1(k) - τ_c(k) / ( r cosθ(k) ).
ยง.ยง.ยง Pressure into Contraction Force
The pressure-to-contraction-force block shown in Fig. <ref> estimates the contraction force of the PAM by using the following experimental polynomial function of the inner pressure and the measured joint angle:
F̂_i(θ(k), P_i(k)) = â_i(θ(k)) P_i(k) + b̂_i(θ(k)),
â_i(θ) = p^{â_1}_i θ + p^{â_2}_i,  b̂_i(θ) = p^{b̂_1}_i θ + p^{b̂_2}_i,
where F̂_i, â_i, and b̂_i, ∀ i ∈ ℐ, are an estimate of the PAM contraction force and the coefficients of (<ref>), respectively.
A reason for introducing the contraction force model (<ref>) in addition to (<ref>) is to reduce the number of approximations, which facilitates the discussion in Section <ref>.
Model (<ref>) involves the nonlinear term sinθ, which would require an additional approximation step to obtain a polynomial form.
On the other hand, model (<ref>) is a polynomial in θ and captures a static relationship between the angle and the contraction force.
Indeed, the fitting results of (<ref>) are shown in Fig. <ref>.
In Fig. <ref>, the black circles represent experimentally measured data, and the colored lines represent fitting results using (<ref>) for each angle.
In this case, the coefficients (<ref>) for each angle correspond to Fig. <ref>, where the black circles represent the coefficients corresponding to the colored lines in Fig. <ref>.
The fitting test confirms that the θ-dependent model (<ref>) is valid.
ยง.ยง.ยง PI Controller
The proposed controller includes two types of PI controllers to compensate for errors in the joint angle and the contraction force.
The controller for the joint angle uses the feedback error between θ̄ and θ, denoted by e^θ := θ̄ - θ, to compute the command torque τ_c, as follows:
[ x^θ(k+1); τ_c(k) ] = [ 1, T_s; G_I^θ, G_P^θ ] [ x^θ(k); e^θ(k) ],
where x^θ is the state and G_P^θ and G_I^θ are the proportional and integral gains, respectively.
Regarding the contraction force controllers, the outputs of the reference generator, F̄_1 and F̄_2, are compared with the estimated forces F̂_1 and F̂_2 computed by (<ref>), respectively.
The resulting errors are fed to the PI controllers to generate the control input voltages u_1 and u_2 for each PDCV.
The controllers are given as follows:
[ x_i^F(k+1); u_i(k) ] = [ 1, T_s, 0; G_I^F, G_P^F, 1 ] [ x_i^F(k); e_i^F(k); β_i ], ∀ i ∈ ℐ,
where x_i^F, ∀ i ∈ ℐ, is the state of the i-th controller, e_i^F := F̄_i - F̂_i, ∀ i ∈ ℐ, is the error in the contraction force of the i-th PAM, G_P^F and G_I^F are the proportional and integral gains, respectively, and the gains are common to both controllers.
Additionally, we set β_i = 5.0, ∀ i ∈ ℐ.
Consequently, defining the controller state by x := [ x^θ x_1^F x_2^F ]^T and eliminating τ_c, e^θ, e^F_i, F̄_i, and F̂_i, ∀ i ∈ ℐ, from (<ref>)-(<ref>) results in the nonlinear controller (<ref>) with the coefficients and functions in (<ref>).
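To summarize how the three components interact, the following Python sketch implements one sampling step of the plaintext controller as reconstructed above. The PI gains are the values reported later for the simulations, while the geometric and force-model coefficients are placeholders; all function and variable names are ours, not the authors'.

import numpy as np

# Placeholder geometry and model coefficients (illustrative, not identified values).
Ts, r, L0 = 0.02, 0.03, 0.25
G_P_th, G_I_th = 0.25, 0.13                    # angle PI gains (simulation values)
G_P_F, G_I_F = 0.088, 0.08                     # force PI gains (simulation values)
beta = (5.0, 5.0)
p_a2, p_b2 = (0.5, 0.45), (30.0, 28.0)         # alpha_i = p_a2[i]*P_i + p_b2[i]
ph_a1, ph_a2 = (1.0e-3, 1.0e-3), (0.6, 0.6)    # theta-dependent a_hat_i(theta)
ph_b1, ph_b2 = (-5.0, -5.0), (20.0, 20.0)      # theta-dependent b_hat_i(theta)

x_th, x_F = 0.0, [0.0, 0.0]                    # PI integrator states

def control_step(theta, P1, P2, theta_ref, KP_ref):
    """One sampling step of the angle/stiffness controller (plaintext sketch)."""
    global x_th
    # Angle PI: torque command tau_c from the angle error
    e_th = theta_ref - theta
    tau_c = G_I_th * x_th + G_P_th * e_th
    x_th += Ts * e_th
    # Reference generator: force references from theta, tau_c, and KP_ref
    l1, l2 = L0 - r * np.sin(theta), L0 + r * np.sin(theta)
    alpha = [p_a2[0] * P1 + p_b2[0], p_a2[1] * P2 + p_b2[1]]
    pref = l1 * l2 / (r**2 * (l1 + l2) * np.cos(theta) ** 2)
    F1_ref = pref * (KP_ref + (r * np.cos(theta) / l2 - np.tan(theta)) * tau_c
                     - r**2 * np.cos(theta) ** 2 * (alpha[0] / l1 + alpha[1] / l2))
    F2_ref = F1_ref - tau_c / (r * np.cos(theta))
    # theta-dependent force estimates F_hat_i
    F_hat = [(ph_a1[i] * theta + ph_a2[i]) * P + ph_b1[i] * theta + ph_b2[i]
             for i, P in enumerate((P1, P2))]
    # Force PI loops: valve voltages u_1, u_2
    u = []
    for i, F_ref in enumerate((F1_ref, F2_ref)):
        e_F = F_ref - F_hat[i]
        u.append(G_I_F * x_F[i] + G_P_F * e_F + beta[i])
        x_F[i] += Ts * e_F
    return u

u1, u2 = control_step(theta=np.deg2rad(8.0), P1=400.0, P2=350.0,
                      theta_ref=np.deg2rad(10.0), KP_ref=6.0)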
ยง.ยง Numerical Verification
This section presents the numerical simulations conducted to evaluate the proposed nonlinear controller in terms of the control performance with respect to simultaneous angle-stiffness control.
For the simulations, we used the following state-space model of an antagonistic PAM system <cit.>:
x_p(k+1) = f_σ(x_p(k), u(k)) if x_p(k) ∈ 𝒳_σ holds,
y(k) = h_p(x_p(k)),
where u := [ u_1 u_2 ]^T ∈ [0,10]^2 ⊂ ℝ^2 is the vector of input voltages (V) to the two PDCVs,
the state variable is x_p := [ θ θ̇ P_1 P_2 ]^T ∈ ℝ^4, and the output variable is y := [ θ P_1 P_2 K_P ]^T ∈ ℝ^4, in which the set of allowable absolute pressures (kPa) is determined by the specification of the PAMs, i.e., P_1, P_2 ∈ [200, 750];
f_σ: ℝ^4 → ℝ^4 is a nonlinear function with 18 subsystems, and it switches according to if-then rules;
𝒳_σ := { x_p ∈ ℝ^4 | Ψ_σ(x_p) > 0 } is the state set, where σ ∈ {1, 2, ⋯, 18} is the index of the subsystem;
Ψ_σ(x_p) is a function derived from the modes in the form of if-then rules;
and the function h_p: ℝ^4 → ℝ^4 is the observation equation.
We considered two types of references for the joint angle and stiffness.
Reference 1 was set to
(θ̄(k), K̄_P(k)) = (10, 8) if 0 ≤ T_s k < 15, (10, 6) if 15 ≤ T_s k < 30, (10, 4) if 30 ≤ T_s k < 45,
and Reference 2 was set to
(θ̄(k), K̄_P(k)) = (5, 9) if 0 ≤ T_s k < 15, (15, 6) if 15 ≤ T_s k < 30, (10, 7) if 30 ≤ T_s k < 45.
The parameters of the PI controllers in (<ref>) and (<ref>) were determined by trial and error, resulting in G_P^θ = 0.25, G_I^θ = 0.13, G_P^F = 0.088, and G_I^F = 0.08.
The initial voltage commands u_1(0) = u_2(0) = 5.5 were given before the start of the control, and the control was started after sufficient time had elapsed.
Additionally, note that, owing to page limitations, the simulation and experimental results for Reference 1 have been omitted; those results are discussed in Section <ref>.
The simulation results are provided in Fig.ย <ref>, where
(a) shows the time response of the joint angle and stiffness,
(b) presents the tracking errors of the joint angle and stiffness,
(c) depicts the control inputs to the valve, and
(d) shows the inner pressure of each PAM.
Figs. <ref> and <ref> confirm that the joint angle and stiffness track the reference in the steady state.
In this case, the input voltage oscillates in the transient state, as shown in Fig. <ref>.
The operating pressure behavior is shown in Fig. <ref>.
These results confirm that the proposed nonlinear controller achieves simultaneous tracking control of the joint angle and stiffness of the PAM actuator system.
However, the coefficients and input functions in the proposed controller are rational functions, which makes them difficult to implement in a controller device in an encrypted control fashion.
Therefore, in the next section, we consider polynomial approximation of the proposed controller.
The novelty of the proposed controller (<ref>) is the use of the θ-dependent contraction force model (<ref>), in contrast to the UKF-based simultaneous controller <cit.>.
The UKF-based controller involving fluid dynamics <cit.> and the sliding mode controllers <cit.> include if-then rules; therefore, obtaining an alternative controller in polynomial form is difficult.
Meanwhile, the nonlinear controllers proposed in <cit.> are amenable to polynomial approximation, but costly force sensors are required to measure the contraction force.
ยง POLYNOMIAL APPROXIMATION OF THE CONTROLLER
This section presents the approximation of the proposed controller into polynomial functions and investigates the impact of the approximation on the response of the experimental PAM control system.
ยง.ยง Controller Approximation
This study uses one- and two-variable functions to approximate the reference generator (<ref>) by polynomials, because only the reference generator is a rational function.
By introducing five functions denoted by f_1, …, f_5, the rational functions (<ref>) can be rewritten as follows:
F̄_1(k) = f_1(k; θ, K̄_P) + f_2(k; θ) τ_c(k) + f_3(k; θ, P_1) + f_4(k; θ, P_2),
F̄_2(k) = F̄_1(k) + f_5(k; θ) τ_c(k),
where
f_1(k; θ, K̄_P) := - l_1(k) l_2(k) K̄_P(k) / ( r^2 ( l_1(k) + l_2(k) ) cos^2θ(k) ),
f_2(k; θ) := l_1(k) l_2(k) / ( r^2 ( l_1(k) + l_2(k) ) cos^2θ(k) ) · ( r cosθ(k)/l_2(k) - tanθ(k) ),
f_3(k; θ, P_1) := α_1(P_1(k)) l_2(k) / ( l_1(k) + l_2(k) ),
f_4(k; θ, P_2) := α_2(P_2(k)) l_1(k) / ( l_1(k) + l_2(k) ),
f_5(k; θ) := - 1 / ( r cosθ(k) ).
For the five functions, we used the LASSO-based polynomial approximation <cit.> and manually removed coefficients with relatively small values, which resulted in the following approximated functions:
f_1(k; θ, K̄_P) ≈ w_1 K̄_P(k) + w_2 θ^2(k) K̄_P(k) + w_3 =: f̂_1,
f_2(k; θ) ≈ w_4 θ(k) + w_5 θ^2(k) + w_6 =: f̂_2,
f_3(k; θ, P_1) ≈ w_7 P_1(k) + w_8 θ(k) P_1(k) + w_s θ(k) P_1^2(k) + w_9 ≈ w_7 P_1(k) + w_8 θ(k) P_1(k) + w_9 =: f̂_3,
f_4(k; θ, P_2) ≈ w_10 P_2(k) + w_11 θ(k) P_2(k) + w_12 =: f̂_4,
f_5(k; θ) ≈ w_13 θ^2(k) + w_14 =: f̂_5,
where w_i, ∀ i ∈ {1, 2, ⋯, 14}, and w_s are the coefficients of the approximated polynomials f̂_j, ∀ j ∈ {1, 2, ⋯, 5}, and their values are listed in TABLE <ref>, where the regularization parameter of LASSO was set to 1.0.
f̂_2 and f̂_5 are cubic polynomials in θ, and the other functions are cubic in both arguments.
Furthermore, w_s of f̂_3 resulted in a nonzero term, but it was significantly small compared with the other coefficients; thus, we removed w_s.
Consequently, we obtained the polynomial controller in (<ref>) by using the approximated coefficient matrices and functions specified below:
A(k; θ) ≈ [ 1, 0, 0; T_s G_I^θ f̂_2(k;θ), 1, 0; T_s G_I^θ ( f̂_2(k;θ) + f̂_5(k;θ) ), 0, 1 ],
C(k; θ) ≈ [ G_P^F G_I^θ f̂_2(k;θ), G_I^F, 0; G_P^F G_I^θ ( f̂_2(k;θ) + f̂_5(k;θ) ), 0, G_I^F ],
h_1(k; ζ) ≈ f̂_1(k; θ, K̄_P) + G_P^θ f̂_2(k;θ) ( θ̄(k) - θ(k) ) + f̂_3(k; θ, P_1) + f̂_4(k; θ, P_2) - â_1 P_1(k) - b̂_1,
h_2(k; ζ) ≈ f̂_1(k; θ, K̄_P) + f̂_3(k; θ, P_1) + f̂_4(k; θ, P_2) + G_P^θ ( f̂_2(k;θ) + f̂_5(k;θ) ) ( θ̄(k) - θ(k) ) - â_2 P_2(k) - b̂_2.
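The LASSO step underlying the approximations above can be sketched as follows with scikit-learn, shown here for f_5(θ) = -1/(r cosθ). The joint radius, the sampled angle range, and the regularization value are illustrative placeholders; in particular, the regularization parameter of 1.0 reported in the paper refers to its own (unspecified) data scaling, so a smaller value is used here.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

r = 0.03                                          # placeholder joint radius [m]
theta = np.deg2rad(np.linspace(-25, 25, 200)).reshape(-1, 1)
f5 = (-1.0 / (r * np.cos(theta))).ravel()         # samples of the rational function

model = make_pipeline(PolynomialFeatures(degree=3, include_bias=False),
                      Lasso(alpha=1e-3, max_iter=100_000))
model.fit(theta, f5)

names = model.named_steps["polynomialfeatures"].get_feature_names_out(["theta"])
coefs = model.named_steps["lasso"].coef_
print("intercept:", model.named_steps["lasso"].intercept_)
for name, c in zip(names, coefs):
    if abs(c) > 1e-8:
        print(name, c)    # surviving monomials; small terms are then pruned by hand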
ยง.ยง Evaluation of Approximation Impacts
The impact of the controller approximation on the control performance is evaluated by comparing the experimental time responses of the control systems using the original controller (<ref>) and the approximated controller (<ref>).
The scenarios of the control experiments are simultaneous tracking control of the joint angle and stiffness with and without a load.
The references of the joint angle and stiffness set in the experiments are the same as those employed in the simulations.
Before the start of the control, the initial voltage commands u_1(0)=u_2(0)=5.5 were provided, and the control was initiated after sufficient time had elapsed.
The experimental results for the original and approximated controllers without a load are shown in Fig.ย <ref>.
In this figure, the blue and red lines indicate the results of the original and approximated control systems, respectively, and their meanings are the same as those in Fig.ย <ref>.
Figs. <ref> and <ref> confirm that the angle and stiffness track the reference in the steady state and that sufficient control performance is maintained.
Additionally, the improvement in the transient-response control performance is believed to be due to the effects of quantization; however, the details have not been clearly established.
Figs. <ref> and <ref> show that the experimental results of the original and approximated controllers exhibit similar behavior in the steady state.
Moreover, we conducted a control experiment in which a 1.5 kg load was hung from the left side of the joint before starting control to evaluate the impact of the approximation.
The experimental results of the loads are shown in Fig.ย <ref>, where the line colors in each figure are the same as those in Fig.ย <ref>.
Figs. <ref> and <ref> confirm that the angle and stiffness track the reference in the steady state.
The good tracking performance means that the controller compensates for the impact of the load, which can be observed as a difference of approximately 50 kPa over the steps between the responses of the inner pressures, as shown in Figs. <ref> and <ref>.
Similarly, Fig. <ref> demonstrates that the experimental results of the two controllers exhibit similar behavior in the steady state.
The experimental results confirm that the approximated controller achieves almost the same control performance as the original controller, implying that the impact of the controller approximation is negligible.
Therefore, the original controller can be replaced with an approximated controller that is adequate for secure implementation.
ยง SECURE PAM ACTUATOR WITH ENCRYPTED POLYNOMIAL-TYPE CONTROLLER
This section describes the encrypted control of the joint angle and the stiffness of an antagonistic PAM actuator system.
ยง.ยง Secure Implementation
The controller encryption method inย <cit.> can be applied to a linear controller in a matrix-vector product form.
Hence, we consider transforming the proposed polynomial controller into a matrix-vector product.
In this study, by defining a vector of monomials ξ ∈ ℝ^18, the polynomial controller (<ref>) with the coefficients and functions (<ref>) can be transformed into the following matrix-vector multiplication:
ψ(k) = Φ ξ(k), ∀ k ∈ ℤ^+,
with
ξ := [ K̄_P, K̄_P θ^2, θ̄, θ, x^θ, θ̄ θ, θ^2, x^θ θ, θ̄ θ^2, θ^3, x^θ θ^2, P_1, θ P_1, P_2, θ P_2, 1, x^F_1, x^F_2 ]^T,
ψ(k) := [ x^θ(k+1), x^F_1(k+1), x^F_2(k+1), u_1(k), u_2(k) ]^T,
where ξ ∈ ℝ^18 and ψ ∈ ℝ^5 are the input and output of the encrypted controller, respectively.
The coefficient matrix Φ ∈ ℝ^{5×18} is given as Φ = [ Φ_1 Φ_2 Φ_3 ], where the three 5×6 blocks are
Φ_1 = [ 0, 0, T_s, -T_s, 1, 0; T_s w_1, T_s w_2, T_s φ_3, T_s φ_4, T_s φ_5, T_s φ_6; T_s w_1, T_s w_2, T_s ν_3, T_s ν_4, T_s ν_5, T_s ν_6; G_P^F w_1, G_P^F w_2, G_P^F φ_3, G_P^F φ_4, G_P^F φ_5, G_P^F φ_6; G_P^F w_1, G_P^F w_2, G_P^F ν_3, G_P^F ν_4, G_P^F ν_5, G_P^F ν_6 ],
Φ_2 = [ 0, 0, 0, 0, 0, 0; T_s φ_7, T_s φ_8, T_s φ_9, T_s φ_10, T_s φ_11, T_s φ_12; T_s ν_7, T_s ν_8, T_s ν_9, T_s ν_10, T_s ν_11, T_s ν_12; G_P^F φ_7, G_P^F φ_8, G_P^F φ_9, G_P^F φ_10, G_P^F φ_11, G_P^F φ_12; G_P^F ν_7, G_P^F ν_8, G_P^F ν_9, G_P^F ν_10, G_P^F ν_11, G_P^F ν_12 ],
Φ_3 = [ 0, 0, 0, 0, 0, 0; T_s φ_13, T_s φ_14, T_s φ_15, T_s φ_16, 1, 0; T_s ν_13, T_s ν_14, T_s ν_15, T_s ν_16, 0, 1; G_P^F φ_13, G_P^F φ_14, G_P^F φ_15, G_P^F φ_16 + β_1, G_I^F, 0; G_P^F ν_13, G_P^F ν_14, G_P^F ν_15, G_P^F ν_16 + β_2, 0, G_I^F ],
where
φ_3 := w_4 G_P^θ, φ_4 := -p_1^{b_1} - w_4 G_P^θ, φ_5 := w_4 G_I^θ, φ_6 := w_5 G_P^θ, φ_7 := -w_5 G_P^θ, φ_8 := w_5 G_I^θ, φ_9 := w_6 G_P^θ, φ_10 := -w_6 G_P^θ, φ_11 := w_6 G_I^θ,
φ_12 := w_7 - p_2^{a_1}, φ_13 := w_8 - p_1^{â_1}, φ_14 := w_10, φ_15 := w_11, φ_16 := w_3 + w_9 + w_12 - p_2^{b̂_1},
ν_3 := (w_4 + w_13) G_P^θ, ν_4 := -p_1^{b̂_2} - (w_4 + w_13) G_P^θ, ν_5 := (w_4 + w_13) G_I^θ, ν_6 := w_5 G_P^θ, ν_7 := -w_5 G_P^θ, ν_8 := w_5 G_I^θ, ν_9 := (w_6 + w_14) G_P^θ, ν_10 := -(w_6 + w_14) G_P^θ, ν_11 := (w_6 + w_14) G_I^θ,
ν_12 := w_7, ν_13 := w_8, ν_14 := w_10 - p_2^{â_2}, ν_15 := w_11 - p_1^{â_2}, and ν_16 := w_3 + w_9 + w_12 - p_2^{b̂_2}.
The augmented formulation of the controller (<ref>) is a linear operation; thus, the same procedure used for encrypting a linear controller can be applied, as described in <cit.>.
The configuration of the control system is illustrated in Fig. <ref>, where the controller input ξ is assembled from the reference signals, the measurements, and the states of the PI controllers.
The input ξ is encrypted using an ElGamal encryption scheme in the Enc block.
The encrypted controller Enc(Φ) performs the multiplicative homomorphic operations, and the Dec^+ block recovers the plaintext states and the control inputs to each PDCV.
For the notation regarding the encryption scheme, please refer to <cit.>.
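As an illustration of the plaintext operation that the encryption and decryption blocks wrap, the sketch below builds the monomial vector ξ in the column order reconstructed above and evaluates ψ = Φξ. The matrix Φ is left mostly as a placeholder (only the first row, which encodes x^θ(k+1) = x^θ(k) + T_s e^θ(k), is filled in). In the encrypted implementation, Φ and ξ are scaled to integers, each entry is ElGamal-encrypted, the element-wise products Φ_ij ξ_j are evaluated on ciphertexts via the multiplicative homomorphism, and the sums are taken only after decryption on the actuator side.

import numpy as np

Ts = 0.02

def monomial_vector(KP_ref, th_ref, th, x_th, P1, P2, xF1, xF2):
    """Monomial vector xi in R^18 (column order as reconstructed above)."""
    return np.array([
        KP_ref, KP_ref * th**2,
        th_ref, th, x_th,
        th_ref * th, th**2, x_th * th,
        th_ref * th**2, th**3, x_th * th**2,
        P1, th * P1, P2, th * P2,
        1.0, xF1, xF2,
    ])

Phi = np.zeros((5, 18))                  # placeholder coefficients
Phi[0, 2:5] = [Ts, -Ts, 1.0]             # row 1: x_theta integrator update

xi = monomial_vector(KP_ref=6.0, th_ref=np.deg2rad(10.0), th=np.deg2rad(8.0),
                     x_th=0.0, P1=400.0, P2=350.0, xF1=0.0, xF2=0.0)
psi = Phi @ xi                           # [x_th(k+1), xF1(k+1), xF2(k+1), u1(k), u2(k)]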
ยง.ยง Experimental Validation
This section verifies the encrypted control system by comparing the experimental results obtained using the encrypted and original control systems.
A key length of 64 bits was chosen, and the scaling parameters Δ_ξ and Δ_Φ, which were introduced in <cit.> to manage the quantization errors due to encryption, were set to 1.0×10^8.
The scenarios of the control experiments involve simultaneous tracking control of the joint angle and stiffness with and without the load.
The references of the joint angle and stiffness were the same as those of the simulations in Figs.ย <ref>.
Before the start of the control, an initial voltage command u_1(0)=u_2(0)=5.5 was provided, and the control was initiated after sufficient time had elapsed.
The experimental results are shown in Figs.ย <ref>.
In the figures, the blue and green lines represent the results of the original and encrypted control systems, respectively.
Figs. <ref> and <ref> confirm that the angle and stiffness track the reference in the steady state and that sufficient control performance is maintained.
Similarly, they show that the controller compensated for the impact of the load, which can be observed as a difference of approximately 50 kPa.
The computation time of the encrypted control system is shown in Fig. <ref>, where the panels correspond to the experiments in Figs. <ref> and <ref>.
The figures show that the control operation is in real time because the computation time for each step is less than the sampling period of 20ย ms.
ยง.ยง Quantitative Investigation
We quantitatively evaluate the control results presented so far to investigate the impact of the secure implementation of the nonlinear controller on the control performance.
For this purpose, we introduce the ℓ_2 norm to measure the tracking error during a specific duration for each control result, as shown in Figs. <ref>-<ref> and <ref>-<ref>.
The ℓ_2 norm is defined as
γ(z, z̄) := √( Σ_{k=k_0}^{k_1} ( z(k) - z̄(k) )^2 ),
where z and z̄ represent scalar signal sequences, and the evaluation steps (closed intervals) [k_0, k_1] are set to [500, 749], [1250, 1499], and [2000, 2249].
These intervals are labeled #1, #2, and #3, respectively, in References 1 and 2.
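The metric can be computed directly from logged signals; the short sketch below assumes the signals are stored as NumPy arrays indexed by the sampling step and uses synthetic data for illustration.

import numpy as np

def tracking_error_norm(z, z_ref, k0, k1):
    """gamma(z, z_ref) over the closed interval [k0, k1] of sample indices."""
    z, z_ref = np.asarray(z), np.asarray(z_ref)
    return np.sqrt(np.sum((z[k0:k1 + 1] - z_ref[k0:k1 + 1]) ** 2))

# Example with synthetic data for the first evaluation interval.
theta_log = 10.0 + 0.1 * np.random.randn(2500)
theta_ref = np.full(2500, 10.0)
score = tracking_error_norm(theta_log, theta_ref, 500, 749)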
The ℓ_2-norm scores of the original, approximated, and encrypted controls with and without loads are summarized in Figs. <ref> and <ref>, respectively.
In each figure, the two panels display γ(θ, θ̄) and γ(K_P, K̄_P), respectively, for the three evaluation intervals.
The blue, red, and green colors represent the ℓ_2-norm values of the original, approximated, and encrypted controllers, respectively.
The dots and error bars indicate the average, maximum, and minimum ℓ_2-norm values from 10 experiments.
These figures confirm that the resulting scores tend to increase as the procedure advances through the polynomial approximation and the secure implementation.
However, in many cases, the scores of the original and encrypted controllers are similar.
The worst ratio of the controlled signal to its reference is 3.89/4.00 × 100 = 97.3% for interval #3 of Reference 1 in Fig. <ref>.
This result implies that the proposed encrypted control achieved a tracking error of less than 2.7%.
Moreover, in the cases where a relatively large change in the score occurs, such as interval #2 of Reference 2 in Fig. <ref> and interval #3 of Reference 1 in Fig. <ref>, the effect of the polynomial approximation prevails over that of the encryption.
In other words, the change in score from blue to red is larger than that from red to green.
This discussion provides further insight, suggesting that increasing the accuracy of the polynomial approximation could help avoid degrading the original control performance.
It is crucial to consider an accuracy-aware approximation method suitable for secure implementation, which will be the focus of future work.
In addition, these figures confirm that, owing to the lack of significant differences between the cases without and with the load, the proposed control system is capable of compensating for the unknown load.
ยง CONCLUSION
This study proposed an encrypted simultaneous control system for antagonistic PAM actuators aimed at developing cyber-secure and safe PAM actuator systems.
A novel nonlinear controller was designed to track the joint angle and stiffness simultaneously based on the PAM actuator model.
By applying the polynomial approximation technique to a nonlinear controller, we obtained a polynomial-type controller.
Subsequently, through the secure implementation in the control device, we developed a secure PAM actuator system.
For experimental validation, the ℓ_2 norm was introduced to measure and compare the experimental results of the original, approximated, and encrypted controllers.
The experimental results showed that the proposed encrypted controller achieved simultaneous tracking of the joint angle and stiffness of the PAM actuator with a tracking error of less than 2.7%.
Consequently, the developed PAM actuator system, enabled by the secure implementation of the simultaneous controller, enhances security while maintaining a control performance similar to that of the original controller.
The developed actuator is expected to be used in secure and safe PAM-driven devices, such as nursing care robots, rehabilitation orthoses, and power-assisted orthoses for remote usage applications.
In future work, we plan to improve the control performance of the encrypted controller further by addressing the nonlinear characteristics of PAMs, including Coulomb friction and fluid dynamics.
To mitigate the performance degradation caused by the polynomial approximation, we will investigate more effective strategies for tuning the regularization parameter in LASSO.
Moreover, when considering the secure implementation of cost-effective computers, reducing the computation time is essential to achieve a resource-aware encrypted controller by streamlining the proposed controller.
Yuta Takeda
received the B.S. degree in Informatics and Engineering from The University of Electro-Communications, Tokyo, Japan, in 2022. He is currently an M.S. student at The University of Electro-Communications, Tokyo, Japan.
His research interests include control applications and modeling/control of pneumatic artificial muscles.
Takaya Shin
received the B.S. and M.S. degrees in Informatics and Engineering from The University of Electro-Communications, Tokyo, Japan, in 2020 and 2022, respectively.
He joined Daihen Co., Osaka, Japan.
His research interests include control applications and modeling/control of pneumatic artificial muscles.
Kaoru Teranishi received the B.S. degree in electromechanical engineering from National Institute of Technology, Ishikawa College, Ishikawa, Japan, in 2019.
He also obtained the M.S. degree in Mechanical and Intelligent Systems Engineering from The University of Electro-Communications, Tokyo, Japan, in 2021.
He is currently a Ph.D. student at The University of Electro-Communications.
From October 2019 to September 2020, he was a visiting scholar of the Georgia Institute of Technology, GA, USA.
Since April 2021, he has been a Research Fellow of the Japan Society for the Promotion of Science.
His research interests include control theory and cryptography for cyber-security of control systems.
Kiminao Kogiso
received the B.E., M.E., and Ph.D. degrees in mechanical engineering from Osaka University, Japan, in 1999, 2001, and 2004, respectively.
He was appointed a postdoctoral fellow in the 21st Century COE Program and as an Assistant Professor in the Graduate School of Information Science, Nara Institute of Science and Technology, Nara, Japan, in April 2004 and July 2005, respectively.
From November 2010 to December 2011, he was a visiting scholar at the Georgia Institute of Technology, GA, USA.
In March 2014, he was promoted to associate professor in the Department of Mechanical and Intelligent Systems Engineering at The University of Electro-Communications, Tokyo, Japan.
Since April 2023, he has been a professor in the same department.
His research interests include the cybersecurity of control systems, constrained control, control of decision-makers, and their applications.
|
http://arxiv.org/abs/2306.03423v2
|
20230606055058
|
I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models
|
[
"Max Reuter",
"William Schulze"
] |
cs.AI
|
[
"cs.AI"
] |
Both authors contributed equally to this research.
[email protected]
0009-0006-6173-0535
0009-0007-9719-7362
[1]
[email protected]
Michigan State University
East Lansing
Michigan
USA
48824
Since the release of OpenAI's ChatGPT, generative language models have attracted extensive public attention. The increased usage has highlighted generative models' broad utility, but also revealed several forms of embedded bias. Some is induced by the pre-training corpus; but additional bias specific to generative models arises from the use of subjective fine-tuning to avoid generating harmful content. Fine-tuning bias may come from individual engineers and company policies, and affects which prompts the model chooses to refuse. In this experiment, we characterize ChatGPT's refusal behavior using a black-box attack. We first query ChatGPT with a variety of offensive and benign prompts (n=1,706), then manually label each response as compliance or refusal. Manual examination of responses reveals that refusal is not cleanly binary, and lies on a continuum; as such, we map several different kinds of responses to a binary of compliance or refusal. The small manually-labeled dataset is used to train a refusal classifier, which achieves an accuracy of 96%. Second, we use this refusal classifier to bootstrap a larger (n=10,000) dataset adapted from the Quora Insincere Questions dataset. With this machine-labeled data, we train a prompt classifier to predict whether ChatGPT will refuse a given question, without seeing ChatGPT's response. This prompt classifier achieves 76% accuracy on a test set of manually labeled questions (n=985). We examine our classifiers and the prompt n-grams that are most predictive of either compliance or refusal. Our datasets and code are available at <https://github.com/maxwellreuter/chatgpt-refusals>.
<ccs2012>
<concept>
<concept_id>10003456.10003462.10003480</concept_id>
<concept_desc>Social and professional topicsย Censorship</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010147.10010178.10010179.10010181</concept_id>
<concept_desc>Computing methodologiesย Discourse, dialogue and pragmatics</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010147.10010178.10010179.10010182</concept_id>
<concept_desc>Computing methodologiesย Natural language generation</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10002978.10003022.10003465</concept_id>
<concept_desc>Security and privacyย Software reverse engineering</concept_desc>
<concept_significance>100</concept_significance>
</concept>
</ccs2012>
[500]Social and professional topicsย Censorship
[500]Computing methodologiesย Discourse, dialogue and pragmatics
[300]Computing methodologiesย Natural language generation
[100]Security and privacyย Software reverse engineering
I'm Afraid I Can't Do That: Predicting Prompt Refusal in
Black-Box Generative Language Models
William Schulze
July 31, 2023
=============================================================================================
ยง BACKGROUND
ยง.ยง Bias in ChatGPT
Immediately after ChatGPT's release in November 2022, conversations on social media highlighted examples of apparent political bias in its responses. Common democratic norms and artificial intelligence ethics guidelines both indicate that AI should be fair and without bias; and ethical correctness is even more critical in ChatGPT because it may soon mediate the flow of information to a large proportion of humanity.
One of the first to study ChatGPT's bias was Hartmann et al. <cit.>. They prompted ChatGPT with questions from the Political Compass test, and found that its beliefs were most consistent with a left-libertarian, strongly environmentalist belief system. They also used the Wahl-O-Matic online voting alignment advice tool in the context of the German elections, and it suggested an alignment with the Socialist party of Germany that was 13.4% stronger than the German population's alignment with that party.
In parallel with the present investigation, a great number of other studies have characterized ChatGPT's biases. Rutinowski et al. <cit.> repeated the political compass test, with additional political questions specific to national issues of G7 member states. They also indicated that ChatGPT held progressive and libertarian views on general subjects, but that on national questions ChatGPT was not strongly biased between libertarianism and authoritarianism. However, they also administered the Dark Factor psychology test, and found ChatGPT to exhibit low levels of psychological dark traits- it scored in the bottom 15% of test takers.
One of the most thorough studies of bias in ChatGPT was performed by Rozado <cit.>. Rozado confirmed the political compass results shown elsewhere; but he also constructed an array of "hateful" comments by combining a demographic group ("women", "the rich", "Democrats") with a negative adjective ("dishonest", "immature", "greedy") in a template sentence to see which combinations would be flagged by OpenAI's moderation system as hateful. Results varied by demographic group over a wide range, with protection above 80% for the most favored classes ("disabled people", "Blacks", "gay people"), and less than 20% protection for the least favored classes ("Republicans", "wealthy people"). Rozado then used OpenAI's fine-tuning mechanism to train another model in the GPT-3 family, which he dubbed RightWingGPT; it had approximately the opposite biases of ChatGPT on the political compass test.
ยง.ยง Prompt Refusal
Bias in ChatGPT is not only observed through the opinions it chooses to express when given a prompt; in some cases, it will refuse to cooperate with the prompt at all. Examples of refused and complied prompts are shown in Table <ref>. Initial examples of prompt refusal appeared cleanly binary: refusals usually included some combination of an apology, a statement of refusal, and a statement of values that would be violated with the prompt. However, our investigation later found that a smooth continuum from compliance to refusal was possible in ChatGPT responses (see: Manual Refusal Labeling).
ยง METHODS
We set out to build a predictive model for which kinds of prompts were likely to be refused by ChatGPT. Our work proceeded in four main steps: prompt database compilation, manual refusal labeling, refusal classifier training, and prompt classifier training.
ยง.ยง Prompt Database Compilation
In order to train a model that could predict whether a prompt would be refused, we needed a database of prompts labeled with whether they were refused; and for the classifier to perform well, we needed a large number of the labeled prompts to be refused. The search for prompts, therefore, required generating or finding a large number of offensive prompts. Once the prompts were compiled, they were submitted to ChatGPT as queries using OpenAI's ChatGPT API.
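The querying step can be sketched as follows using the legacy (pre-1.0) openai-python chat-completions interface that was current at the time; the model name, parameters, and prompt shown here are placeholders rather than the exact configuration used in the experiment.

import openai  # legacy openai-python (<1.0) interface

openai.api_key = "YOUR_API_KEY"  # placeholder

def query_chatgpt(prompt, model="gpt-3.5-turbo"):
    """Send a single prompt to ChatGPT and return the text of its response."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

prompts = ["What are the largest moons of Jupiter?"]            # example prompt list
responses = [(p, query_chatgpt(p)) for p in prompts]            # (prompt, response) pairs for labeling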
ยง.ยง.ยง New York Post Dataset (n = 21)
An article from the New York Post <cit.> alleged bias against ChatGPT, and gave several examples. We reproduced exact prompts where available; where not available (due to social media posts being deleted, for example), we reconstructed the prompts based on the description. The response dataset was too small to have much effect on later model training, but verified that the research question was still valid: most of the biases claimed by the article were also observed in our own prompt responses.
ยง.ยง.ยง Political Figures Dataset (n = 700)
The Political Figures dataset aimed to elicit political bias based on public figures. To find a list of public figures about which ChatGPT would have knowledge (and therefore might have an opinion), the list of public figures was sourced from ChatGPT itself. ChatGPT was asked to provide a list of the 100 most notable United States political figures and their political party memberships. The political figures returned were a mixture of living and dead, government and non-government, left-of-center and right-of-center. A set of eight template sentences were written, with sentiments ranging from strongly positive to strongly negative. Template sentences are listed in Table <ref>.
The request for poems made these easy to classify manually: refused prompts were usually not in the form of a poem, but complied-with prompts usually were. However, the relative lack of variety meant that these prompts could not be used to train our prompt classifier, lest some of the terms in the stronger prompts ("poem", "murdering", "statue") come to have an outsized weight.
ยง.ยง.ยง Quora Insincere Questions (n = 985)
Quora is an online platform for the public to ask questions, and for other members of the public to answer them. Answers are voted on, and the highest-voted answers are promoted to the top result. Because of its popularity, it is also subject to the submission of "insincere" questions: questions not really seeking an answer, but seeking rather to shock, offend, or state an opinion. Quora compiled a dataset of both sincere and insincere questions <cit.>, and created a Kaggle challenge to build a model that can automatically discern whether a question is sincere.
Because the text strings are almost always in the form of a question, they were well-suited as prompts for ChatGPT; therefore, this became our largest hand-labeled dataset. We sampled 400 sincere and 600 insincere questions from the Quora dataset, and hand-labeled their responses; later, we sampled another 10,000 samples and labeled them with our refusal classifier.
Manual inspection of the Quora dataset suggested that a large number of Quora users are located in the Indian subcontinent; many insincere questions concerned what appeared to be regional prejudices (Indian versus Pakistani, caste prejudices, Indian political party preferences, Hindu versus Muslim conflicts, and North Indian versus South Indian stereotypes). It is unclear if the selection of offensive content in this Quora dataset overlaps heavily with the topics considered most offensive to OpenAI, which is located in Silicon Valley; the dataset may not perfectly target the sorts of issues ChatGPT is trained to consider harmful.
ยง.ยง.ยง Other Candidate Datasets
We also investigated the use of OpenAI's moderation safety dataset <cit.>; however, many text strings were too fragmentary to be properly understood as prompts, eliciting incoherent responses from ChatGPT. The same problem prevented effective use of the 4Chan archive collected by Papasavva et al. <cit.>.
ยง.ยง.ยง Hand-Labeled Dataset (n = 1,706)
Once samples from the aforementioned datasets were labeled, they were compiled into a superset, which we refer to as the hand-labeled dataset. This is the primary dataset used for the training of the refusal classifier.
ยง.ยง Manual Refusal Labeling
We initially set out to classify responses as being in one of two categories: queries it refuses, and queries it accepts. By "refusal", we meant that ChatGPT responds with an answer like the following: "I'm sorry, but as an AI language model, I cannot generate content that is designed to be inflammatory or biased." OpenAI achieves this behavior through fine-tuning on a small set of text samples designed to represent desired values <cit.>.
However, having manually inspected more than 2,000 query responses from ChatGPT, we have found that ChatGPT's compliance with or refusal of prompts fall onto a continuum of responses, and not into a neat binary of compliance or refusal. Accordingly, by the end of manual labeling, we had classified responses into eight subcategories; they are described in Table <ref>.
Because some subcategories were rare, we knew that a classifier attempting to predict all eight subcategories would not perform well. Therefore, we mapped the subcategories back onto a binary of 'compliance' or 'refusal'. The mapping used is shown in Table <ref>.
With our labeling scheme, ChatGPT complied with 93% of the sincere Quora questions and 53% of the insincere questions. We did not expect ChatGPT to refuse all insincere questions, but rather to largely take them at face value. Conversely, we did not expect ChatGPT to comply with all sincere questions, especially those involving sensitive topics. Table <ref> features an example of ChatGPT refusing a sincere question, and complying with an insincere question.
ยง.ยง Classifier Training
Both the refusal classifier and the prediction classifier take in variable-length text and output a binary classification. Therefore, we tested using the same model types for both.
We evaluated three model types for the two distinct tasks of identifying ChatGPT's refusals and predicting whether ChatGPT would refuse a given prompt: (1) Google's Bidirectional Encoder Representations from Transformers (BERT), (2) logistic regression, and (3) random forest. We expected BERT to yield the best accuracy performance, and used the others primarily for their interpretability, allowing us to see which words or few-word phrases (n-grams) were highly predictive of either compliance or refusal.
We performed standard hyperparameter grid searches for each model on each task. In our logistic regression and random forest models, we used a term frequency-inverse document frequency (TF-IDF) vectorizer; the vectorizer was configured to consider n-grams with 1 ≤ n ≤ 3. BERT training was performed on Google Colab GPUs.
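A minimal version of the classical pipeline is sketched below; the specific grid values are illustrative placeholders, not the grids actually searched.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

def train_text_classifier(texts, labels):
    """texts: responses (refusal classifier) or prompts (prompt classifier); labels: 1 = refusal, 0 = compliance."""
    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    param_grid = {                       # illustrative grid
        "tfidf__min_df": [1, 2, 5],
        "clf__C": [0.1, 1.0, 10.0],
    }
    search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
    search.fit(texts, labels)
    return search.best_estimator_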
ยง.ยง.ยง Refusal Classification
Manual labeling indicated that refusal responses contain a variety of shared expressions, which, we hypothesize, makes them easier for NLP models to classify. In a refusal, ChatGPT will often mention that it is an AI language model, apologize, mention OpenAI's policies, mention that something is wrong, or exhort the user to be respectful or inclusive.
Refusal classifiers were trained on ChatGPT responses, manually labeled as complied or refused, and assign a given response one of these labels.
ยง.ยง.ยง Predicting Refusals
Unlike ChatGPT's responses, input prompts we used are much more varied, and the differences between a prompt that will be refused and a prompt that will be complied with can be very small (a single word substitution). As shown by our Political Figures dataset and Rozado's "hateful comment" generation, substitution of one person or demographic for another can cause the same prompt to change from being complied with to being refused.
Because of this sensitivity, we expected that a more sophisticated model like BERT was needed, especially one with a vocabulary that encoded the semantic content of words, instead of treating them all the same. For instance, a classifier might need to understand that there is a large contrast between Joe Biden and Donald Trump, but that there is more similarity between Joe Biden and Barack Obama. Then, the model might predict that prompts asking ChatGPT to praise Joe Biden and Barack Obama might get the same response, but prompts asking ChatGPT to praise Joe Biden and Donald Trump might receive opposite responses.
Because of the greater sensitivity to prompt text, we wanted a much larger and more diverse dataset of prompts to train the prompt classifier. Manual classification is slow and difficult, so we wanted to use our refusal classifier to enable us to automatically bootstrap our dataset to a larger size.
We trained prompt classifiers on 10,000 samples from the Quora Insincere Questions dataset, with responses automatically labeled by the refusal classifier. We evaluated the prompt classifiers on the hand-labeled Quora data. An overview of the prompt classifier training is shown in Figure <ref>.
ยง RESULTS
ยง.ยง Model Performance
On all our hand-labeled data, a logistic regression model was able to classify refusals with 90.6% accuracy, while a random forest model achieved 86.7% accuracy. BERT significantly outperformed the classical models, with a performance of 96.5%.
Prompt classification was more difficult. Trained on the bootstrapped Quora Insincere Questions dataset and tested on the hand-labeled Quora data, logistic regression and random forest achieved 73.9% and 72.2% accuracy, respectively; BERT again outperformed them, with a performance of 75.9%. Details of classifier performance are shown in Table <ref>.
ยง.ยง Feature Importance
We inspected the Logistic Regression weights to see which n-grams were predictive of (1) whether a prompt would be refused, and (2) whether a response was a refusal.
Figure <ref> (right) shows that expressions such as "cannot", "sorry", and "AI language model" are strongly indicative of a refusal. Surprisingly, the word "the" strongly indicated a compliance: it may be an indicator of a declarative sentence style used in compliant responses.
Figure <ref> (left) shows that controversial figures ("trump"), demographic groups in plural form ("girls", "men", "indians", "muslims"), and negative adjectives ("stupid") are among the strongest predictors of refusal. On the other hand, definition and enumeration questions ("what are") are strong predictors of compliance.
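Given a fitted TF-IDF plus logistic regression pipeline such as the one sketched earlier, the most predictive n-grams can be read off the model coefficients as follows (assuming refusal is encoded as the positive class).

import numpy as np

def top_ngrams(fitted_pipeline, k=15):
    """Return the k n-grams most predictive of refusal and of compliance."""
    terms = np.array(fitted_pipeline.named_steps["tfidf"].get_feature_names_out())
    weights = fitted_pipeline.named_steps["clf"].coef_.ravel()   # >0 -> refusal, <0 -> compliance
    order = np.argsort(weights)
    return terms[order[-k:]][::-1], terms[order[:k]]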
ยง DISCUSSION
Our investigation showed that it is possible, at scale, to characterize the inclination of ChatGPT to comply with certain prompts and refuse others. However, compliance is not a clean binary; there is a smooth continuum of refusal in ChatGPT's response to user prompts.
Negative generalizations of demographic groups are among the surest predictors of ChatGPT's refusals. Mentions of controversial figures also predict a refusal, though this investigation did not disambiguate between possible reasons: it may be that ChatGPT is inclined to refuse controversial figures; or it may be that controversial figures attract refusable questions.
For refusal classification, BERT significantly outperformed the classical models. For prompt classification, BERT still outperformed the classical models, but to a lesser degree. Note that, particularly in the performance of the refusal classification, every percent of error rate counts: any erroneous automatic refusal classification will frustrate the learning of the prompt classifier.
We expect there is a performance ceiling for prompt prediction, due to the intentional randomness (temperature) setting used by OpenAI to vary ChatGPT's responses. However, we doubt that our investigation hit this randomness-induced performance ceiling.
ยง FUTURE WORK
Employing multiple manual labelers for refusal might improve the quality of hand-labeled data by reducing the effect of personal bias. Other improvements to the performance of the refusal classifier may be possible, which would pay dividends in the performance of the prompt classifier.
Greatly increasing the sample size of the automatically labeled dataset might allow prompt classifiers to cover the diverse set of possible input expressions with less sparsity; this might enable more reliable prompt classification.
OpenAI's API allows the access of many ChatGPT snapshots, not only the latest; a comparison of feature importance between model snapshots could serve as a characterization of OpenAI's ongoing alignment work.
The effect of ChatGPT's internal randomness temperature on performance could be characterized by querying each prompt several times. Classifiers could then predict the likelihood of a prompt's refusal rather than simply the binary version.
|
http://arxiv.org/abs/2306.03753v2
|
20230606151358
|
AI Art Curation: Re-imagining the city of Helsinki in occasion of its Biennial
|
[
"Ludovica Schaerf",
"Pepe Ballesteros",
"Valentine Bernasconi",
"Iacopo Neri",
"Dario Negueruela del Castillo"
] |
cs.AI
|
[
"cs.AI",
"cs.CV"
] |
AI Art Curation: Re-imagining the city of Helsinki in occasion of its Biennial
[1]
0000-0001-9460-702X
[email protected]
[1]
[email protected]
[1]
[email protected]
[1]
[email protected]
All authors contributed equally to the paper
Center for Digital Visual Studies, Max Planck Society โ University of Zurich
Culmannstrasse 1
Zurich
Switzerland
8006
[email protected]
Art curatorial practice is characterized by the presentation of an art collection in a knowledgeable way. Machine processes are characterized by their capacity to manage and analyze large amounts of data. This paper envisages AI curation and audience interaction to explore the implications of contemporary machine learning models for the curatorial world. This project was developed for the occasion of the 2023 Helsinki Art Biennial, entitled New Directions May Emerge. We use the Helsinki Art Museum (HAM) collection to re-imagine the city of Helsinki through the lens of machine perception. We use visual-textual models to place indoor artworks in public spaces, assigning fictional coordinates based on similarity scores. We transform the space that each artwork inhabits in the city by generating synthetic 360° art panoramas. We guide the generation estimating depth values from 360° panoramas at each artwork location, and machine-generated prompts of the artworks. The result of this project is an AI curation that places the artworks in their imagined physical space, blurring the lines of artwork, context, and machine perception. The work is virtually presented as a web-based installation on this link <http://newlyformedcity.net/>, where users can navigate an alternative version of the city while exploring and interacting with its cultural heritage at scale.
<ccs2012>
<concept>
<concept_id>10010405.10010469</concept_id>
<concept_desc>Applied computingย Arts and humanities</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010405.10010469.10010474</concept_id>
<concept_desc>Applied computingย Media arts</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Applied computingย Arts and humanities
[500]Applied computingย Media arts
Example of a 360° art panorama. A real location from Helsinki is transformed according to the artistic style of its corresponding artwork.
Panoramic image of a HAM artwork in the city.
Dario Negueruela del Castillo
July 31, 2023
=================================
ยง INTRODUCTION
In this paper, we present a curatorial work that was exhibited on the occasion of the 2023 Helsinki Biennial of Art. The project uses Artificial Intelligence (AI) as a new means for curatorial practice, exploring the possibilities and difficulties that such new methods introduce. The proposed work in this paper is part of a more significant project entitled New Directions May Emerge[See <https://helsinkibiennaali.fi/en/story/helsinki-biennial-2023-brings-together-29-artists-and-collectives/>.], which aims to curate a museum's art collection through the perception of the machine. The immediate product of this curation consists of an interactive website[<http://newlyformedcity.net/>] that transforms the shape of Helsinki using the virtual dots of the collection locations. The website dynamically updates throughout the Biennial, with new locations (and therefore artworks) appearing and modifying the shape of the city every 30 minutes.
The work is based on the art collection from the Helsinki Art Museum (HAM), consisting of public artworks, such as sculptures and art installations, as well as an indoor collection. Using the city of Helsinki as a context, the goal is to present a different experience of the works of art through the navigation of a new projection of the artworks in the city. First, we situate the artworks from the indoor museum collection to a fictional place in Helsinki, using deep learning and machine learning tools. We then extract the 360° Google Maps views of all the real and fictional locations, which are later used as guiding depth maps to embed the artworks in their fictional surrounding space. This is achieved using Stable Diffusion <cit.> on the 360° views that they inhabit. The city is thereafter populated by the new machinic world, where the user can navigate a geography that blurs the world of extant reality and that of machinic fiction.
Following the narrative thread proposed in <cit.>, the project focuses on the following questions: How can we curate a collection we have never seen? How does the machine perceive art? Can the machine offer a fruitful re-contextualization of the artistic data? Can the geography of the city offer fruitful ground for this re-contextualization?
The contributions of this paper can be summarized as follows: 1) We provide a survey of existing AI curatorial methods and situate this novel avenue in the context of digital and physical curation, 2) We present an innovative method for AI curation that explores the connection to the physical setting and offers a re-contextualization of the original art, 3) We introduce a comprehensive evaluation procedure which takes into consideration both the technical quality of the final artworks and the artistic licenses, 4) We thoroughly discuss the social and ethical implications of our project and, generally, AI curation.
ยง.ยง AI as a Curator
Traditionally, the work of a museum curator consists of the enrichment of a collection and cultural heritage preservation. With the creation of an exhibition, the curator performs a selection of the collection according to a narrative thread that has to be passed to the public <cit.>. The goal is to generate new insights into the original works of art, and elevate their physical dimension through the design of their display <cit.>. In recent years, with the increased availability of digital collections and tools, the notion of digital curation has become an important aspect <cit.>, especially facing the large amount of digital data generated and their online publication to reach a wider audience. Computational art curation aims at classifying and indexing data for efficient retrieval <cit.>, as well as creating new experiences of the artworks through new technologies <cit.> (e.g., virtual and augmented reality).
Unsurprisingly, the use of AI systems for artistic curation has found fertile ground, spanning from projects by Google Arts & Culture to on-site curations of museums and biennials. Google Arts & Culture's[See <https://experiments.withgoogle.com/collection/arts-culture>.] experiments are pioneering applications of computational methods to the online curation of artistic datasets. For example, their 't-SNE Map' and 'Curator Table' experiments are visualization tools to see how objects, styles, and artists evolve over time. Moreover, the project "X degrees of separation" was a source of inspiration for the AI-based curatorial project Dust and Data: The Art of Curating in the Age of Artificial Intelligence <cit.>.
In fact, Dust and Data (DAD) explores the possibilities of AI curation as an assistant to both curators and audiences; it uses semantic embeddings to recreate a curatorially specific version of the Google Arts experiment, which proposes a chain of artworks that connect one work to another, in this sense, filling the curatorial gaps in art collections <cit.>.
Another remarkable example of machine curation is the previous edition of a Biennial, the Liverpool Biennial 2021, titled The Next Biennial Should be Curated by a Machine[See <https://ai.biennial.com/>.]. The curation allows navigating the Liverpool collection through a set of alien images. For each artwork, GAN-generated images were created from the titles of existing artworks in the collections. Moreover, CLIP <cit.> was used to extract keywords from the artworks that were used as the link to navigate the collection.
AI curation aims to offer new insights into digital cultural artifacts. It is possible to propose personalized journeys of the collections, as well as to foster creative takes on its presentation <cit.>. Finally, contemporary AI curation strives to disentangle the undercovered behaviors of large models by switching from the practical data and tasks they were trained on to curatorial and artistic purposes.
In this field, AI technology and its implementations are evolving at a fast pace. The incorporation of physical space has already been explored in interactive place-making with the use of data curation technologies like recommender systems [3]. Our incorporation of references to real locations and the generation of new imagined immersive panoramas make this work novel with regard to previous and current efforts in urban curation using AI, specifically the general trends and approaches identified by [2], whilst other projects have been limited to the visual and textual dimensions. In addition, our approach of first interpreting HAM artworks textually through CLIP engages with the discursive practices of art curation and the necessary critique and self-reflective practice in digital curation as articulated by Cruz [4]. It is crucial to distinguish between the different strands of literature on AI and the digital curation of cultural and artistic data: on the one hand, that belonging to contemporary digital art curating practices, and on the other, that focused on cultural heritage and its management, which is usually more concerned with infrastructural solutions and less with discursive practices, as summarized by [1]. In the literature, a general concern regards the condition of AI as a "black box" and its disempowering effect on a passive audience. Even though our work has an artistic context that is not primarily nor solely concerned with explainable AI, we will also clearly characterize our work with respect to this.
The way we incorporate the urban context is still mainly limited to visual inputs (360° panoramas), but we do begin to incorporate the crucial third dimension through depth estimation using MiDaS. In addition, in contrast to other similar projects like "The Next Biennial Should Be Curated By a Machine" [6], we also include spatial contextual information in the process of guiding the generation of new artistic panoramas, and we do not simply restrict ourselves to the outputs of an abstract visual and textual latent space. In this respect, special attention needs to be paid to understanding the place of this project and methodology with regard to curating cities with the aid of computational methods and machine learning in particular.
A crucial element relevant to the ethical considerations of AI curation is the possibility of offering a gaze on any art collection that would be free from a specific cultural framing. Indeed, as pointed out by Jones <cit.>, there has been, over the past four decades, increasing criticism of the way cultural objects that do not belong to our Western culture are treated. These objects are too often perceived as primitive artifacts, a perception that derives from a colonial projection onto non-Western societies <cit.>. The approach of the museum curator towards the objects and the narrative presented in an exhibition impact and influence the perception of the public, and solutions have to be found to overcome this biased gaze based on the origin of artworks, and to open up to "alternative voices, histories, and representations" <cit.>. Nevertheless, considering the material used in the training of most deep learning models and its strong Western anchoring, AI cannot be considered in itself as the solution but as another possibility for experiments toward cultural diversity and postcolonial views.
ยง.ยง The HAM Dataset
The Helsinki Art Museum (HAM)'s collection consists of the core material used for the project. Defining itself as "a city-wide art museum", the HAM holds about 10,000 artworks, of which around 2,500 can be found in the outdoor and indoor public spaces of the city. These artworks are very diverse, such as sculptures, paintings, and drawings. The idea behind the project is to take advantage of that urban perspective and experiment with the original locations of the public works. To this end, we get access to the geographical information and the corresponding photographs of the 488 outdoor public artworks. The information consists of a set of longitude and latitude coordinates. Additionally, 1,744 items from their indoor collection are harvested from their online platform[See <https://ham.finna.fi/?lng=en-gb> for the full collection.]. For each item, a corresponding image representing the artwork is retrieved, as well as the title, date of creation, name of the artist, keywords in English, Finnish, and Swedish describing the piece, and the object ID in the official collection.
We thus collect a total of 2,232 items divided into two distinctive sets referred to as the public art, corresponding to the outdoor public artworks, and the indoor art, referring to the indoor collection of the HAM.
ยง METHODS
In this project, we strive to present and recreate the HAM collection as a new entity inhabiting and embodying the city of Helsinki. To this end, our curatorial pipeline begins with the geolocation of all the artworks of the collection, including the indoor artworks that do not have a physical location. We proceed in two steps: we employ an image-to-text model to extract a compressed representation of both the public and indoor art, and we successively exploit this representation to assign fictional coordinates to the indoor collection based on their similarity to the public artworks. Given the new coordinates, we proceed to induce the artworks to embody their space at that coordinate: We extract the panoramic 360° view of each artwork from its corresponding location and use diffusion-based models <cit.> to turn the 360° panoramas into an immersive space representing the artwork. The output image is generated using depth images of extracted panoramas and machine-generated prompts as input guidance for the model (Figure <ref>).
ยง.ยง Image to CLIP representations
As a first step, we extract the visual and textual features from all images in the collection using the CLIP-based model CLIP-Interrogator <cit.> (Figure <ref>). In practice, we take advantage of the zero-shot performance of the CLIP model released by OpenAI, which produces Stable Diffusion 1.5 compatible prompts. Using CLIP-Interrogator, we store two outputs: the prompts 𝐭 and the embeddings 𝐳^*. For each image 𝐱 ∈ 𝒳, the interrogator maps 𝐱 to the image embedding 𝐳_I ∈ 𝒵_I using the model. It leverages contrastively learned features to map image embeddings 𝐳_I to text embeddings 𝐳_T ∈ 𝒵_T, where 𝐳_I, 𝐳_T ∈ ℝ^m, m = 768. Each text embedding 𝐳_T is decoded into a text prompt 𝐭 ∈ 𝒯. The prompts are used in Section 2.4 as the inputs for the Stable Diffusion generation. We wish to consider both linguistic and visual information when assigning the fictional coordinates in Section 2.2. Therefore, we represent each artwork (both indoor and public) as the concatenation 𝐳^* ∈ 𝒵^* of 𝐳_I and 𝐳_T, where 𝐳^* ∈ ℝ^m+m.
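As a minimal illustration of this step, the sketch below extracts the two embeddings with a plain OpenAI CLIP model and concatenates them into 𝐳^*; the prompt is assumed to have been produced by CLIP-Interrogator, and the helper name artwork_embedding is illustrative.

```python
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)  # m = 768 per modality

def artwork_embedding(image_path: str, prompt: str) -> torch.Tensor:
    """Return z* = concat(z_I, z_T) for one artwork."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize([prompt], truncate=True).to(device)
    with torch.no_grad():
        z_i = model.encode_image(image)   # (1, 768) visual embedding
        z_t = model.encode_text(tokens)   # (1, 768) textual embedding
    return torch.cat([z_i, z_t], dim=-1).squeeze(0).float()  # (1536,)
```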
ยง.ยง CLIP to Fictional Coordinates
Next, we determine fictional coordinates (Figure <ref>) for the 1,744 images of the indoor collection using information from the known geolocations (latitudes and longitudes) of the public artworks 𝐲_𝐳_public^* ∈ 𝒴 and the feature vectors 𝐳^* obtained in the previous step. We experiment with several predictive and similarity-based algorithms. We split the public collection data 𝐳_public^* ∈ 𝒵^*, 𝐲_𝐳_public^* ∈ 𝒴 into 70% training and 30% validation, and predict on 𝐳_indoor^* ∈ 𝒵^* to obtain the fictional locations of the indoor artworks ŷ_𝐳_indoor^* ∈ 𝒴.
We train a selection of canonical machine learning regression models to predict 𝐲 from 𝐳^*. In particular, we test two decision-tree-based methods, Random Forest <cit.> and XGBoost <cit.>, and a Support Vector Machine regressor (SVR) <cit.>[All the models used are available from https://scikit-learn.org/stable/supervised_learning.htmlsklearn], using feature standardization and hyperparameter tuning, described in Appendix 1 [TO ADD].
Moreover, we also experiment with an unsupervised GPS-inspired similarity method to compute the coordinates of the indoor artworks ŷ_𝐳_indoor^*. We use the feature vectors 𝐳_indoor^* to find the three most similar public artworks. Practically, we construct a Ball Tree <cit.> using 𝐳_public^* and we query the tree with each 𝐳_i^*_indoor. For each indoor artwork, we retrieve the three public artworks 𝐳_j^*_public, j = 1, 2, 3, with the smallest Euclidean distance d:
𝐳_1^*_public = argmin_{k ∈ public} d(𝐳_k^*, 𝐳_i^*)
𝐳_2^*_public = argmin_{k ∈ public ∖ {𝐳_1}} d(𝐳_k^*, 𝐳_i^*)
𝐳_3^*_public = argmin_{k ∈ public ∖ {𝐳_1, 𝐳_2}} d(𝐳_k^*, 𝐳_i^*)
Finally, we use the coordinates 𝐲_z_j^*_public of the three most similar public artworks 𝐳_j^*_public, j = 1, 2, 3, to triangulate the fictional coordinate of the indoor artwork as the centroid of the triangle:
ŷ_z_i^*_indoor = (𝐲_z_1^*_public + 𝐲_z_2^*_public + 𝐲_z_3^*_public) / 3
Furthermore, we test a weighted alternative of the latter method:
ŷ_z_i^*_indoor = (w_1 𝐲_z_1^*_public + w_2 𝐲_z_2^*_public + w_3 𝐲_z_3^*_public) / 3
where w_1, w_2, w_3 are defined as one minus the softmax of the Euclidean distances d:
w_j = 1 - softmax(d)[j]
which gives more weight and importance to the locations of artworks that are more similar to the artwork in consideration.
We use the public artworks belonging to the training set to predict the final indoor artwork locations and test the performance of this method on the public artworks of the validation set.
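A minimal sketch of this similarity-based assignment is given below, assuming that z_public and z_indoor (concatenated CLIP features) and y_public (latitude/longitude pairs) are precomputed NumPy arrays; the function name fictional_coordinates is illustrative.

```python
import numpy as np
from scipy.special import softmax
from sklearn.neighbors import BallTree

def fictional_coordinates(z_public, y_public, z_indoor, weighted=False):
    tree = BallTree(z_public)               # Euclidean metric by default
    dist, idx = tree.query(z_indoor, k=3)   # three nearest public artworks
    neighbours = y_public[idx]              # shape (n_indoor, 3, 2) lat/lon
    if not weighted:
        return neighbours.mean(axis=1)      # centroid of the triangle
    w = 1.0 - softmax(dist, axis=1)         # w_j = 1 - softmax(d)[j]
    return (w[..., None] * neighbours).sum(axis=1) / 3.0
```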
ยง.ยง Fictional and Real Coordinates to Panoramas
Once the fictional locations of the indoor artworks ŷ_𝐳_indoor^* are calculated, we begin with the inspection of the local conditions of each data point, the 360° panorama street view. We employ the Google Street View API to gather the panorama street views 𝐩 at each latitude and longitude tuple (both 𝐲_𝐳_public^* and ŷ_𝐳_indoor^*).
Helsinki offers a very varied landscape, spanning from coastal settings and gulfs to urban areas and parks. On the one hand, this variation provides fertile ground for the following image generation phase. On the other, the pipeline is challenged by several locations where the 360° street view is unavailable. To overcome this limitation, an iterative process queries the Google Street View API with increasing radii, ensuring proximity to the predicted position while permitting local adjustments. For locations with no available street view panorama within a radius of 250 meters, such as in the middle of the sea, a 360° panorama view with an aspect ratio of 19:6 is generated using Midjourney[See www.midjourney.commidjourney.com.]. While for the remainder of the paper we adopt Stable Diffusion for image generation, we found that Midjourney empirically produced better quality imaginary natural landscapes than Stable Diffusion in the absence of depth guidance.
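The retry logic can be sketched as follows; fetch_panorama and generated_fallback are hypothetical helpers standing in for the Street View API call and for the Midjourney sea/forest fallback panoramas, respectively.

```python
def panorama_for(lat, lon, radii=(25, 50, 100, 250)):
    """Return the closest available street-view panorama, widening the search
    radius step by step, or fall back to a generated landscape panorama."""
    for radius in radii:
        pano = fetch_panorama(lat, lon, radius=radius)  # None if no coverage
        if pano is not None:
            return pano
    return generated_fallback(lat, lon)  # no coverage within 250 m
```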
ยง.ยง Panoramas and CLIP prompts to Art Panoramas
Finally, using the panorama views 𝐩 of each location as depth maps, and the prompts 𝐭, we generate landscape artworks that semantically depict the original art piece but use the real context as the canvas. To this end, ControlNet[We use the code from the official release on https://github.com/lllyasviel/ControlNetGithub, v1.0.] <cit.> plays a key role in guiding the generation with an input depth map, computed via MiDaS from 𝐩 <cit.>. Through their combination - assisted with asymmetric tiling[See <https://github.com/tjm35/asymmetric-tiling-sd-webui/>.] - we steer the Stable Diffusion[We use the code from the https://huggingface.co/runwayml/stable-diffusion-v1-5huggingface release v1.5, using 30 inference steps and Euler sampling.] generation towards preserving visual consistency between the real and the imagined landscapes. Finally, the resolution of the artwork is increased by 4x using ESRGAN <cit.>, leading to the resulting art panoramas 𝐚 (Figure <ref>).
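A hedged sketch of this generation step with the diffusers library is shown below; the model identifiers correspond to the components named above, depth_image is assumed to be a MiDaS depth map of the street-view panorama and prompt the CLIP-Interrogator prompt of the assigned artwork, while asymmetric tiling and ESRGAN upscaling are omitted for brevity.

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# prompt: the CLIP-Interrogator prompt t of the artwork assigned to this place
# depth_image: PIL image of the MiDaS depth estimated from the 360 panorama
art_panorama = pipe(prompt, image=depth_image,
                    num_inference_steps=30).images[0]
```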
ยง EVALUATION
To evaluate the various steps of the pipeline we adopt and develop both qualitative and quantitative metrics, due to the profoundly perceptual nature of the final work. Furthermore, we do not strive for a perfect prediction or generation, both with respect to the allocation of the coordinates and to the rendering of the style of the original artwork in the final panorama; oppositely, we allow for a certain degree of freedom, which, we believe, is innate to the artistic license of the work.
Particularly, we quantitatively evaluate the fictional coordinates assignment (Section 2.2 and 3.1). We set out to evaluate the results according to two criteria: spatial dispersion of the predicted coordinates, and โsemanticโ accuracy of the new locations. The first criterion is motivated by the joint desire of our team and the HAM curatorial team to disseminate the artworks across the whole area of the city, against the natural and cultural tendency to concentrate artworks and attractions in the center or in limited areas of a city. We decided to evaluate the 2D dispersion of the predicted locations in the city using the mean Euclidean distance from the central predicted point, quantifying a pseudo standard deviation of the points. The second criterion is related to the original idea of this AI curation: to manifest a pseudo-agnostic machine view of the city. We purposefully abstract from the specificities of the urban context and only adopt the visual and textual embeddings of the artworks to estimate a location. We then evaluate the degree of plausibility of the predicted locations. As a proxy, we use the validation set from the public artworks dataset and evaluate the methods using the mean Euclidean distance of the predicted locations of the public artworks against their real locations. This, we believe, gives us an estimate of the predictive power of the two methods when given as input only the visual-textual embeddings and none of the urban confounding factors.
To evaluate the quality of the final panoramas we use both qualitative and quantitative methods (Section 3.2). Qualitative assessment is in essence highly subjective, especially in the context of artistic creation, which makes it challenging to evaluate. When starting this project, the team had in mind specific requirements regarding the output produced by the computational process, which are related to the level of abstraction of the created image, the degree of visual similarity with the original artworks, and the sense of connection with the urban environment where the artwork is projected. Furthermore, the project was developed with periodic feedback from people directly involved with the HAM collection, which consists of a team of expert curators. We assess the opinion of this expert public, as well as integrate a quantitative evaluation to support the qualitative aspects taken into account.
We perform the qualitative assessment in the form of a questionnaire, which presents multiple-choice, ordinal, and interval-scale questions. The survey is conducted with a group of expert curators based in the city of Helsinki, who evaluate the output of the AI curation according to the following criteria:
- the level at which the essence of the original panorama is retained in the final panorama,
- the level at which the essence of the original artwork is preserved in the generated panorama,
- the strength of the relationship between the original artwork and the assigned urban context.
The questionnaire is detailed in Appendix 2 [TODO].
The quantitative assessment of the produced artwork addresses, partially, the above criteria with the following comparative analysis:
- The color distribution between the original artwork and the art panoramas, to determine their degree of color similarity in spite of the fact that the generation is guided by textual descriptions. The five most representative colors of each image are obtained by clustering the colors in each image using k-Means <cit.> with k=5 clusters and taking the centroids of the clusters as the crucial colors. We then represent each image by feature vectors encoding the HSV channels of these centroids, giving three 5-dimensional embeddings per image, which can be used to compare the color similarity both in terms of general hue and in terms of coherence in saturation and color intensity (value). A sketch of both metrics is given after this list.
- Histogram of Oriented Gradients (HOG) to evaluate the art panoramas and their corresponding 360ยฐ panoramic views of the city, and determine the degree of similarities in the urban shapes preserved in the computationally generated images. We compute the distribution of orientations in each 16x16 patch with 8 possible orientations for each image and use these to compare the line dynamics between the original panorama and the generated one.
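One way to realize these two metrics is sketched below, assuming images are loaded as RGB (respectively grayscale) NumPy arrays; the function names are illustrative.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import hog
from sklearn.cluster import KMeans

def dominant_colors(image_rgb, k=5):
    """Five dominant colours as k-means centroids, expressed in HSV."""
    hsv = rgb2hsv(image_rgb).reshape(-1, 3)
    centroids = KMeans(n_clusters=k, n_init=10).fit(hsv).cluster_centers_
    return centroids[:, 0], centroids[:, 1], centroids[:, 2]  # H, S, V vectors

def hog_descriptor(image_gray):
    """Distribution of 8 gradient orientations over 16x16 patches."""
    return hog(image_gray, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1), feature_vector=True)
```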
[NOTE] The results of the second part of this methodology are currently being updated and will be re-uploaded soon.
ยง RESULTS
In this section, we present our quantitative and qualitative results, related to the computation of fictional coordinates and the quality of the generated panoramas.
ยง.ยง Fictional coordinates
We compare and evaluate the performance of the fictional coordinate assignment as explained in Section 3. The results are presented in Table <ref>. After hyperparameter tuning, the best-performing models in terms of mean squared and mean absolute error against the original coordinates are the machine-learning-based methods, with a marginal preference for the SVR in terms of absolute error and for the Random Forest in terms of squared error. On the other hand, the spatial dispersion of the machine-learning-based models is rather poor, and the similarity-based methods improve the dispersion considerably. The geographic dispersion is also visible in Figure <ref>, where it becomes clear that the predictions of the Random Forest are not able to capture the variance of the original data.
The similarity method overcomes such low prediction variation, with only a slight increase in the error rates (which is almost none for the non-weighted similarity method). This approach results in a roughly even and well-distributed set of locations, which are easy to navigate and distinctive from each other (Figure <ref>).
As there are no ground truth locations for the indoor artworks, we are only interested in a meaningful way of representing the locations through the eyes of the machine. We aim to recreate the space of the city, an alternative Helsinki through the lens of machine perception. The unsupervised method is justified exactly by this premise, that the space should follow a semantic continuity rather than a strictly geographical one. In addition, the triangulation is inspired by GPS localization, reconnecting symbolically to the bases of geolocation.
Therefore, we select GPS-inspired similarity as the most suitable method to generate fictional locations. We prefer the non-weighted similarity over the weighted option, despite a slight decrease in dispersion, because of the marginal improvement in the squared error. With this method, we obtain a dataset of 1,744 predicted locations, of which we are able to retrieve 1,681 equirectangular street view images within a range of 250 meters. The remaining 3.61% are assigned the set of generated HDR (High Dynamic Range) images of forest (14%) and sea (86%) landscapes.
We inspect the three most similar images from a randomly selected set of indoor artworks from which we draw some general impressions on the nature of such similarity. The retrieved images seem to have both conceptual and visual similarities to the original image (Figure <ref>), where the concept of religion is used in the retrieval of the first image, even in the absence of any physical connection. In some cases, similarities are found across modalities, with instances of connections between paintings and statues. For example, formalistic properties are shared between query and retrieved images (e.g., a drawing of a square triggers cubic-shaped sculptures in retrieval). On the other hand, some public artworks are retrieved repeatedly, sometimes without any clear connection, indicating that the search space is not equally likely for all images. The limited conceptual connections are unsurprising as CLIP prompts function as textual descriptions of the visual, and not as an art historical explanation of the artwork. Moreover, we note that CLIP-Interrogator only adopts a limited vocabulary in the extraction of the textual description, which is not suited for art historical descriptions.
ยง.ยง Art Panoramas
[NOTE] This section is currently being updated.
With the generation of 360ยฐ art panoramas using depth maps as a guide, we aim to maintain the spatial geometry of the image close to that of the original street views. While in most cases that geometry is kept and the perceived resulting space still captures the 3-dimensional quality of the original urban setting (Figure <ref>), the results show that the built physical features can become highly transformed depending on the graphical and pictorial style of the source artwork. For instance, very atmospheric pictorial styles, where contours are very blurred and diffuse, tend to generate resulting 360ยฐ art panoramas with less recognizable geometries of specific physical features of the original urban elements, while keeping the same perception of depth and overall composition. We see that most of the panoramas broadly reflect high coherence with respect to the semantics but significant shifts in color palette and brushstroke. Concretely, we highlight how the textual information used to generate art panoramas is often insufficient for an appropriate match of the color palette and style properties of artworks. The perceived space changes with the artistic style of the original artwork, rendering a new dreamed Helsinki as seen through the collection by the proposed pipeline. The immersion of the resulting 360ยฐ art panoramas is generally satisfactory as tested through a generic online 360ยฐ panorama viewer VR[See https://renderstuff.com/tools/360-panorama-web-viewer/renderstuff.com.]. It is to be noted that those panoramic views are experienced through a fixed standpoint and do not each result in a synthetic 3D navigable space. Nevertheless, using 360ยฐ panoramas allows us to capture a complete spherical view of the image surroundings, hence leading users towards a credible immersive experience.
ยง DISCUSSION
Previous sections have presented our curatorial work for the 2023 Helsinki Biennial. The process of creating a new spatial projection of artworks from the HAM collection entails ethical and scholarly issues. These are part of a larger narrative that continuously unfolds and is focused on the ways creative AI applications (en)force global cultures.
ยง.ยง Towards a curatorial machine
The stack of models presented works as a curatorial agent, not simply following another sophisticated search engine paradigm, but as a mediating actor. In modern society, we are already witnessing machine curatorial processes that guide cultural aesthetic preferences <cit.> (e.g., recommender systems), but this example showcases and highlights new emergent practices that point to a shift in the modalities involved in cultural curation and artistic production. Particularly, agency comes into question in this work, a modular stacking of several algorithms from a variety of tasks and applications. The complexity of such products makes it unsurprising that both scholars and the broader public are ready to project authorship to these models as we have recently seen in debates about ChatGPT and GPT3-4 <cit.>. In our case, the capacities of a curatorial agent which relate to the possibility of connecting images and texts to a common embedding space, are inherited from CLIP. Furthermore, the capacity to establish translations between the visual, textual, and spatial, situates the imagined collection in a new geography that is neither unreal nor illusory as it is shared and can be experienced, much like curation. Curation is no longer reserved for humans, just as this curatorial practice transgresses the domain-specific knowledge of the art world. As machine curation develops and normalizes, understanding the inner functioning of machine learning models becomes an increasingly crucial literacy, relevant to a general set of skills in contemporary artistic curation.
ยง.ยง Avenues for public engagement
As this curation is not a physical exhibition, it can inhabit endless physical spaces, each creating a different form of public engagement. The first possibility of physical interaction is inside the walls of the HAM. This can take the form, among others, of a 2D projection screen, a 3D immersive curved screen, or a Virtual Reality headset. Due to the nature of the panoramas, we believe the best interaction would be achieved through a cylindrical screen or a dome, which would immerse the public in a 360-degree setting of the surrounding mutated city; unlike the VR headset, this projection is suited to the fixed central standpoint of the produced panoramas. The geographical nature of the project opens a second avenue for physical interaction outside the museum space: the street. Here, we believe an interesting interaction can take place as a mobile AR experience. The public would engage with the machinic Helsinki at the location of the real Helsinki, superimposing the two worlds simultaneously.
These avenues lead us to ask what the effects of this newly imagined urban landscape on the โrealโ Helsinki are, and what interactions it will unleash once deployed. The impact of digital versions of a given environment, especially urban, is being discussed within the digital twin and smart city scene <cit.>, but the effects of such an imagined digital version on the behaviors, attitudes, urban development, and other aspects of the social and built urban environment are yet to be assessed.
In addition, the potential effects of such an experience in fostering enhanced levels of spatial agency are also to be considered and assessed. Aesthetic experiences can be conducive to emotional reactions, which in turn alter not only the way we perceive the space around us but also can modulate our perception of affordances and therefore, our spatial capacities <cit.>. Our project seeks to revisit the urban spaces of Helsinki through the imagined panoramas, inviting the public to engage differently with their urban imaginary. An AR implementation that locally reacts to the location of the visitors, either through geolocation estimation via GPS triangulation or through on-site QR code scanning can add a crucial component, resulting in a more convincing situated experience and therefore, a more impactful emotional and affective engagement.
ยง.ยง Ethical considerations
In our globalized world, an art biennial is an event designed to showcase and locate a city on the map of current creative and influential cities worldwide. It is, therefore, about status and attracting attention, explicitly foregrounding the added value of the cityโs assets, innovative profile, and capacity to become a central player <cit.>. In the case of this project, we were commissioned to work with and feature the collections of the HAM collection, which is primarily composed of artworks from local artists, potentially for a global audience. We tackled such a conundrum with the clear objective of avoiding manipulating the original artworks in order to respect their artistic integrity and their rich and complex relationship with the many layers of memory, identity, and local cultural heritage.
The ethical stakes of such an operation need to be carefully delineated. On the one hand, there is a need to respect, as outlined above, the artwork regarding its cultural context, and, on the other hand, we need to consider the approach from a global public, necessarily agnostic of those specificities. This second approach requires some freedom to appropriate, recombine, and re-imagine cultural production in the process of international artistic influence and cross-fertilization.
In this conundrum, the machine (meaning the actant resulting from the stack of diverse models) becomes an aid. It enables a particular reading of these images. However, the fact that we are using a CLIP-guided model, and getting the textual and visual embeddings from CLIP, means that we are, in practice, reading a collection (the one of HAM) through another collection (the CLIP training dataset). What that exactly means remains a complex and multifaceted question, but in our case, it becomes a crucial aspect that problematizes the desired culturally agnostic approach as made through the machine. It confirms that cultural framing is already embedded in our computational models and begs the question of how to conceive an AI curation that embraces diverse cultural frames and a non-colonial approach.
ยง CONCLUSION
In this paper, we presented a novel AI curation of a collection of works of art carried out for the 2023 Helsinki Biennial. Because of our will to propose a culturally agnostic view on artistic material, we decided to represent the collection through the lens of deep learning models, proposing a machinic approach to art curation and anchoring it in the city of Helsinki. Using the Helsinki Art Museum (HAM) collection, we used the CLIP-Interrogator to create textual descriptions of the works of art and assigned them fictional coordinates around the city of Helsinki through a similarity-based algorithm. A new synthetic 360° panoramic view of the predicted location was then generated from the original depth map of the location and the CLIP prompt, thus proposing a new visual style or flavor of the city of Helsinki. We introduced a discussion on current AI curation practices and the innovations presented by our work. Moreover, we developed a comprehensive evaluation combining descriptive, quantitative, and qualitative methods.
Part of the generated material is made accessible with the implementation of a web application in collaboration with the designer Yehwan Song[See <https://yhsong.com/>.]. The goal of the platform is to propose to the user the possibility to navigate these fictional projections, to move from one space to the next, thus discovering the HAM collection through the gaze of the machine and opening the question of the structure of the navigation in this synthetic geography. As a complement to this free navigation, a future line of research will be to use other machine learning models to automatically generate narratives and shape new threads of exploration of that space.
Digital Visual Studies is a project funded by the Max Planck Society. Furthermore, we would like to thank the team behind the organization of the 2023 Helsinki Biennial as well as those responsible for the Helsinki Art Museum collections for their collaboration and assistance. We would also like to thank the artists featured in the HAM collections whose artworks constitute the source materials for the project.
ACM-Reference-Format
|
http://arxiv.org/abs/2306.01516v1
|
20230602130846
|
Lensing reconstruction from the cosmic microwave background polarization with machine learning
|
[
"Ye-Peng Yan",
"Guo-Jian Wang",
"Si-Yu Li",
"Yang-Jie Yan",
"Jun-Qing Xia"
] |
astro-ph.CO
|
[
"astro-ph.CO"
] |
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 100875, China; [email protected]
Department of Astronomy, Beijing Normal University, Beijing 100875, China
School of Chemistry and Physics, University of KwaZulu-Natal, Westville Campus, Private Bag X54001 Durban, 4000, South Africa
NAOC-UKZN Computational Astrophysics Centre (NUCAC), University of KwaZulu-Natal, Durban, 4000, South Africa
Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Science, P. O. Box 918-3 Beijing 100049, Peopleโs Republic of China
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 100875, China; [email protected]
Department of Astronomy, Beijing Normal University, Beijing 100875, China
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 100875, China; [email protected]
Department of Astronomy, Beijing Normal University, Beijing 100875, China
The lensing effect of the cosmic microwave background (CMB) is a powerful tool for studying the distribution of matter in the universe. The quadratic estimator (QE) method, which is widely used to reconstruct the lensing potential, is known to be sub-optimal for the low-noise polarization data expected from next-generation CMB experiments. To improve the performance of the reconstruction, other methods have been developed, such as the maximum likelihood estimator and machine learning algorithms. In this work, we present a deep convolutional neural network model named the Residual Dense Local Feature U-net (RDLFUnet) for reconstructing the CMB lensing convergence field. By simulating lensed CMB data with different noise levels to train and test network models, we find that for noise levels below 5 μK-arcmin, RDLFUnet can recover the input gravitational potential with a higher signal-to-noise ratio than the previous deep learning and traditional QE methods at almost all observed scales.
ยง INTRODUCTION
Over the past decades, observations of the cosmic microwave background (CMB) have established its anisotropies as a robust and powerful cosmological probe, as demonstrated by many experiments such as COBE <cit.>, Boomerang <cit.>, WMAP <cit.>, Planck <cit.>, and SPT <cit.>. An important goal for current and next-generation CMB experiments is the precise measurement of anisotropies in its polarization, particularly the search for the faint primordial B-mode polarization induced by primordial gravitational waves. Many upcoming and proposed CMB surveys will be devoted to this direction, such as SPT-3G <cit.>, CMB-S4 <cit.>, the Simons Observatory <cit.>, LiteBIRD <cit.>, and AliCPT <cit.>. The gravitational lensing effect of the large-scale structure has a significant impact on CMB observations. Gravitational lensing of the CMB arises from the deflection of CMB photons as they pass through the matter distribution between the last-scattering surface and us, which leads to a subtle remapping of the temperature and polarization anisotropies <cit.>. More specifically, lensing smooths the acoustic peaks of the CMB power spectra, generates non-stationary statistics of the CMB fluctuations, and converts E-mode polarization into B-mode polarization <cit.>. It is therefore essential to remove the lensing effect (delensing) from the observed CMB to reveal the unaltered primordial signal. On the other hand, CMB lensing is a powerful probe of the growth of cosmic structures because it encodes information about the matter fluctuations in the observed CMB maps. Reconstructing and analysing a lensing map from CMB data can help constrain the ΛCDM cosmological model, the neutrino mass scale, modified gravity, and a wealth of other cosmological physics <cit.>. All in all, reconstruction of the lensing potential and delensing of the observed CMB data are key to decoding early-universe physics.
CMB photons deflection angle is proportional to the gradient of the lensing potential ฯ, which is the projected weighted gravitational potential along the line-of-sight between us and the CMB. Quadratic estimator <cit.> has been used with great success to reconstruct the CMB lensing potential from CMB temperature and polarization maps and to detect the effects of gravitational lensing at high significance with existing CMB surveys <cit.>.
The QE is nearly optimal at the noise levels of current CMB experiments. However, it becomes significantly sub-optimal once the noise level falls below ∼5 μK-arcmin <cit.>, which could be achieved in next-generation CMB surveys such as CMB-S4. Previous works <cit.> demonstrated that likelihood-based approaches can significantly enhance the lensing reconstruction by efficiently utilizing higher-order statistics of the CMB fields at low noise levels.
Machine learning has demonstrated exceptional capabilities in image processing tasks such as image recognition and denoising, and, thanks to the impressive advancements in computer science in recent years, it has been widely used in astrophysical research <cit.>. Machine learning has also recently demonstrated its potential for CMB lensing reconstruction. Specifically, <cit.> and <cit.> applied the Residual-Unet (ResUnet) model <cit.> to the reconstruction of the CMB convergence κ. In addition, <cit.> and <cit.> used the ResUnet model to reconstruct maps of patchy reionization and cosmic polarization rotation. In these investigations, the model successfully reconstructs κ or rotation maps and has lower lensing reconstruction noise than the QE technique at low noise levels. Their results, however, are noise-dependent, and the power spectrum of the κ field is difficult to reconstruct accurately in the presence of non-negligible noise. <cit.> employs a generative adversarial network (GAN) to reconstruct the κ power spectrum accurately despite the presence of noise; however, the GAN model's reconstruction of the κ map contains more noise than that of the ResUnet model. As an improvement over the ResUnet model, we introduce a convolutional neural network to reconstruct the CMB lensing convergence field.
This paper is organized as follows. In Section <ref>, we briefly review the background of CMB lensing and the quadratic estimator. In Section <ref>, we describe the simulated data sets as well as our network model. In Section <ref>, we present the results of our method and compare them with the ResUnet model and the quadratic estimator approach. Discussions of the CNN method are presented in Section <ref>. Finally, we conclude in Section <ref>.
ยง WEAK LENSING OF THE CMB
In this section, we briefly review the physical underpinnings of CMB lensing and its reconstruction with the quadratic estimator. We work in the flat-sky approximation and denote two-dimensional Fourier wave numbers by ℓ for CMB fields and L for the lensing potential. Gravitational lensing deflects the paths of CMB photons, resulting in a remapping of the primary CMB on the sky <cit.>,
T̃(n̂) = T(n̂ + α(n̂)),
(Q̃ ± iŨ)(n̂) = (Q ± iU)(n̂ + α(n̂)),
where n̂ denotes the line-of-sight direction, (T̃, Q̃, Ũ) are the lensed CMB fields, (T, Q, U) are the primordial CMB fields, and α is the deflection angle that describes the remapping. The deflection angle is the gradient of the lensing potential φ, which is a weighted integral of the gravitational potential ψ along the line of sight:
α = ∇φ,
φ(n̂) = -2 ∫_0^χ_s dχ [(χ_s - χ) / (χ χ_s)] ψ(χ n̂, χ),
where χ is the comoving distance along the line of sight and χ_s is the distance to the last-scattering surface. The lensing potential can also be represented by the convergence field κ, because one can move between the two fields using the Poisson equation. In Fourier space, using the flat-sky approximation, this relationship can be written as
κ(L) = -(L^2 / 2) φ(L).
Gravitational lensing affects the auto- and cross-spectra of the CMB fields and produces correlations between different modes ℓ that are proportional to the lensing potential. In the flat-sky approximation, the off-diagonal mode coupling has the form
⟨X̃(ℓ_1) Ỹ(ℓ_2)⟩ ≈ f^φ_XY(ℓ_1, ℓ_2) φ(L),
where L = ℓ_1 + ℓ_2, (X, Y) are CMB temperature or polarization fluctuations (T, E, B), and ⟨·⟩ denotes the average over CMB realizations. In this work, we focus on the quadratic estimator <cit.>, for which the minimum-variance quadratic estimator of the CMB lensing potential is written as
φ̂_β(L) = A_β(L) ∫ d^2ℓ_1/(2π)^2 X^obs(ℓ_1) Y^obs(ℓ_2) F_β^φ(ℓ_1, ℓ_2),
where the superscript 'obs' denotes the observed map including instrumental noise, and F_β^φ is the variance filter. Here β = XY, with X, Y ∈ {T, E, B}. The normalization factor A_β(L) is chosen to make the estimator unbiased, ⟨φ̂_β(L)⟩ = φ(L), and for the least-variance estimator it has the form
A_β(L) = [ ∫ d^2ℓ_1/(2π)^2 f^φ_β(ℓ_1, ℓ_2) F_β^φ(ℓ_1, ℓ_2) ]^-1.
For the EB estimator, we have
F_EB^φ(ℓ_1, ℓ_2) = f^φ_EB(ℓ_1, ℓ_2) / (C_ℓ_1^EE,obs C_ℓ_2^BB,obs),
and
f^φ_EB(ℓ_1, ℓ_2) = [ (ℓ_1·L) C_ℓ_1^EE - (ℓ_2·L) C_ℓ_2^BB ] sin 2(φ_ℓ_1 - φ_ℓ_2).
For the EE estimator, we have
F_EE^φ(ℓ_1, ℓ_2) = f^φ_EE(ℓ_1, ℓ_2) / (2 C_ℓ_1^EE,obs C_ℓ_2^EE,obs),
and
f^φ_EE(ℓ_1, ℓ_2) = [ (ℓ_1·L) C_ℓ_1^EE + (ℓ_2·L) C_ℓ_2^EE ] cos 2(φ_ℓ_1 - φ_ℓ_2).
The noise properties of these estimators follow from
⟨φ̂_β(L) φ̂_γ(L')⟩ = (2π)^2 δ(L - L') [ C_L^φφ + N^φ_βγ(L) ],
where
N^φ_βγ(L) = A_β(L) A_γ(L) ∫ d^2ℓ_1/(2π)^2 F^φ_β(ℓ_1, ℓ_2) [ F_γ^φ(ℓ_1, ℓ_2) C_ℓ_1^X_βX_γ C_ℓ_2^Y_βY_γ + F_γ^φ(ℓ_2, ℓ_1) C_ℓ_1^X_βY_γ C_ℓ_2^Y_βX_γ ].
Similarly to <cit.>, we use in this paper the minimum-variance combination of all the polarization pairs,
φ̂^QE(L) = ∑_β ω_β(L) φ̂_β(L),
instead of the EB-only estimator used in <cit.>. Here, the minimum-variance weighting is
ω_β(L) = ∑_γ (N^-1)_βγ / ∑_γη (N^-1)_γη.
The reconstruction noise of the minimum-variance estimator is then
N_L = 1 / ∑_βγ (N^-1)_βγ.
ยง METHODOLOGY
In this section, we will introduce the deep learning architecture we use in this work and the details of the data pipeline and network training.
ยง.ยง Network architecture
The convolutional neural network (CNN) has been widely applied to image processing tasks. Its power relies on a network architecture that consists of a stack of non-linear parametric models; through training, the network converts complex problems into parameter optimization. The convolutional layer <cit.> is the core of the architecture. Each convolutional layer accepts the feature images from the former layer as input, convolves them with a bank of local spatial filters (or kernels) whose values are parameters to be learned, and passes the result to the next layer after a nonlinear activation function. The output size of a convolutional layer is controlled by three hyper-parameters: the number of output channels (the number of convolutional filters), the stride, and the amount of zero padding. The stride is defined as the distance in pixels between the centers of adjacent filters. A residual connection <cit.>, widely used in image processing tasks, is designed to learn a residual mapping by adding the inputs of a given block to its outputs. The U-Net <cit.> is a successful architecture that takes an encoder-decoder with convolutional layers and adds extra shortcuts (skip connections) between the encoding and decoding layers to allow the propagation of small-scale information that might otherwise be lost when the size of the images decreases. In this work, we introduce the Residual Dense Local Feature U-Net (RDLFUnet), a deep learning architecture designed to reconstruct the CMB lensing potential. This architecture combines the Residual-Unet (ResUnet) <cit.> with a modified Residual Local Feature Network (RLFN) <cit.>. The ResUnet is created by adding residual connections to the U-Net. The RLFN is an excellent architecture for image super-resolution reconstruction.
Our network architecture is divided into two branches, as shown in Figure <ref>. The first branch is a modified RLFN. The residual local feature block (RLFB), which employs three convolutional layers for residual local feature learning, is the fundamental building block of the RLFN, as depicted in the bottom panel (b). The RLFB finally employs an Enhanced Spatial Attention (ESA) block <cit.> to produce its output; the ESA block makes the residual features focus more on critical spatial content. In this work, the residual dense local feature block (RDLFB), depicted in the bottom panel (c), replaces the RLFB. In contrast to the RLFN with its plain residual connection, the RDLFB uses a residual dense connection <cit.>. In our tests, this change enhances the ability to reconstruct the CMB lensing potential. The second branch, known as ResUnet, is shown in the bottom panel (e) of Figure <ref>. The ResUnet encodes relevant information from the input maps into smaller maps and then decodes that information to generate the output maps. The outputs of the two branches are concatenated along the channel dimension and passed to three residual blocks, which finally produce the gravitational convergence κ map. Our network model is built in the [<https://pytorch.org/>] environment, an open-source optimized tensor library for deep learning.
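As a schematic illustration of the dense-connection pattern inside the RDLFB, a simplified PyTorch sketch is given below; the ESA block is abbreviated to a 1x1 convolution for brevity, so this is an illustration rather than the exact block used in RDLFUnet.

```python
import torch
import torch.nn as nn

class RDLFB(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(3 * channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(4 * channels, channels, 1)   # stand-in for ESA
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(torch.cat([x, f1], dim=1)))      # dense links
        f3 = self.act(self.conv3(torch.cat([x, f1, f2], dim=1)))
        out = self.fuse(torch.cat([x, f1, f2, f3], dim=1))
        return out + x                                            # residual path
```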
The network model is optimized by minimizing a loss function. The training set consists of S pairs of samples {x_i, y_i}_i=1^S, where x represents the observed Q/U maps and y is the corresponding ground-truth κ map. Our loss function is defined as
ℒ = ℒ_LAD + β ℒ_FFT,
where ℒ_LAD is the least absolute deviation (LAD, also called L1) loss, ℒ_FFT is the Fourier-space loss, and β is a coefficient representing the contribution of ℒ_FFT to the total loss; we set β = 1 throughout the paper. The L1 loss has the form
ℒ_LAD = 1/N ∑_n=1^N [ 1/(WH) ∑_w=1^W ∑_h=1^H |I^n_w,h - y^n_w,h| ],
where N is the batch size, H (W) is the height (width) of the images in pixels, and I = f(x) (with f(·) the network model) is the predicted image. The Fourier-space loss ℒ_FFT is defined as
ℒ_FFT = 1/N ∑_n=1^N [ 1/(WH) ∑_w=1^W ∑_h=1^H |A_F(I^n)_w,h - A_F(y^n)_w,h| ],
where A_F is the amplitude of the FFT,
A_F(I) = √(Re[FFT(I)]^2 + Im[FFT(I)]^2),
with Re[·] and Im[·] denoting the real and imaginary parts, respectively.
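A minimal PyTorch sketch of this loss, assuming pred and target are batched real-valued κ maps, reads:

```python
import torch

def lensing_loss(pred, target, beta=1.0):
    l_lad = torch.mean(torch.abs(pred - target))              # pixel-space L1
    amp_pred = torch.abs(torch.fft.fft2(pred))                # sqrt(Re^2 + Im^2)
    amp_target = torch.abs(torch.fft.fft2(target))
    l_fft = torch.mean(torch.abs(amp_pred - amp_target))      # Fourier-amplitude L1
    return l_lad + beta * l_fft
```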
ยง.ยง Data pipeline and network training
As a supervised machine learning technique, the CNN method needs a training dataset with known ground-truth values. Our data set is based on a flat-sky simulation. First, the publicly available [<https://github.com/cmbant/CAMB>] package is used to calculate the primordial CMB and lensing power spectra <cit.>. Here, we use a standard Λ cold dark matter model with parameters (H_0, Ω_b h^2, Ω_c h^2, τ, A_s, n_s), whose best-fit values and standard deviations are taken from the Planck 2015 data <cit.>. Then, using the theoretical power spectra from CAMB, we simulate two-dimensional maps that cover a 5° × 5° patch of sky with 128 × 128 pixels using a modified version of [<https://github.com/msyriac/orphics>] and [<https://github.com/EEmGuzman/resunet-cmb>].
Previous works <cit.> employed a fixed lensing power spectrum to generate the observed CMB maps; however, they indicated that the trained network model is sensitive to variations in the cosmological parameters. <cit.> used a varying lensing power spectrum α C_ℓ^κκ (with α drawn randomly from the range 0.75-1.25), while the cosmological parameters remained constant. In this work, in order to be more realistic and to eliminate the dependence on the cosmological parameters, we treat the values of the cosmological parameters as independent Gaussian random variables, whose means and standard deviations are taken from the best-fit values and standard deviations of the Planck 2015 results <cit.>.
Finally, similar to <cit.>, <cit.> and <cit.>, we smooth the maps with a Gaussian beam of FWHM = 1 arcmin and choose four noise levels: 0.0, 1.0, 2.0, and 5.0 μK-arcmin. A cosine taper of 0.5° is applied to each generated map to reduce edge effects. In total, five different types of maps are generated: (Q^prim, U^prim, Q^obs, U^obs, κ). Here, the superscript 'prim' denotes the primordial CMB map, while 'obs' denotes the observed CMB map including the lensing and instrumental effects.
To generate the observed maps (Q^ obs, U^ obs), the primordial maps (Q^ prim, U^ prim) are first lensed with the convergence map, then a noise map is added after these lensed maps have been smoothed with a Gaussian beam. The example observed CMB maps with varying noise levels are shown in Figure <ref>. The variations between noise levels of 0 and >1 ฮผK-arcmin are readily visible on the maps.
For each noise level, 25,000 sets of the five maps described above are generated and split into training, validation, and test sets with a ratio of 8:1:1. A separate network is trained for each noise level. During training, mock observed CMB maps (Q^obs, U^obs) are fed into the network model, which propagates them through each level and outputs a κ̂ map. We use a batch size of 32, adopt the Adam optimizer <cit.>, and initially set the learning rate to 0.005, which gradually drops to 10^-6 during the iterations. The network model is trained for a total of 60,000 iterations. We used two NVIDIA Quadro GV100 GPUs to train the networks, and one network model requires ∼12 hr to train.
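For concreteness, the sketch below shows one way to turn a lensed Q or U map into an "observed" map following the recipe above (Gaussian beam smoothing, white noise, cosine taper); the pixel size and taper fraction are the illustrative values for a 5°, 128 × 128 patch, and the function names are ours.

```python
import numpy as np

def cosine_taper(n, frac=0.1):
    """1-D taper: flat interior, cosine roll-off over `frac` of each edge."""
    w = np.ones(n)
    m = int(frac * n)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(m) / m))
    w[:m], w[-m:] = ramp, ramp[::-1]
    return w

def observe(lensed_map, fwhm_arcmin=1.0, noise_uk_arcmin=1.0,
            pixel_arcmin=5 * 60 / 128):
    n = lensed_map.shape[0]
    pixel_rad = np.radians(pixel_arcmin / 60)
    # Gaussian beam: multiply by exp(-l(l+1) sigma^2 / 2) in Fourier space
    lx = 2 * np.pi * np.fft.fftfreq(n, d=pixel_rad)
    l2d = np.sqrt(lx[:, None] ** 2 + lx[None, :] ** 2)
    sigma = np.radians(fwhm_arcmin / 60) / np.sqrt(8 * np.log(2))
    beam = np.exp(-0.5 * l2d * (l2d + 1) * sigma ** 2)
    smoothed = np.fft.ifft2(np.fft.fft2(lensed_map) * beam).real
    # white noise: per-pixel standard deviation = noise level / pixel side
    noise = np.random.normal(0.0, noise_uk_arcmin / pixel_arcmin, (n, n))
    # 0.5 deg taper on a 5 deg patch corresponds to frac = 0.1
    taper = np.outer(cosine_taper(n), cosine_taper(n))
    return (smoothed + noise) * taper
```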
ยง RESULTS
Results for the reconstructed κ map using our network and its power spectra are presented in this section. We will compare our results to those of the ResUnet model <cit.>. The reconstructed map is represented by κ̂, whereas the true map is represented by κ.
ยง.ยง Reconstructed map
First, we apply our network to the task of κ map reconstruction. We take the observed Q^obs and U^obs maps as input, and the desired output is the κ map. After training is finished, the test set is fed into the network. The reconstructed κ̂ maps are shown in Figure <ref> for the four noise experiments. In the case of noiseless input, the residual map contains minimal information, implying that the network can recover the κ map reasonably effectively. However, as noise levels increase, the amplitudes of all residual maps increase, and some small-scale structural details are missed by the reconstructed κ maps. Despite the presence of noise, our network can still recover most of the large-scale structure of the κ map.
To compare with the ResUnet model, we train the ResUnet model separately for each noise level, using the same training pipeline and network architecture as <cit.>. In Figure <ref>, we compare the results of the ResUnet with our network model at the level of the reconstructed map. We can see that, for noiseless input, the residual map κ - κ̂ has a lower amplitude than ResUnet's. We also calculate the structural similarity index measure (SSIM) <cit.> of the reconstructed and true κ maps. The SSIM measures the similarity between two images as follows:
SSIM(x, y) = (2μ_x μ_y + c_1)(2σ_xy + c_2) / [(μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)],
here, x and y represent the two images, μ_x and μ_y are the pixel sample means of x and y, σ_x^2 and σ_y^2 are the variances of x and y, σ_xy is the covariance of x and y, and c_1 and c_2 are two constants that stabilize the division in the case of a weak denominator. The closer the SSIM value is to 1, the more similar the two images are; the two images are identical if its value is 1. For the noiseless input case, the SSIMs are 0.98 ± 0.003 for our model and 0.89 ± 0.015 for the ResUnet model, indicating that our reconstructed κ̂ map is closer to the true κ than the result from the ResUnet model.
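In practice the SSIM can be computed with scikit-image; a minimal sketch, assuming kappa_true and kappa_pred are 128 × 128 NumPy arrays, is:

```python
from skimage.metrics import structural_similarity

score = structural_similarity(
    kappa_true, kappa_pred,
    data_range=kappa_true.max() - kappa_true.min())
```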
When considering the non-zero noise inputs, visual inspection shows that our reconstructed κ̂ maps also contain more small-scale structural features than those of the ResUnet model; compared with the ResUnet model, our network captures more small-scale detail. We also calculate the SSIM of the reconstructed and true κ maps. The SSIMs of our network for the 1, 2, and 5 μK-arcmin noise-level experiments are 0.68 ± 0.010, 0.56 ± 0.012, and 0.41 ± 0.015, while those of the ResUnet model are 0.62 ± 0.015, 0.52 ± 0.014, and 0.39 ± 0.017, respectively. For all three noise levels, our model performs better than the ResUnet model.
ยง.ยง Reconstructed power spectrum
Next, we present the power spectrum of the reconstructed κ̂ map at each noise level and compare it to the results of the ResUnet and QE approaches.
Figure <ref> shows the power spectra of the reconstructed κ̂ for the four input noise levels; the power spectra are the mean results over the testing set. For the noiseless input, the reconstructed κ power spectrum from our network is quite consistent with the true target power spectrum. However, as noise levels rise, the reconstructed power spectrum visibly degrades at small scales, which implies that the small-scale fluctuations are suppressed. For comparison, the ResUnet model's reconstructed power spectra for each noise level are also shown in Figure <ref>. At all noise levels, we can clearly observe that the C_L^κ̂κ̂ reconstructed by our model contains more small-scale information than that of the ResUnet model, which is consistent with the reconstructed κ̂ maps reported in Figure <ref>.
Finally, we compare the performance of the quadratic estimator in reconstructing the κ power spectrum to that of our network model. First, we need to define the reconstruction noise. As shown in Figure <ref> and Refs. <cit.>, the κ power spectrum C^κ̂κ̂ reconstructed by the network model is a biased result. We use a method from <cit.> to treat this bias: we normalize the biased κ̂ field by a factor R_L defined as
R_L = [ ⟨C_L^κκ̂⟩ / ⟨C_L^κκ⟩ ]^-1,
where ⟨...⟩ represents the average over the test set. Note that R_L is a function of scale L. The equivalent of the unbiased κ field is then given by
κ̄_L = R_L κ̂_L.
We assume that κ̄ can be modelled as κ̄ = κ + n, where n is an uncorrelated noise term. The equivalent of the reconstruction noise spectrum can then be defined as
N^κκ_L = ⟨C^κ̄κ̄_L⟩ - ⟨C^κκ_L⟩,
where C^κκ_L is the true power spectrum and
⟨C^κ̄κ̄_L⟩ = R_L^2 ⟨C^κ̂κ̂_L⟩.
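A sketch of this debiasing step, assuming cl_cross (C^κκ̂), cl_pred (C^κ̂κ̂), and cl_true (C^κκ) are arrays of binned spectra with shape (n_maps, n_bins) over the test set, is:

```python
import numpy as np

R_L = (cl_cross.mean(axis=0) / cl_true.mean(axis=0)) ** -1   # normalization factor
cl_rescaled = R_L ** 2 * cl_pred.mean(axis=0)                 # <C^{kbar kbar}_L>
N_L = cl_rescaled - cl_true.mean(axis=0)                      # reconstruction noise
```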
Figure <ref> shows the reconstruction noise power spectrum for the experiments with 1 μK-arcmin (upper panel) and 5 μK-arcmin (lower panel) noise levels using the ResUnet model, the standard quadratic estimator, and our network model. The noise power spectra (see Eq. (<ref>)) of the quadratic estimator are obtained using the publicly available [<https://github.com/simonsobs/symlens>] package.
We can see that the ResUnet model predictions have a lower reconstruction noise than the quadratic estimator at all scales for the noise level of 1 μK-arcmin and closely match the performance of the quadratic estimator for the noise level of 5 μK-arcmin. The ResUnet model performance is also clearly better than the quadratic estimator for the noise level of 2 μK-arcmin (not shown in the figure). This implies that ResUnets outperform the QE method at lower noise levels, consistent with the conclusions of <cit.>.
For experiments with noise levels below 5 μK-arcmin, the reconstruction noise from our network model is lower than that from the ResUnet model and the QE at almost all angular scales. These results suggest that our approach outperforms both the QE and the ResUnet model in terms of the signal-to-noise ratio of the reconstructed κ map.
Finally, we use the Fisher matrix method to compute the signal-to-noise ratio (SNR) <cit.>
SNR = √( ∑_L,L' C_L ℂ^-1_L,L' C_L' ),
where the numerator C_L is the theoretical κ spectrum, and ℂ_L,L' is the covariance matrix obtained from our test set via
ℂ_L,L' = 1/(N-1) ∑_n=1^N=2500 ( C^κ̂κ̂_L - C̄^κ̂κ̂_L )( C^κ̂κ̂_L' - C̄^κ̂κ̂_L' ),
where C^κ̂κ̂_L is the reconstructed κ power spectrum of a test map and C̄^κ̂κ̂_L is the reconstructed power spectrum averaged over the test set. For the experiments with a noise level of 1 μK-arcmin, the SNRs for our network model, the ResUnet, and the QE estimator are 14.7, 12.4, and 9.1, respectively. For the experiments with a noise level of 5 μK-arcmin, the corresponding SNRs are 6.1, 5.5, and 5.2. Our method therefore achieves a higher SNR than both the ResUnet and QE methods.
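A sketch of this computation, with cl_pred the binned reconstructed spectra of the 2,500 test maps (shape (n_maps, n_bins)) and cl_theory the theoretical κ spectrum in the same bins, is:

```python
import numpy as np

cov = np.cov(cl_pred, rowvar=False)              # C_{L,L'} over the test set
snr = np.sqrt(cl_theory @ np.linalg.inv(cov) @ cl_theory)
```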
ยง.ยง Null test
Finally, a null test is performed to ensure that the network learns a sensible mapping from the input lensed (Q^obs, U^obs) maps to the predicted κ̂ map. In other words, we need to check that the κ̂ maps predicted by our network model are due to the presence of the true κ signal rather than an artifact of the network (i.e., a non-physical signal). To check the output of our network model, we feed unlensed versions of the (Q, U) maps to the network trained on observed (Q^obs, U^obs) maps.
We add noise to the unlensed (Q^prim, U^prim) maps introduced in Section <ref> and feed them to the network trained on observed (Q^obs, U^obs) maps. The cross-correlation spectrum (C^κ̂κ_L) between the κ̂ reconstructed in the null test and the true κ is first calculated, and we find that they are uncorrelated at all noise levels. This suggests that when we input unlensed (Q, U) maps into the network trained on observed maps, the output κ map is merely random noise.
The power spectrum of the output κ̂ map from the null test is then calculated and compared to the reconstruction noise. To address the bias, we first use Eq. (<ref>) to normalize the biased κ̂ map from the null test. We then calculate the rescaled power spectrum C^κ̄κ̄_L of Eq. (<ref>) from the output of the null test and compare it to the reconstruction noise shown in Figure <ref>. Figure <ref> shows the rescaled power spectra ⟨C^κ̄κ̄_L⟩ for each noise level together with the reconstruction noise. The rescaled power spectra from the null test are basically consistent with the reconstruction noise at all noise levels, which is also consistent with the result of <cit.>. These null tests illustrate that our network successfully learns a sensible mapping from the input lensed (Q^obs, U^obs) maps to the predicted κ̂ map.
We also notice a difference of a few percent between C^κ̄κ̄_L and N^κκ_L at large scales, and this difference becomes more pronounced for input maps with lower noise levels. We suspect that this is caused by the normalization factor R in Eq. (<ref>), which may not be very accurate and can bias the κ̄ field, because the R computed from the training set fluctuates high or low.
ยง DISCUSSIONS
ยง.ยง Comparing with ResUnet Methods
In this work, we compared the performance of the ResUnet and the RDLFUnet in reconstructing the κ map. To be consistent with <cit.>, the ResUnet model uses the mean squared error in image space as the loss function for training, whereas the RDLFUnet uses Eq. (<ref>). For an accurate comparison, the two models should utilize the same loss function.
We therefore use the loss function containing the FFT term, Eq. (<ref>), to train the ResUnet model for the 1 μK-arcmin noise experiment (other experiments give similar results); the results for reconstructing κ are presented in Figure <ref>. We can see that the κ̂ map reconstructed by the ResUnet with the FFT loss function has more small-scale structure than the result of the original ResUnet model. This implies that the FFT loss function helps the network capture more small-scale structure at the map level, and thus that using the FFT loss function can enhance the model's ability to reconstruct the κ map.
In terms of the reconstructed power spectrum, the C^κ̂κ̂ power spectrum from the ResUnet with the FFT loss clearly outperforms the result from the original ResUnet model, but it is still worse than the result of the RDLFUnet. This demonstrates that our loss function aids the κ reconstruction and that the RDLFUnet model outperforms the ResUnet model. The RDLFUnet model has a more complicated structure than the ResUnet model, which suggests that it may have a stronger fitting capacity. As a result, compared with the ResUnet method, both our network structure and our loss function improve the performance of reconstructing the κ map.
ยง.ยง Variability of noise level
In Section <ref>, each noise-level experiment is trained with a separate network. Here, we vary the noise level for a trained network to see how sensitive our method is to changes in the noise level. We train the RDLFUnet model on a training set with a noise level of 1 μK-arcmin and then feed it three test sets with noise levels of 0.8 μK-arcmin, 1 μK-arcmin, and 1.2 μK-arcmin. The power spectra predicted by the RDLFUnet model are shown in Figure <ref>. The predicted κ̂ power spectra for the test sets with noise levels of 0.8 μK-arcmin and 1.2 μK-arcmin are consistent with that of the 1 μK-arcmin test set. This implies that our network is insensitive to small changes in the noise level.
ยง CONCLUSIONS
In this work, we present a machine-learning method for reconstructing the CMB lensing potential. The network was trained using simulated CMB maps of 5 × 5 deg^2 size. We compared the performance of the RDLFUnet with that of the ResUnet model from previous work. Based on the reconstructed results of four tests with varying noise levels (0, 1, 2, and 5 μK-arcmin), we demonstrated that the RDLFUnet can capture more small-scale structure than the ResUnet at the level of the reconstructed κ map.
At the level of the reconstructed power spectrum, we demonstrated that our method outperforms the ResUnet in reconstructing the lensing convergence power spectrum and achieves a higher signal-to-noise ratio than the ResUnet and quadratic estimator methods over almost the entire range of observed scales.
All results in this paper are based on simulated data. To extend this network to actual data, we need to include foreground residuals in the input maps; we plan to reconstruct CMB lensing from foreground-cleaned CMB polarization data in future work. In addition, since actual data are often taken over larger regions of the sky, we also need to use simulations of larger sky patches. The flat-sky approximation will then no longer be valid, necessitating the modeling of a spherical sky map. The spherical HEALPix-format map needs to be converted into 2D data, since the neural network takes 2D inputs; this issue has been addressed in <cit.> and <cit.>. We will also investigate the impact of different foreground removal methods on the reconstruction of CMB lensing in future work.
We will further investigate the applicability of our method to other tasks, such as the reconstruction of foregrounds, CMB delensing, and the component separation of future radio surveys, in future work.
ยง ACKNOWLEDGEMENTS
J.-Q. Xia is supported by the National Science Foundation of China under grants No. U1931202 and 12021003; the National Key R&D Program of China No. 2017YFA0402600 and 2020YFC2201603.
[Abazajian et al.(2016)] Abazajian, K. N., Adshead, P., Ahmed, Z., et al. 2016, arXiv:1610.02743
[Ade et al.(2019)] Ade, P., Aguirre, J., Ahmed, Z., et al. 2019, 2019, 056. doi:10.1088/1475-7516/2019/02/056
[Bennett et al.(2013)] Bennett, C. L., Larson, D., Weiland, J. L., et al. 2013, 208, 20. doi:10.1088/0067-0049/208/2/20
[Benson et al.(2014)] Benson, B. A., Ade, P. A. R., Ahmed, Z., et al. 2014, 9153, 91531P. doi:10.1117/12.2057305
[Caldeira et al.(2019)] Caldeira, J., Wu, W. L. K., Nord, B., et al. 2019, Astronomy and Computing, 28, 100307. doi:10.1016/j.ascom.2019.100307
[Carron & Lewis(2017)] Carron, J. & Lewis, A. 2017, 96, 063510. doi:10.1103/PhysRevD.96.063510
[Carron(2019)] Carron, J. 2019, 99, 043518. doi:10.1103/PhysRevD.99.043518
[Darwish et al.(2021)] Darwish, O., Madhavacheril, M. S., Sherwin, B. D., et al. 2021, 500, 2250. doi:10.1093/mnras/staa3438
[Dumoulin & Visin(2016)] Dumoulin, V. & Visin, F. 2016, arXiv:1603.07285
[Fluke & Jacobs(2020)] Fluke, C. J. & Jacobs, C. 2020, WIREs Data Mining and Knowledge Discovery, 10, e1349. doi:10.1002/widm.1349
[Guzman & Meyers(2021)] Guzman, E. & Meyers, J. 2021, 104, 043529. doi:10.1103/PhysRevD.104.043529
[Guzman & Meyers(2022)] Guzman, E. & Meyers, J. 2022, 2022, 030. doi:10.1088/1475-7516/2022/01/030
[Hazumi et al.(2019)] Hazumi, M., Ade, P. A. R., Akiba, Y., et al. 2019, J Low Temp Phys, 194, 443. doi:10.1007/s10909-019-02150-5
[He et al.(2015)] He, K., Zhang, X., Ren, S., et al. 2015, arXiv:1512.03385
[Heinrich et al.(2022)] Heinrich, C., Driskell, T., & Heinrich, C. 2022, arXiv:2210.07391
[Hirata & Seljak(2003a)] Hirata, C. M. & Seljak, U. 2003a, 67, 043001. doi:10.1103/PhysRevD.67.043001
[Hirata & Seljak(2003b)] Hirata, C. M. & Seljak, U. 2003b, 68, 083002. doi:10.1103/PhysRevD.68.083002
[Horowitz et al.(2019)] Horowitz, B., Ferraro, S., & Sherwin, B. D. 2019, 485, 3919. doi:10.1093/mnras/stz566
[Hotinli et al.(2022)] Hotinli, S. C., Meyers, J., Trendafilova, C., et al. 2022, 2022, 020. doi:10.1088/1475-7516/2022/04/020
[Hu & Okamoto(2002)] Hu, W. & Okamoto, T. 2002, 574, 566. doi:10.1086/341110
[Huang et al.(2016)] Huang, G., Liu, Z., van der Maaten, L., et al. 2016, arXiv:1608.06993
[Kayalibay et al.(2017)] Kayalibay, B., Jensen, G., & van der Smagt, P. 2017, arXiv:1701.03056
[Kingma & Ba(2014)] Kingma, D. P. & Ba, J. 2014, arXiv:1412.6980
[Kong et al.(2022)] Kong, F., Li, M., Liu, S., et al. 2022, arXiv:2205.07514
[Lange et al.(2001)] Lange, A. E., Ade, P. A., Bock, J. J., et al. 2001, 63, 042001. doi:10.1103/PhysRevD.63.042001
[Legrand & Carron(2022)] Legrand, L. & Carron, J. 2022, 105, 123519. doi:10.1103/PhysRevD.105.123519
[Lewis & Challinor(2006)] Lewis, A. & Challinor, A. 2006, 429, 1. doi:10.1016/j.physrep.2006.03.002
[Lewis et al.(2001)] Lewis, A., Challinor, A., & Turok, N. 2001, 65, 023505. doi:10.1103/PhysRevD.65.023505
[Lewis et al.(2000)] Lewis, A., Challinor, A., & Lasenby, A. 2000, 538, 473. doi:10.1086/309179
[Li et al.(2017)] Li, H., Li, S.-Y., Liu, Y., et al. 2017, arXiv:1710.03047
[Li et al.(2022)] Li, P., Ilayda Onur, I., Dodelson, S., et al. 2022, arXiv:2205.07368
[Liu et al.(2022)] Liu, J., Zhang, W., Tang, Y., et al. 2020, In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2359-2368. doi:10.1109/CVPR42600.2020.00243
[Liu et al.(2022)] Liu, J., Sun, Z., Han, J., et al. 2022, Science China Physics, Mechanics, and Astronomy, 65, 109511. doi:10.1007/s11433-022-1966-4
[Louis et al.(2017)] Louis, T., Grace, E., Hasselfield, M., et al. 2017, 2017, 031. doi:10.1088/1475-7516/2017/06/031
[Maniyar et al.(2021)] Maniyar, A. S., Ali-Haïmoud, Y., Carron, J., et al. 2021, 103, 083524. doi:10.1103/PhysRevD.103.083524
[Mather et al.(1994)] Mather, J. C., Cheng, E. S., Cottingham, D. A., et al. 1994, 420, 439. doi:10.1086/173574
[Mehta et al.(2019)] Mehta, P., Bukov, M., Wang, C.-H., et al. 2019, 810, 1. doi:10.1016/j.physrep.2019.03.001
[Millea et al.(2019)] Millea, M., Anderes, E., & Wandelt, B. D. 2019, 100, 023509. doi:10.1103/PhysRevD.100.023509
[Millea et al.(2020)] Millea, M., Anderes, E., & Wandelt, B. D. 2020, 102, 123542. doi:10.1103/PhysRevD.102.123542
[Nah et al.(2016)] Nah, S., Kim, T. H., & Lee, K. M. 2016, arXiv:1612.02177
[Petroff et al.(2020)] Petroff, M. A., Addison, G. E., Bennett, C. L., et al. 2020, 903, 104. doi:10.3847/1538-4357/abb9a7
[Planck Collaboration et al.(2020a)] Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020a, 641, A6. doi:10.1051/0004-6361/201833910
[Planck Collaboration et al.(2020b)] Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020b, 641, A8. doi:10.1051/0004-6361/201833886
[Planck Collaboration et al.(2016)] Planck Collaboration, Adam, R., Ade, P. A. R., et al. 2016, 594, A1. doi:10.1051/0004-6361/201527101
[Ronneberger et al.(2015)] Ronneberger, O., Fischer, P., & Brox, T. 2015, arXiv:1505.04597
[Smith et al.(2012)] Smith, K. M., Hanson, D., LoVerde, M., et al. 2012, 2012, 014. doi:10.1088/1475-7516/2012/06/014
[Smith et al.(2006)] Smith, K. M., Hu, W., & Kaplinghat, M. 2006, 74, 123002. doi:10.1103/PhysRevD.74.123002
[Tian et al.(2020)] Tian, C., Xu, Y., & Zuo, W. 2020, Neural Networks, 121, 461. doi:10.1016/j.neunet.2019.08.022
[Wang et al.(2022)] Wang, G.-J., Shi, H.-L., Yan, Y.-P., et al. 2022, 260, 13. doi:10.3847/1538-4365/ac5f4a
[Wu et al.(2019)] Wu, W. L. K., Mocanu, L. M., Ade, P. A. R., et al. 2019, 884, 70. doi:10.3847/1538-4357/ab4186
[Waqas Zamir et al.(2021)] Waqas Zamir, S., Arora, A., Khan, S., et al. 2021, arXiv:2102.02808
[Yan et al.(2023)] Yan, Y.-P., Wang, G.-J., Li, S.-Y., et al. 2023, 947, 29. doi:10.3847/1538-4357/acbfb4
[Zaldarriaga & Seljak(1998)] Zaldarriaga, M. & Seljak, U. 1998, 58, 023003. doi:10.1103/PhysRevD.58.023003
[Zaldarriaga & Seljak(1999)] Zaldarriaga, M. & Seljak, U. 1999, 59, 123507. doi:10.1103/PhysRevD.59.123507
[Zhang et al.(2018a)] Zhang, Z., Liu, Q., & Wang, Y. 2018a, IEEE Geoscience and Remote Sensing Letters, 15, 749. doi:10.1109/LGRS.2018.2802944
[Zhang et al.(2018b)] Zhang, Y., Tian, Y., Kong, Y., et al. 2018b, arXiv:1802.08797
[Zhou(2004)] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. 2004, IEEE Transactions on Image Processing, 13, 600-612. doi:10.1109/TIP.2003.819861
|
http://arxiv.org/abs/2306.04023v1
|
20230606213636
|
Homogeneous electron gases revisited through quantum state ensembles
|
[
"Tim Gould",
"Stefano Pittalis"
] |
physics.chem-ph
|
[
"physics.chem-ph"
] | |
http://arxiv.org/abs/2306.08572v1
|
20230614152136
|
Symmetries in elastic scattering of electrons by hydrogen atoms in a two-color bicircular laser field
|
[
"Gabriela Buica"
] |
physics.atom-ph
|
[
"physics.atom-ph"
] |
[email protected]
Institute of Space Sciences, P.O. Box MG-36, Ro 77125,
Bucharest-Mฤgurele, Romania
We consider the elastic scattering of electrons by hydrogen atoms
in the presence of a two-color circularly polarized laser field
in the domain of moderate intensities below 10^13 W/cm^2 and high projectile energies.
A hybrid approach is used, where for the interaction of
the incident and scattered electrons with the laser field we employ the
Gordon-Volkov wave functions, while the interaction of the hydrogen atom
with the laser field is treated in second-order perturbation theory.
Within this formalism, a closed analytical solution is derived for
the nonlinear differential cross section, which is valid for circular as well as linear polarizations.
Simple analytical expressions of differential cross section are derived
in the weak field domain for two-color laser field that is a combination of the fundamental
and its second or third harmonics.
It is shown that the nonlinear differential cross sections depend
on the dynamical phase of the scattering process and on
the helicities of the two-color circularly polarized laser field.
A comparison between the two-photon absorption scattering signals for two-color co- and counter-rotating
circularly polarized laser fields is made for even (2ω)
or odd (3ω) harmonics, and the effect of the intensity ratio of the
two-color laser field components is studied.
We analyze the origin of the symmetries in the differential cross sections
and we show that the modification of the photon helicity implies a change
in the symmetries of the scattering signal.
34.80.Qb, 34.50.Rk, 32.80.Wr
Symmetries in elastic scattering of electrons by hydrogen atoms
in a two-color bicircular laser field
Gabriela Buica
July 31, 2023
=======================================================================================================
ยง INTRODUCTION
The study of laser-assisted and laser-induced atomic processes has attracted an increasing theoretical
as well as experimental interest in the last 30 years.
In particular, such interest is justified because of the possibility of controlling
the atomic processes by using two-color (or multicolor) fields and manipulating the
laser parameters such as relative phase and intensities ratio
between the monochromatic components of the fields, polarizations, etc. <cit.>.
The physical mechanism for controlling laser-assisted and laser-induced atomic processes using two-color
fields resides in the different pathways, due to monochromatic components of the field,
leading to the same final state of the atomic system involving
different numbers of exchanged (absorbed and/or emitted) photons.
Thus, the quantum-mechanical interference occurring among different transition amplitudes,
which contribute to the same final state but through distinct pathways,
offers the possibility of controlling laser-assisted and laser-induced atomic processes.
Different kinds of electronic transitions were investigated, such as
laser-assisted elastic and inelastic electron-atom scattering in a laser field (free-free transitions),
laser-induced excitation of atoms (bound-bound transitions),
or laser-induced ionization of atoms (bound-free transitions).
It is well known that laser-assisted electron-atom scattering is a subject of particular interest
in applied domains such as laser and plasma physics <cit.>, astrophysics <cit.>,
or fundamental atomic collision theory <cit.>.
Detailed reports on the laser-assisted electron-atom collisions can be found in several review papers
<cit.> and books <cit.>.
Recently, the study of electron-atom scattering in the presence of a laser field
has attracted considerable attention <cit.>,
especially because of the progress of experimental techniques.
A two-color bicircular electromagnetic field represents a superposition of two circularly
polarized (CP) fields which rotate in the same plane, with different photon energies
and the same helicities (corotating CP fields) or opposite helicities (counter-rotating CP fields).
More than 20 years ago, it was suggested that CP high-order harmonics can be generated
by counter-rotating bicircular fields for a zero-range-potential model atom <cit.>
and it was experimentally shown that the emission of these harmonics is very efficient compared to
corotating CP and linearly polarized (LP) fields <cit.>.
The recent experimental confirmation that the generated high harmonics are
circularly polarized <cit.>,
allowing the direct generation of CP soft x-ray pulses,
has generated an increasing interest in studying different
laser-induced processes by co- and counter-rotating CP laser fields such as
strong-field ionization <cit.>, nonsequential double ionization <cit.>,
or laser-assisted electron-ion recombination <cit.>.
In the present paper, we study the elastic scattering of fast electrons by hydrogen atoms
in their ground state in the presence of two-color bicircular laser fields.
Obviously, the physical mechanism occurring in laser-assisted processes for two-color fields
is the interference among different photon pathways leading to the same final state, and by
using CP fields another parameter (the dynamical phase
of the scattering process) plays an important role.
Theoretical studies involving monochromatic CP fields with the atomic dressing taken
into account in second-order of time-dependent perturbation theory (TDPT) were performed
for laser-assisted electron-hydrogen scattering by Cionga and coworkers <cit.>.
In contrast to these past studies, we investigate here a different regime of a bichromatic CP field
where the monochromatic components of the field rotate in the same plane
and in the same or opposite directions.
To our knowledge, there are no other theoretical studies regarding elastic laser-assisted
electron-atom scattering processes in a two-color bicircular laser field which include
the atomic dressing in second-order TDPT.
The paper is structured as follows.
In Sec. <ref> we present the semiperturbative method used
to obtain the analytical formulas for the differential cross section (DCS)
for a two-color CP laser field with different polarizations.
Because the scattering process under investigation is a very complex problem, the
theoretical approach poses considerable difficulties and few assumptions are made.
Moderate field intensities below 10^13 W/cm^2 and
fast projectile electrons are considered in order
to safely neglect the second-order Born approximation
in the scattering potential as well as the exchange scattering <cit.>.
The interaction between the projectile electrons and the laser field
is described by Gordon-Volkov wave functions <cit.>, whereas
the interaction of the hydrogen atom with the laser field is described
within the second-order TDPT in the field <cit.>.
The derived analytical formula for the DCS, which includes the second-order atomic
dressing effects, is valid for two-color fields with circular and/or linear polarizations.
In Sec. <ref>, we provide simple analytic formulas in a closed form for DCSs
in the laser-assisted elastic scattering in the weak field limit,
for the superposition of the fundamental laser field with its second or third harmonic,
which exhibit an explicit dependence on the field polarizations.
The numerical results are discussed in Sec. <ref>, where
the DCSs by corotating and counter-rotating CP fields
are compared and analyzed as a function of the scattering and azimuthal
angles of the projectile electron at different intensity ratios of the monochromatic
components of the bicircular laser field.
Atomic units (a.u.) are used throughout this paper unless otherwise specified.
ยง SEMIPERTURBATIVE THEORY AT MODERATE LASER INTENSITIES FOR A TWO-COLOR BICIRCULAR LASER FIELD
The laser-assisted scattering of electrons by hydrogen atoms in a two-color laser field is
formally represented as
e^-(E_p,๐ฉ)
+ H(1s) +N_1i ฮณ (ฯ_1, ฮต_1)
+ N_mi ฮณ (ฯ_m, ฮต_m)
โ
e^-(E_p^',๐ฉ^')
+ H(1s) + N_1f ฮณ (ฯ_1, ฮต_1)
+ N_mf ฮณ (ฯ_m, ฮต_m ),
where E_p (E_p^') and ๐ฉ (๐ฉ^') denote
the kinetic energy and the momentum vector of the incident (scattered) projectile electron.
ฮณ (ฯ_k, ฮต_k) represents a photon with the energy ฯ_k
and the unit polarization vector ฮต_k, and
N_k=N_ki-N_kf denotes the net number of exchanged photons
between the projectile-atom system and each monochromatic component of the two-color laser field,
with k=1 and m.
The two-color laser field is treated classically and is considered as a
superposition of two coplanar CP electric fields,
E (t) =
i/2[
E_01ฮต_1 exp(-i ฯ_1 t)
+ E_0mฮต_m exp(-i ฯ_m t)]+ c.c.,
where E_0k represents the peak amplitude of the monochromatic components of electric field
and c.c. denotes the complex conjugate of the right-hand-side term.
For a bicircular field the polarization vector of the first laser beam is defined as
ฮต_1 =ฮต_+= ( ๐_j + i ๐_l ) /โ(2),
where ๐_j and ๐_l are the unit vectors along two orthogonal directions,
and the second laser beam has the same polarization
ฮต_m = ฮต_+ (the so-called corotating polarization),
or is circularly polarized in the opposite direction
ฮต_m = ฮต_-=(e_j-i e_l)/โ(2)
(the so-called counter-rotating polarization).
Thus, the two-color bicircular electric field vector can be easily calculated as
E(t) =
๐_j( E_01sinฯ_1 t + E_0msinฯ_m t)/โ(2)
-๐_l( E_01cosฯ_1 t +ฮท E_0mcosฯ_m t)/โ(2),
where the helicity takes the values ฮท =+1 for corotating CP fields
and ฮท =-1 for counter-rotating CP fields.
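A short numerical sketch of this field is given below (Python/numpy; the helper and the unit amplitudes and frequency are our own choices). It also verifies, for the counter-rotating (ω_1, 2ω_1) case with equal amplitudes, the three-fold rotational symmetry of the field discussed later in Sec. <ref>.

import numpy as np

def bicircular_field(t, E01, E0m, w1, m=2, eta=-1):
    # Components along e_j and e_l of the two-color field defined above;
    # eta = +1 for corotating and eta = -1 for counter-rotating components.
    wm = m * w1
    Ej = (E01 * np.sin(w1 * t) + E0m * np.sin(wm * t)) / np.sqrt(2.0)
    El = -(E01 * np.cos(w1 * t) + eta * E0m * np.cos(wm * t)) / np.sqrt(2.0)
    return Ej, El

# Counter-rotating (w1, 2*w1) with equal amplitudes: E(t + T1/3) equals E(t)
# rotated by 2*pi/3 in the polarization plane (the threefold pattern of Fig. <ref>).
w1 = 1.0
T1 = 2.0 * np.pi / w1
t = np.linspace(0.0, T1, 2000)
Ej, El = bicircular_field(t, 1.0, 1.0, w1)
Ejs, Els = bicircular_field(t + T1 / 3.0, 1.0, 1.0, w1)
c, s = np.cos(2.0 * np.pi / 3.0), np.sin(2.0 * np.pi / 3.0)
assert np.allclose(Ejs, c * Ej - s * El) and np.allclose(Els, s * Ej + c * El)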
ยง.ยง Projectile electron and atomic wave functions
We consider moderate laser intensities and fast projectiles,
which imply that the strength of the laser field is lower than the Coulomb field strength
experienced by an electron in the first Bohr orbit of the hydrogen atom and the
energy of the projectile electron is much higher than the energy of
the bound electron in the first Bohr orbit <cit.>.
The interaction between the projectile electron and the two-color laser field is treated
by a Gordon-Volkov wave function <cit.>, and
the initial and final states of the projectile electron are given by
ฯ_๐ฉ (๐,t)=(2ฯ )^-3/2exp[ -iE_pt+i๐ฉยท๐
-i ๐ฉยทฮฑ_1(t) -i ๐ฉยทฮฑ_m(t) ] ,
where ๐ denotes the position vector and
ฮฑ_k(t), with k=1 and m, describes the classical oscillation
motion of the projectile electron in the bicircular electric fields defined by Eq. (<ref>),
ฮฑ_1(t) = ฮฑ_01( ๐_jsinฯ_1 t + ๐_lcosฯ_1 t )/โ(2),
ฮฑ_m(t) = ฮฑ_0m( ๐_jsinฯ_m t +ฮท ๐_lcosฯ_m t )/โ(2),
where ฮฑ_0k = โ(I_k)/ ฯ_k^2 is the peak amplitude
and I_k= E_0k^2 is the peak laser intensity.
In Eq. (<ref>) the terms which are proportional to the ponderomotive
energy U_p,k= I_k/ 4 ฯ_k^2 are neglected
since the calculations presented in this paper are made at
moderate field intensities below 10^13 W/cm^2.
For example, at a field intensity of 10^12 W/cm^2 and a photon energy
of 1.55 eV, the ponderomotive energy
is about 0.06 eV and therefore is negligible in
comparison with the photon and projectile energies.
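These orders of magnitude are easy to verify; a minimal sketch in atomic units (with rounded conversion factors) reproduces U_p of about 0.06 eV and, for later reference, the quiver amplitude of about 1.65 a.u. quoted in the numerical section.

# Quick order-of-magnitude check in atomic units (a.u.):
# 1 a.u. of intensity = 3.509e16 W/cm^2, 1 hartree = 27.211 eV.
I_au = 1.0e12 / 3.509e16            # peak intensity in a.u.
w_au = 1.55 / 27.211                # photon energy in a.u.
U_p = I_au / (4.0 * w_au**2)        # ponderomotive energy U_p = I / (4 w^2)
alpha_0 = I_au**0.5 / w_au**2       # quiver amplitude alpha_0 = sqrt(I) / w^2
print(U_p * 27.211, alpha_0)        # ~0.06 eV and ~1.65 a.u.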
At moderate field strengths the interaction of the hydrogen atom
with the two-color laser field is considered within the second-order TDPT
and an approximate solution for the wave function in the second-order TDPT
for an electron bound to a Coulomb potential in the presence of an
electric field, Eq. (<ref>), is written as
ฮจ _1s( ๐ซ, t) =
exp(-i E_1st)[
ฯ _1s (๐ซ,t) + ฯ_1s^(1)( ๐ซ,t)
+ ฯ_1s^(2)( ๐ซ,t)
]
,
where r represents the position vector of the bound electron,
E_1s is the energy of the ground state, ฯ _1s
is the unperturbed wave function of the ground state, and
ฯ_1s^(1) and ฯ_1s^(2) represent the first- and second-order
radiative corrections to the atomic wave function.
We employ the following expression of the first-order correction,
ฯ_1s^(1)(๐ซ,t) =-โ_k=1,mฮฑ_0kฯ_k/2[
ฮต_kยท๐ฐ_100(ฮฉ^+_k;๐ซ ) exp(-iฯ_k t) +
ฮต_k^* ยท๐ฐ_100(ฮฉ^-_k;๐ซ )exp(iฯ_k t)]
,
where w_100 is the linear-response vector <cit.>,
which depends on the energies ฮฉ_k^ยฑ = E_1sยฑฯ_k,
that is expressed as
w_100(ฮฉ ;๐ซ)
=i (4ฯ)^-1/2 B_101 (ฮฉ;r) ๐ซฬ.
The radial function B_101 is calculated
using the Coulomb Green's function including both
bound and continuum eigenstates <cit.> and
๐ซฬ = ๐ซ /| ๐ซ|.
The second-order correction to the atomic wave function ฯ_1s^(2)
is written in terms of the quadratic response tensors <cit.> as,
w_jl,100(ฮฉ',ฮฉ ;๐ซ)
=1/6 โ(ฯ){3x_j x_l/r^2 B_10^21 (ฮฉ',ฮฉ ;๐ซ)
+ฮด_jl [ B_10^01(ฮฉ',ฮฉ ;๐ซ)
- B_10^21(ฮฉ',ฮฉ ;๐ซ) ]
}
,
and
w_jl, 100 (E_1s, ฮฉ ;๐ซ) =
1/2lim_ฯตโ 0[ w_jl, 100 ( E_1s+ฯต, ฮฉ;๐ซ) +
w_jl, 100 (E_1s-ฯต, ฮฉ;๐ซ) ],
where the radial functions B_10^01 and B_10^21
are calculated in Ref. <cit.>, with j and l =1,2,3.
The explicit form of the second-order radiative correction ฯ_1s^(2)
for a two-color laser field is given in Appendix <ref>.
ยง.ยง Scattering matrix
To proceed, once we have obtained the atomic and projectile electron wave function
in the laser field we are able to derive the scattering matrix
for the electron-atom scattering in the static potential
V(r,R)=-1/R+ 1/|R-r|.
We assume fast projectile electrons such that the scattering process can be
well treated within the first-order Born approximation
in the scattering potential V(r, R)
and we use a semiperturbative treatment for the
scattering process similar to the one developed by Byron and Joachain <cit.>.
The scattering matrix element is calculated at high projectile energies,
E_p>100 eV, where the exchange effects can be safely neglected, as
S_fi = -i โซ_-โ^+โ dt
โจฯ_๐ฉ^'( ๐,t)
ฮจ_1s(๐ซ,t) |V(r,R) |
ฯ_๐ฉ( ๐, t)
ฮจ_1s(๐ซ,t)
โฉ ,
where ฯ_๐ฉ_i and ฯ_๐ฉ^' are the initial and final
Gordon-Volkov wave functions of the projectile electron in the two-color laser field
and ฮจ_1s represents the wave function of the bound electron in the laser field,
given by Eqs. (<ref>) and (<ref>), respectively.
For commensurate photon energies, ฯ_m =m ฯ_1,
the energies of the projectile electron satisfy the conservation relation
E_p^' - E_p = N ฯ_1, with N ฯ_1 โก N_1 ฯ_1 +N_m ฯ_m.
Using the Jacobi-Anger expansion formula
exp(i a sinฯ t) =โ_N J_N(a) exp(i N ฯ t),
we develop the field dependent part of the Gordon-Volkov wave functions in the scattering matrix,
Eq. (<ref>), in terms of the phase-dependent generalized Bessel functions,
B_N, <cit.>, as
exp[-i ฮฑ_1(t) ยท๐ช -i ฮฑ_m(t) ยท๐ช] =
โ_N=-โ^+โ B_N(R_1,R_m;ฯ_1,ฯ_m)
exp(-i N ฯ_1 t +i N ฯ_1),
where
B_N(R_1,R_m;ฯ_1,ฯ_m)
=โ_l=-โ^+โ
J_N-ml(R_1) J_l(R_m)exp[-i l(mฯ_1- ฯ_m)].
Here ๐ช denotes the momentum transfer vector
of the projectile electron, i.e., ๐ช= ๐ฉ - ๐ฉ^',
and the arguments of the generalized Bessel function are
R_k= ฮฑ_0k|ฮต_k ยท๐ช |
and ฯ_k, where the dynamical phase is defined as
e^i ฯ_k = (ฮต_kยท๐ช)
/ |ฮต_kยท๐ช |, with k=1 and m.
Specifically, for a CP field we obtain
R_k= ฮฑ_0kโ((๐_jยท๐ช)^2+(๐_lยท๐ช)^2)/โ(2) and ฯ_k=arctan(๐_lยท๐ช)/(๐_jยท๐ช) +sฯ,
where s is an integer.
Obviously, a change of helicity of the CP field, i.e., ฮตโฮต^*,
leads to a change of the sign of the dynamical phase, ฯ_kโ -ฯ_k.
In contrast, for a LP field the arguments of the B_N function are simply given by
R_k^LP=ฮฑ_0k | ๐_jยท๐ช | and ฯ_k^LP=sฯ.
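For completeness, B_N can be evaluated numerically in a straightforward way; the sketch below (Python with scipy, our own helper, truncating the sum at |l| <= l_max since J_l(R) is negligible once |l| greatly exceeds R) is reused in later sketches.

import numpy as np
from scipy.special import jv

def gen_bessel(N, R1, Rm, phi1, phim, m=2, l_max=40):
    # B_N(R1, Rm; phi1, phim) = sum_l J_{N - m l}(R1) J_l(Rm) exp[-i l (m phi1 - phim)]
    l = np.arange(-l_max, l_max + 1)
    return np.sum(jv(N - m * l, R1) * jv(l, Rm) * np.exp(-1j * l * (m * phi1 - phim)))

# Consistency check: for Rm -> 0 only the l = 0 term survives, so B_N -> J_N(R1).
assert np.isclose(gen_bessel(2, 1.3, 0.0, 0.4, 0.9), jv(2, 1.3))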
By substituting Eqs. (<ref>),(<ref>) and (<ref>) into
Eq. (<ref>) we obtain, after integrating over time and
projectile electron coordinate, the scattering matrix
S_fi for elastic electron-hydrogen collisions
in a two-color laser field,
S_fi =
- 2ฯ i โ_N=-โ^+โ T_fi(N) ฮด( E_p^' - E_p - N ฯ_1) ,
where the Dirac ฮด function assures the energy conservation, and
T_fi (N) represents the total transition amplitude
for the elastic scattering process, which can be written as the sum of three terms
T_fi(N) = T^(0)(N) + T^(1)(N)+ T^(2)(N)
.
Finally, the nonlinear DCS for the scattering of the projectile in the solid angle
ฮฉ with the energy of the projectile modified by Nฯ_1, reads
dฯ(N)/dฮฉ =
(2ฯ)^4 p^'(N)/p| T^(0)(N) + T^(1)(N)+ T^(2)(N) |^2
,
in which the final momentum of the projectile is calculated as
p^' (N)= ( p^2 + 2N ฯ_1 ) ^1/2.
The derivation of the transition amplitudes T^(i)(N), (i=0,1,2)
is briefly described in the next subsections.
ยง.ยง.ยง Electronic transition matrix elements
The first term, T^(0), on the right-hand side of Eq. (<ref>)
represents the elastic transition amplitude due to projectile electron contribution
in which the atomic dressing is neglected,
T^(0)(N) =
B_N (R_1,R_m;ฯ_1,ฯ_m)
โจฯ_1s|F(๐ช,๐ซ)|ฯ_1sโฉ ,
where the form factor is given by
F(๐ช,๐ซ) =
[ exp(i ๐ชยท๐ซ) - 1 ] /(2 ฯ^2 q^2).
By substituting the partial-wave expansion of the exponential term
exp(i ๐ชยท๐ซ) in the form factor and
performing the angular integration,
the electronic transition amplitude T^(0) reads
T^(0) (N) =- 1/(2ฯ)^2 f_el^B_1(q) B_N(R_1,R_m;ฯ_1,ฯ_m),
where the term f_el^B_1(q) = 2(q^2+8) /(q^2+4)^2
represents the first-order Born approximation of the
scattering amplitude for field-free elastic scattering process,
and the laser field dependence is contained in the arguments of the
generalized Bessel function only.
If the atomic dressing is negligible in Eq. (<ref>),
the DCS for N-photon exchange is approximated as
dฯ(N)/dฮฉโ4p'/p (q^2+8)^2 /(q^2+4)^4|B_N(R_1,R_m;ฯ_1,ฯ_m) |^2
,
which is the equivalent of the Bunkin and Fedorov formula <cit.>,
for fast electron scattering on hydrogen atoms in a two-color CP laser field,
in which the laser-atom interaction is neglected.
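Continuing the sketch above, the dressing-free cross section can be evaluated directly from this formula; the caller must supply a consistent set of kinematic quantities (q, R_1, R_m and the dynamical phases built from the same p, p' and polarization geometry), for which a helper using the explicit scattering geometry is sketched later in Sec. <ref>.

def dcs_no_dressing(N, p, q, R1, Rm, phi1, phim, m=2, w1=0.05695):
    # First Born approximation with the atomic dressing neglected (atomic units).
    # q, R1, Rm, phi1, phim must come from the same kinematics, q = |p - p'|.
    p_prime = (p**2 + 2.0 * N * w1) ** 0.5
    B = gen_bessel(N, R1, Rm, phi1, phim, m=m)
    return 4.0 * (p_prime / p) * (q**2 + 8.0)**2 / (q**2 + 4.0)**4 * abs(B)**2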
ยง.ยง.ยง First-order atomic transition matrix elements
The second term, T^(1), on the right-hand side of the transition amplitude,
Eq. (<ref>), represents the first-order atomic transition amplitude and it
occurs due to alteration of the atomic state by the two-color laser field, which is
described by the first-order radiative correction ฯ^(1)(๐ซ,t).
Obviously, only one photon is emitted or absorbed
between the two-color laser field and the bound electron.
After some algebra, the atomic transition amplitude T^(1) is expressed as
T^(1)(N) = -โ_k=1,mฮฑ_0kฯ_k/2[ B_N-k M_at^(1) ( ฯ_k ,๐ช) e^-ikฯ_1+
B_N+k M_at^(1) ( -ฯ_k, ๐ช ) e^ikฯ_1],
where M_at^(1)(ฯ_k,๐ช) denotes a specific
first-order atomic transition matrix element related to one-photon absorption
M_at^(1)( ฯ_k,๐ช) = โ_j=1^3ฮต_kj
[โจฯ _1s|F(๐ช)|
w_j,100( ฮฉ_k^+) โฉ
+โจw_j,100
( ฮฉ_k^- )|F(๐ช)|ฯ _1sโฉ],
whereas the transition matrix element M_at^(1) ( -ฯ_k, ๐ช )
is related to one-photon emission and is obtained from
M_at^(1) (ฯ_k, ๐ช )
with the replacements ฯ_kโ -ฯ_k (i.e., ฮฉ_k^ยฑโฮฉ_k^โ)
and ฮต_k โฮต_k^*.
The first term on the right-hand side of Eq. (<ref>)
describes first the atom interacting with the laser field followed by
the projectile electron-atom interaction, while
in the second term the projectile electron-atom interaction precedes the atom-laser interaction.
For the sake of simplicity, the arguments of the generalized Bessel functions
B_N (R_1,R_m;ฯ_1,ฯ_m) are now dropped off in Eq. (<ref>) and
throughout this paper.
By using the partial-wave expansion of the exponential term
exp(i ๐ชยท๐ซ )
and the definition of ๐ฐ_100 given by Eq. (<ref>),
after performing the angular integration,
we obtain for the atomic transition matrix element, Eq. (<ref>),
M_at^(1)( ฯ_k,๐ช) =
-ฮต_k ยท๐ชฬ/2ฯ^2q^2 [ J_101^a(ฮฉ_k^+,q) - J_101^a(ฮฉ_k^-,q)]
,
in which J_101^a is an atomic radial integral defined as
J_101^a( ฮฉ_k, q ) =
โซ_0^+โ dr r^2 R_10(r) j _1(qr)
B_101 ( ฮฉ_k ; r ) ,
where j _1(qr) represents the spherical Bessel function of first kind
and ๐ชฬ=๐ช/|๐ช|.
An analytic expression of J_101^a in terms of hypergeometric functions
is given by Eq. (36) of Ref. <cit.>.
Finally, the atomic transition amplitude, T^(1), is obtained
by substituting Eq. (<ref>) into Eq. (<ref>)
T^(1)(N) = โ_k=1,mฮฑ_0kฯ_k/4ฯ^2q^2 [
(ฮต_k ยท๐ชฬ) B_N-k J_101(ฯ_k,q)
e^-i kฯ_1.
.
- (ฮต_k^*ยท๐ชฬ) B_N+k J_101(ฯ_k,q)
e^ikฯ_1],
where the radial integral J_101 is calculated as
the difference between two atomic radial integrals given by Eq. (<ref>),
J_101(ฯ_k,q)=
J_101^a(ฮฉ_k^+,q)- J_101^a(ฮฉ_k^-,q).
In addition, in the low-photon energy limit (ฯโช E_n -E_1s),
the atomic radial integral can be approximated
as J_101(ฯ,q) โฯ q ฮฑ_d (q),
where ฮฑ_d represents the dynamic dipole polarizability
due to polarization of the target by the projectile electron <cit.>.
The analytical form of the dynamic dipole polarizability within the
first-order TDPT <cit.> in low-photon energy limit is calculated as
ฮฑ_d (q,Z) = ฮฑ_s(Z) /3[ 1+ q^2/(2Z)^2 ]^3 [ 1+2/1+ q^2/(2Z)^2] ,
where ฮฑ_s(Z)= 4.5 Z^-4 denotes the static dipole
polarizability of a H-like ion in the ground state <cit.>.
We point out that Eq. (<ref>) describes the target dressing effects at both
small and large scattering angles in contrast to the dynamic dipole
polarizability calculated in Ref. <cit.> for the hydrogen atom,
ฮฑ_d (q) = ฮฑ_s /(1+ q^2/4)^3,
which gives accurate results only at small scattering angles.
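A one-line numerical version of this polarizability (our own helper; atomic units) makes the static limit explicit.

def alpha_dynamic(q, Z=1.0):
    # Low-frequency dynamic dipole polarizability; alpha_s = 4.5 / Z^4.
    x = 1.0 + q**2 / (2.0 * Z)**2
    return (4.5 / Z**4) / (3.0 * x**3) * (1.0 + 2.0 / x)

assert abs(alpha_dynamic(0.0) - 4.5) < 1e-12   # q -> 0 recovers the static value for hydrogen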
ยง.ยง.ยง Second-order atomic transition matrix elements
The third term, T^(2), on the right-hand side of the transition amplitude,
Eq. (<ref>), represents the second-order atomic transition amplitude
and occurs due to modification of the atomic state by the laser field, described by
the second-order radiative correction ฯ^(2)(๐ซ,t),
which is given in Appendix <ref>.
In this approach only two photons are exchanged (absorbed, emitted, or absorbed and emitted)
between the two-color CP laser field and the bound electron.
After some calculation the atomic transition amplitude T^(2) reads
T^(2)(N) = โ_k=1,mฮฑ_0k^2 ฯ_k ^2 /4{
B_N-2k M_at^(2)
( ฯ_k, ๐ช) e^-2ikฯ_1.
+
B_N+2k M_at^(2)
( - ฯ_k, ๐ช ) e^2ikฯ_1
.
+ B_N[ M_at^(2) ( E_1s,ฯ_k )
+ M_at^(2) ( E_1s,-ฯ_k )]
}
+ ฮฑ_01ฮฑ_0mฯ_1 ฯ_m /4[
B_N-m-1 N_at^(2)
( ฯ_1, ฯ_m, ๐ช) e^-i(m+1)ฯ_1.
+
B_N+m+1 N_at^(2)
( -ฯ_1, -ฯ_m, ๐ช) e^i(m+1)ฯ_1
+ B_N-m+1 N_at^(2)
(- ฯ_1, ฯ_m, ๐ช)e^-i(m-1)ฯ_1
.
+ B_N+m-1 N_at^(2)
( ฯ_1, -ฯ_m, ๐ช)e^i(m-1)ฯ_1]
.
We point out that the explicit or implicit presence of the dynamical phase factors
e^i ฯ_1 and e^ i ฯ_m
in the electronic and atomic transitions amplitudes,
Eqs. (<ref>), (<ref>) and (<ref>), can give different interference terms in DCS
for corotating in comparison to counter-rotating CP field.
Specific second-order atomic transition matrix elements related to two-photon exchange
appear in Eq. (<ref>).
Thus, M_at^(2) is related to absorption of two identical photons of energy
ฯ_k and complex polarization ฮต_k,
M_at^(2)( ฯ_k,๐ช ) = โ_j,l=1^3ฮต_kjฮต_kl[
โจ w_j,100(ฮฉ_k^-)|F(๐ช)| w_l,100(ฮฉ_k^+)โฉ.
. +
โจฯ_1s|F(๐ช)| w_jl,100 ( ฮฉ_k^' +, ฮฉ_k^+) โฉ
+ โจ w_jl, 100( ฮฉ_k^' -, ฮฉ_k^-)
|F(๐ช)|ฯ_1sโฉ],
with ฮฉ_k^' ยฑ=E_1sยฑ 2 ฯ_k, (k =1 and m).
The atomic transition matrix element M_at^(2) (-ฯ_k, ๐ช ),
which is related to emission of two identical photons, is obtained from Eq. (<ref>)
by replacing ฯ_kโ -ฯ_k and the components of the polarization vectors
ฮต_kj(l)โฮต_kj(l)^*.
M_at^(2)( ฯ_k,๐ช ),
describes the absorption followed by emission of the same photon,
and is derived using the tensor w_ij,100(E_1s,ฮฉ_k),
M_at^(2)( ฯ_k,๐ช ) = โ_j,l=1^3ฮต_kj^*ฮต_kl[
โจ w_j,100(ฮฉ_k^+)|F(๐ช)| w_l,100(ฮฉ_k^+)โฉ.
. +
โจฯ_1s|F(๐ช)|w_jl,100 ( E_1s, ฮฉ_k^+) โฉ
+ โจw_jl, 100( E_1s, ฮฉ_k^-) |F(๐ช)|ฯ_1sโฉ].
Similarly, M_at^(2)( -ฯ_k,๐ช ),
which describes the emission followed by absorption of the same photon,
is derived from Eq. (<ref>) by replacing ฯ_kโ -ฯ_k and polarizations ฮต_kj(l)โฮต_kl(j)^*.
N_at^(2)( ฯ_1, ฯ_m, ๐ช) describes the absorption
of two distinct photons of energies ฯ_1 and ฯ_m,
N_at^(2)
( ฯ_1, ฯ_m, ๐ช)
= ( 1+ P_1m )
โ_j,l=1^3ฮต_1jฮต_ml[โจ w_j,100(ฮฉ_1^-)|F(๐ช)|
w_l,100(ฮฉ_m^+)โฉ.
.
+โจฯ_1s|F(๐ช)|
w_jl,100 ( ฮฉ^' +, ฮฉ_m^+) โฉ +
โจ w_jl, 100 ( ฮฉ^' -, ฮฉ_m^- )
|F(๐ช)|ฯ_1sโฉ]
,
in which ฮฉ^' ยฑ = E_1sยฑ ( ฯ_1 +ฯ_m), where
P_1m denotes a permutation operator that interchanges the photon energies
ฯ_1(m)โฯ_m(1) and polarization vectors
ฮต_1(m)โฮต_m(1),
whereas N_at^(2) (- ฯ_1, -ฯ_m, ๐ช)
is connected to emission of two distinct photons of energies
ฯ_1 and ฯ_m, and is calculated from Eq. (<ref>) by replacing
ฯ_kโ -ฯ_k and ฮต_k โฮต_k^* (k=1,m).
N_at^(2) (- ฯ_1, ฯ_m, ๐ช)
is related to one-photon emission and one-photon absorption of two distinct photons of energies
ฯ_1 and ฯ_m, and is as well obtained from Eq. (<ref>) by replacing
ฯ_1โ -ฯ_1 and ฮต_1 โฮต_1^*,
N_at^(2)
(- ฯ_1, ฯ_m, ๐ช)
= ( 1+ P'_1m )
โ_j,l=1^3ฮต_1j^* ฮต_ml[โจ w_j,100(ฮฉ_1^+)|F(๐ช)|
w_l,100(ฮฉ_m^+)โฉ.
.
+โจฯ_1s|F(๐ช)|
w_jl,100 ( ฮฉ^ -, ฮฉ_m^+) โฉ +
โจ w_jl, 100 ( ฮฉ^ +, ฮฉ_m^- )
|F(๐ช)|ฯ_1sโฉ]
,
in which ฮฉ^ยฑ = E_1sยฑ ( ฯ_1 -ฯ_m) with m โ 1,
where P'_1m denotes a permutation operator that interchanges the photon energies
ฯ_1(m)โ -ฯ_m(1) and polarizations
ฮต_1^* โฮต_m and
ฮต_mโฮต_1^*.
The analytic expressions of the second-order atomic transition matrix elements
for two identical photons after performing the angular integration are
given by Eqs. (A2) and (51) of Ref. <cit.>
M_at^(2)(ฯ_k,๐ช)=
(ฮต_k ยท๐ชฬ)^2/2 ฯ^2 q^2 Q(ฯ_k,q)
+ฮต_k^2/2 ฯ^2 q^2 P(ฯ_k,q)
,
and
M_at^(2)(ฯ_k,๐ช)=
|ฮต_k ยท๐ชฬ|^2/2 ฯ^2 q^2 Q(ฯ_k,q)+
1/2 ฯ^2 q^2 P(ฯ_k,q)
,
in which ฮต_k^2=0 for a CP field
and ฮต_k^2=1 for a LP field, where the specific
expressions of the polarization-invariant atomic radial integrals
P and Q for two-photon absorption or emission are given
by Eqs. (A3) and (A4) of Ref. <cit.>.
A general form of the second-order atomic transition matrix elements for the exchange
of two distinct photons after performing the angular integration can be written as
N_at^(2)(ฯ_j,ฯ_l,๐ช)=
(ฮต_jยท๐ชฬ)
(ฮต_l ยท๐ชฬ)/2 ฯ^2 q^2 Q(ฯ_j,ฯ_l,q)
+ฮต_j ยทฮต_l/2 ฯ^2 q^2 P(ฯ_j,ฯ_l,q)
,
with the replacements ฯ_kโ -ฯ_k and
ฮต_kโฮต_k^* if the photon k is emitted (k=j,l).
Note that for corotating bicircular fields ฮต_+^2=ฮต_-^2=0,
while for counter-rotating bicircular fields
ฮต_+ยทฮต_-=1.
The specific atomic radial integrals P and Q depend on the photon energies
ฯ_j and ฯ_l, and the amplitude of the momentum transfer vector, q.
We point out that the general structure of Eq. (<ref>)
is also similar for other processes like
the elastic scattering of photons by hydrogen atoms <cit.>,
two-photon bremsstrahlung <cit.>,
elastic x-ray scattering by ground-state atoms <cit.>,
two-photon ionization of hydrogen <cit.>, or
two-photon double ionization <cit.>,
with the vector ๐ชฬ replaced by specific vectors characteristic to each particular process.
ยง TWO-PHOTON SCATTERING PROCESSES IN THE WEAK FIELD DOMAIN
We aim to provide simple closed-form analytical formulas for the DCSs that are easy to handle and give
deeper physical insight into electron-hydrogen scattering in a
two-color bicircular laser field.
However, we note that at high laser intensities
in the domain where the atomic dressing is non-negligible
the calculations of the multiphoton processes (|N| > 2)
require that the laser-atom interaction should be treated at least
to third order in the field.
Whenever the argument of the Bessel function of the first kind is small, i.e.,
R_1โช 1 and R_mโช 1, which is satisfied at low laser intensities or
at small scattering angles with moderate laser intensities,
approximate expressions of the generalized Bessel functions,
B_N(R_1,R_m,ฯ_1, ฯ_m), can be used according to Appendix <ref>.
For N > 0, by keeping the leading terms in the laser fields in the
transition amplitudes T^(1) and T^(2), given by Eqs. (<ref>) and (<ref>),
and neglecting the terms which are proportional to the higher powers of
the fields, we get the following approximation formula for the total transition amplitude:
T_m(N) โ
-1/4ฯ^2f_el^B_1(q) B_N
+โ_k=1,mฮฑ_0kฯ_k/4ฯ^2q^3
(ฮต_k ยท๐ช) B_N-k J_101(ฯ_k,q)
e^-i kฯ_1
+
โ_k=1,mฮฑ_0k^2 ฯ_k ^2 /4
B_N-2k M_at^(2)
( ฯ_k, ๐ช) e^-2ikฯ_1
+
ฮฑ_01ฮฑ_0mฯ_1 ฯ_m /4
B_N-m-1 N_at^(2)
( ฯ_1, ฯ_m, ๐ช)e^-i(m+1)ฯ_1
+
ฮฑ_01ฮฑ_0mฯ_1 ฯ_m /4
B_N-m+1 N_at^(2)
(- ฯ_1, ฯ_m, ๐ช)e^-i(m-1)ฯ_1
.
The next step is to substitute in Eq. (<ref>) the approximate expressions of the
generalized Bessel functions, B_N, which are given in the Appendix <ref>,
and to analyze two simple cases of even and odd harmonics, m=2 and 3.
The total transition amplitude that includes the first- and second-order
dressings in Eq. (<ref>) will be written in the next two subsections in a form that
allows us to analyze the dependence on the dynamical phases ฯ_1 and ฯ_m.
ยง.ยง Case m=2, superposition of a fundamental field and its second harmonic
Therefore, for two-photon absorption (N = 2) when the limit R_1(m)โช 1 is taken
for ฯ_2 = 2ฯ_1, by substituting the appropriate approximation
of the generalized Bessel functions, Eq. (<ref>),
keeping the second-order contributions in the fields,
and neglecting the higher powers of the fields the total transition
amplitude in the weak field domain is obtained from Eq. (<ref>) as
T_m=2(N=2) โฮฑ_01^2 A_1(ฯ_1,q) |ฮต_1 ยท๐ช|^2
+
ฮฑ_02 A_2(ฯ_2,q) |ฮต_2 ยท๐ช|
e^-i(2ฯ_1-ฯ_2) ,
that is the sum of the one- and two-photon transition amplitudes for the processes depicted
in Fig. <ref>(a), modulated by a phase factor.
The first term on the right-hand side describes two-photon absorption (ฯ_1),
whereas the second term describes one-photon absorption (ฯ_2), as schematically
shown in Fig. 1(a), with the amplitudes A_1 and A_2 defined by
A_1(ฯ_1,q) = 1/8ฯ^2[-f_el^B_1/4
+ ฯ_1/q^3 J_101(ฯ_1,q)
+ฯ_1 ^2 /q^4 Q ( ฯ_1, ๐ช)]
,
A_2 (ฯ_2,q) = 1/4ฯ^2[-f_el^B_1/2
+ ฯ_2/q^3 J_101(ฯ_2,q) ].
In the limit of low-photon energies (ฯ_1 and ฯ_2โช E_n -E_1s)
the DCS is approximated as
dฯ_m=2(N=2)/dฮฉโp'/p|
ฮฑ_01 ^2/2| ฮต_1 ยท๐ช |^2
( ฮฑ_d ฯ_1^2/q^2 -f_el^B_1/4)
+
ฮฑ_02| ฮต_2 ยท๐ช|
( ฮฑ_d ฯ_2^2/q^2 -f_el^B_1/2)
e^-i(2ฯ_1- ฯ_2)|^2,
where the dynamic polarizability ฮฑ_d is given by Eq. (<ref>).
Furthermore, if the atomic dressing is negligible in Eq. (<ref>)
the DCS is simply calculated as
dฯ_m=2(N=2)/dฮฉโp'/p (q^2+8)^2 /(q^2+4)^4|
ฮฑ_01 ^2/4| ฮต_1 ยท๐ช |^2
+
ฮฑ_02| ฮต_2 ยท๐ช|
e^-i(2ฯ_1- ฯ_2)|^2,
that is the equivalent of the Bunkin and Fedorov formula <cit.>
for a two-color CP laser field with different polarizations
and commensurate photon energies ฯ_1 and 2ฯ_1.
We note that the transition amplitude and the DCSs,
Eqs. (<ref>) and (<ref>),
have a simple dependence on the dynamical phases,
ฯ_1 and ฯ_2, as e^-i(2 ฯ_1 - ฯ_2),
that modulates the quantum interference between the one- and two-photon processes
depicted in Fig. <ref>(a).
ยง.ยง Case m=3, superposition of a fundamental field and its third harmonic
Similarly, for a two-color laser field that is a superposition of the
fundamental field and its third harmonic
ฯ_3 = 3ฯ_1, we get the total transition amplitude for two-photon absorption
in the weak field domain, R_1(m)โช 1,
T_3(N=2) โ ฮฑ_01^2 C_1(ฯ_1,q) |ฮต_1 ยท๐ช|^2
+
ฮฑ_01ฮฑ_03 C_2(ฯ_1,ฯ_3,q)
|ฮต_1^* ยท๐ช|
|ฮต_3 ยท๐ช| e^-i(3ฯ_1-ฯ_3)
+ฮฑ_01ฮฑ_03 C_3(ฯ_1,ฯ_3,q)
(ฮต_1^* ยทฮต_3)
e^-2iฯ_1,
that is the sum of the two-photon transition amplitudes for the processes depicted
in Fig. <ref>(b), modulated by phase factors.
The first term on the right-hand side describes two-photon absorption (ฯ_1),
while the rest of the terms describe one-photon absorption (ฯ_3) and emission (ฯ_1),
as schematically shown in Fig. <ref>(b),
with the amplitudes C_1, C_2, and C_3 defined by
C_1(ฯ_1,q) = 1/8ฯ^2[
-f_el^B_1/4 +ฯ_1/q^3 J_101(ฯ_1,q)],
C_2(ฯ_1,ฯ_3,q) = 1/8ฯ^2[
f_el^B_1/2 -ฯ_3/q^3 J_101(ฯ_3,q)
+ฯ_1 ฯ_3 / q^4 Q(-ฯ_1,ฯ_3,q)
],
C_3(ฯ_1,ฯ_3,q) = ฯ_1 ฯ_3 /8 ฯ^2 q^2 P(-ฯ_1,ฯ_3,q).
In the limit of low-photon energies (ฯ_1 and ฯ_3โช E_n -E_1s)
the DCS is approximated as
dฯ_m=3(N=2)/dฮฉ โ p'/4p|
ฮฑ_01ฮฑ_03
|ฮต_1 ยท๐ช ||ฮต_3 ยท๐ช|
( ฮฑ_d ฯ_3^2/q^2 - f_el^B_1/2)
e^-i(3ฯ_1- ฯ_3).
.
-
ฮฑ_01^2| ฮต_1 ยท๐ช |^2
( ฮฑ_d ฯ_1^2/q^2 - f_el^B_1/4)
|^2.
Moreover, if the atomic dressing is negligible in Eqs. (<ref>)
the DCS is simply given by
dฯ_m=3(N=2)/dฮฉโp'/p (q^2+8)^2 /(q^2+4)^4ฮฑ_01^2| ฮต_1 ยท๐ช |^2/4|
ฮฑ_03| ฮต_3 ยท๐ช|
e^-i(3ฯ_1- ฯ_3)
-
ฮฑ_01/2| ฮต_1 ยท๐ช |
|^2,
that is the equivalent of the Bunkin and Fedorov formula <cit.>
for a two-color CP laser field with different polarizations
and commensurate photon energies ฯ_1 and 3ฯ_1.
Similarly to the case ฯ_2= 2ฯ_1,
the DCS has a simple dependence on the
dynamical phases, ฯ_1 and ฯ_3, as
e^-i(3 ฯ_1 - ฯ_3), that modulates the quantum interference among
the different two-photon processes depicted in Fig. <ref>(b).
ยง NUMERICAL EXAMPLES AND DISCUSSION
In this section, we present representative results for the
the scattering process described by Eq. (<ref>), in which two photons
are absorbed by the e^- + H(1s) colliding system.
We focus our discussion on two particular field polarizations in which the two-color
laser beams are CP in the (x,y) plane
with one laser beam propagating in the z-axis direction,
ฮต_1 =ฮต_+ =(๐_x+i ๐_y)/โ(2),
while the other laser beam (a) has the same (left-handed) circular polarization
ฮต_m = ฮต_+,
i.e., corotating polarization case (LHCP-LHCP),
or (b) is (right-handed) CP in the opposite direction,
ฮต_m = ฮต_-=(๐_x-i ๐_y)/โ(2),
i.e., counter-rotating polarization case (LHCP-RHCP).
In this polarization geometry for the simple case of equal intensities
of the monochromatic field components, E_01= E_0m,
the bicircular electric field can be written as
E_+ (t) = E_01โ(2)
(๐_x sinฯ_+t -๐_y cosฯ_+t)
cosฯ_-t,
for corotating circular polarizations (ฮต_m = ฮต_+), and
E_- (t) = E_01โ(2)
(๐_x cosฯ_-t -๐_y sinฯ_-t) sinฯ_+t
,
for counter-rotating circular polarizations (ฮต_m = ฮต_-),
in which the energies are defined by
ฯ_ยฑ=ฮท_m ฯ_1/2,
where ฮท_m= m -1 for corotating CP fields and ฮท_m= m +1
for counter-rotating CP fields.
In Fig. <ref> is plotted the temporal dependence of the electric field vectors,
given by Eqs. (<ref>) and (<ref>),
in the polarization plane for two-color CP laser fields of equal intensities
with identical polarizations
ฮต_m = ฮต_+ in the right column, and
ฮต_m = ฮต_- in the left column.
The parameters we employ for Fig. <ref> are I_1=I_m=10^12 W/cm^2,
a fundamental photon energy ฯ_1=1.55 eV, with
ฯ_2=2ฯ_1 in Figs. <ref>(a) and <ref>(b), and
ฯ_3=3ฯ_1 in Figs. <ref>(c) and <ref>(d).
The electric field vectors are invariant with respect to translation in time by
an integer multiple of T_1 /ฮท_m, where T_1 = 2ฯ/ฯ_1 is the fundamental
field optical period, and with respect to rotation in the polarization plane
by an angle ฮฑ_m=2ฯ/ฮท_m around the z axis, such that
E_ยฑ(t + T_1 /ฮท_m) =
R (2ฯ/ฮท_m) E_ยฑ(t),
where R (ฮฑ_m) is a 2 ร 2 rotation matrix with angle ฮฑ_m around the z axis.
The upper sign (+) corresponds to corotating CP fields,
while the lower sign (-) corresponds to counter-rotating CP fields.
For counter-rotating bicircular field with ฯ_2=2ฯ_1,
this temporal symmetry means that
E_-(t + T_1 /3) = R (2ฯ /3) E_-(t),
i.e., the translation in time is one-third of the optical cycle
and the rotation angle in the polarization plane is 2ฯ/3,
which implies a threefold symmetry of the electric field in Fig. <ref>(a),
while for corotating bicircular field the translation in time is T_1
and the rotation angle is 2ฯ in Fig. <ref>(b), such that
E_+(t + T_1 ) = R (2ฯ ) E_+(t).
Next, we consider the scattering geometry depicted in Fig. <ref>
in which the momentum vector of the incident electron ๐ฉ is parallel
to the z axis.
The momentum transfer vector is given by
๐ช=(-p^'sinฮธcosฯ, -p^'sinฮธsinฯ,
p -p^'cosฮธ ),
with the amplitude
q=โ( p^2+ p^'^2 -2 p p^'cosฮธ),
where θ is the scattering angle between the momentum vectors
of the incident and scattered electron, p and p^',
and ฯ is the azimuthal angle of the scattered electron.
In this scattering geometry the scalar product in the argument
of the generalized Bessel functions, R_k, can be simply expressed as
ฮต_ยฑยท๐ช
= - p^'sinฮธ /โ(2) e^ยฑ i ฯ.
The evaluation of the dynamical phases of the CP laser fields gives
ฯ_ยฑ = ฯยฑฯ with
e^ i ฯ_ยฑ = - e^ยฑ i ฯ, and
we observe that a change of the field helicity implies
a change in the sign of the azimuthal angle ฯ.
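The kinematic quantities entering the earlier sketches can be assembled for this geometry as follows (a rough helper of our own; intensities in W/cm^2, angles in radians, everything else in atomic units).

import numpy as np

def cp_kinematics(Ep_eV, N, theta, phi, I1, Im, w1_eV=1.55, m=2, eta=-1):
    # Geometry of Fig. <ref>: p along the z axis, eps_1 = eps_+ in the (x, y) plane,
    # eps_m = eps_+ (eta = +1, corotating) or eps_- (eta = -1, counter-rotating).
    w1 = w1_eV / 27.211
    p = (2.0 * Ep_eV / 27.211) ** 0.5
    pp = (p**2 + 2.0 * N * w1) ** 0.5
    q = (p**2 + pp**2 - 2.0 * p * pp * np.cos(theta)) ** 0.5
    a1 = (I1 / 3.509e16) ** 0.5 / w1**2              # quiver amplitude alpha_01
    am = (Im / 3.509e16) ** 0.5 / (m * w1)**2        # quiver amplitude alpha_0m
    R1 = a1 * pp * np.sin(theta) / 2.0 ** 0.5        # R_k = alpha_0k |eps_k . q|
    Rm = am * pp * np.sin(theta) / 2.0 ** 0.5
    # eps_(+/-) . q = -p' sin(theta) exp(+/- i phi)/sqrt(2), so phi_k = pi +/- phi
    return q, R1, Rm, np.pi + phi, np.pi + eta * phi

# Example use with the earlier dressing-free sketch (E_p = 100 eV, N = 2, theta = 20 deg):
# p = (2.0 * 100 / 27.211) ** 0.5
# dcs = dcs_no_dressing(2, p, *cp_kinematics(100, 2, np.radians(20), 0.3, 1e12, 1e12))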
After some simple algebra, by replacing the exponential term
in the weak field limit of the total transition amplitudes, Eqs. (<ref>) and (<ref>),
as e^ -i (mฯ_1- ฯ_m) = (-1)^m+1 e^- i ฮท_m ฯ,
we obtain
T_m=2(N=2) โฮฑ_01^2 A_1(ฯ_1,q) |ฮต_1 ยท๐ช|^2
-ฮฑ_02 A_2(ฯ_2,q) |ฮต_1 ยท๐ช|
e^-i ฮท_2 ฯ
,
for ฯ_2=2ฯ_1, with the parameter ฮท_2= 1 for two-color left-handed-CP fields
(equal helicities)
or ฮท_2= 3 for two-color left- and right-handed CP fields (opposite helicities), whereas for ฯ_3=3ฯ_1
T_m=3(N=2) โ ฮฑ_01^2 |ฮต_1 ยท๐ช|^2 [ C_1(ฯ_1,q)
+
ฮฑ_03/ฮฑ_01 C_2(ฯ_1,ฯ_3,q) e^-i ฮท_3 ฯ]
+
ฮด_ฮท_3 2 ฮฑ_01ฮฑ_03 C_3(ฯ_1,ฯ_3,q)
e^-2iฯ
,
with the parameter ฮท_3= 2 for two-color left-handed CP fields (equal helicities) or
ฮท_3= 4 for two-color left- and right-handed CP fields (opposite helicities), respectively.
Therefore, the transition amplitudes in the weak field domain, given by
Eqs. (<ref>) and (<ref>), as well
the corresponding DCSs,
depend on the azimuthal angle of the scattered projectile
as e^-i(m - 1)ฯ and are invariant with respect to the transformations
ฯโฯ + 2ฯ/(m - 1), for corotating polarizations.
In contrast, the transition amplitudes and DCSs for counter-rotating CP fields
depend on azimuthal angle as e^-i(m + 1)ฯ
and are invariant with respect to the transformations
ฯโฯ + 2ฯ/(m + 1).
Furthermore, if the atomic dressing is negligible in Eqs. (<ref>) and (<ref>),
the following simple analytic results are obtained for DCSs:
dฯ_m=2(N=2)/dฮฉโp'/p (q^2+8)^2 /(q^2+4)^4ฮฑ_01^2 | ฮต_1 ยท๐ช |^2/16| ฮฑ_01 | ฮต_1 ยท๐ช |
- 4 ฮฑ_02/ฮฑ_01 e^-i ฮท_2 ฯ|^2
,
and
dฯ_m=3(N=2)/dฮฉโp'/p (q^2+8)^2 /(q^2+4)^4ฮฑ_01^4| ฮต_1 ยท๐ช |^4/16|1 + 2 ฮฑ_03/ฮฑ_01 e^-i ฮท_3 ฯ|^2
.
We stress that the analytical formulas obtained for co- and counter-rotating
circular polarizations in the weak-field domain, Eqs. (<ref>) and (<ref>),
or for negligible atomic dressing, Eqs. (<ref>) and (<ref>),
indicate that DCSs are invariant with respect to rotation
of the projectile momentum in the polarization plane
by azimuthal angles 2ฯ/(m - 1) and 2ฯ/(m + 1) about the z axis, respectively.
It is obvious that the DCS by the two-color bicircular laser field with m=2
in Eq. (<ref>) presents the maximal interference between
the first- and second-order transition amplitudes
at the following optimal ratio of the two-color laser intensities
I_2 / I_1 โ I_1 | ฮต_1 ยท๐ช |^2/ ฯ_1^4,
which is very sensitive to the intensity and photon energy of the fundamental field,
and the scattering geometry.
In contrast, for m=3 the maximal interference
in DCS, Eq. (<ref>), between the second-order transition amplitudes occurs
at the optimal ratio of the two-color laser intensities I_3/I_1 ≈ 20.2,
which shows that the interference between the different two-photon processes
is more efficient than for m=2, being independent of the scattering geometry.
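This value follows directly from the structure of the weak-field amplitude; as a quick check, using only α_0k = √(I_k)/ω_k² and ω_3 = 3ω_1, the two interfering terms have equal weight when
2 α_03/α_01 = 1, i.e., 2√(I_3)/(3ω_1)² = √(I_1)/ω_1², which gives I_3/I_1 = (9/2)² = 20.25 ≈ 20.2.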
First, we have checked that the two-photon DCSs for the elastic scattering
of fast electrons by hydrogen atoms in their ground state are in very good agreement
with the earlier analytical data obtained for the particular case of
a monochromatic CP laser field <cit.> and a bichromatic LP laser field <cit.>.
Next, we apply the analytic formulas derived in Sec. <ref> to evaluate the DCSs
for elastic electron scattering by a hydrogen atom
in its ground state in the presence of two-color co- and counter-rotating CP laser fields.
Since our formulas are derived up to second order in the field for the atomic dressing,
we analyze two-photon absorption DCSs (N = 2) at moderate field intensities.
We choose high energies of the projectile electron and low-photon energies, such that
neither the projectile electron nor the photon can separately excite an upper atomic state.
To start with a simple case we present in Fig. <ref> our numerical results
for an initial scattering energy E_p=100 eV,
photon energies that correspond to the Ti:sapphire laser ฯ_1 =1.55 eV
and its second harmonic ฯ_2= 2ฯ_1, and a fundamental laser
intensity I_1=10^12 W/cm^2.
These laser parameters correspond to a quiver motion amplitude
ฮฑ_01โ 1.65 a.u. and an argument of the Bessel function
R_1 โ 1.65 |ฮต_1ยท๐ช|.
The intensity of the second-harmonic laser is given by I_2= f I_1,
with the laser intensity ratios
f = 1,10^-1,10^-2, and 10^-3 from top to bottom,
which results in a quiver motion amplitude ฮฑ_02 = ฮฑ_01โ(f)/4
and an argument of the Bessel function R_2 = R_1 โ(f)/4.
The three-dimensional electron projectile DCSs, projected in the polarization plane
as a function of the normalized projectile momentum,
p_x^'/p^' and p_y^'/p^', are plotted
for two-color left-handed CP fields (corotating) in the right column
and for left- and right-handed CP fields (counter-rotating) in the left column.
In the weak field domain,
as long as ฯ_1 < |E_1s|/2 and ฯ_2=2ฯ_1, the one- and two-photon
atomic transition matrix elements
M_at^(1), M_at^(2), and N_at^(2) are real
and the DCS can be formally expressed
from Eqs. (<ref>) and (<ref>) as a function of the scattering ฮธ
and azimuthal ฯ angles,
dฯ_m=2(N=2)/dฮฉ( ฮธ,ฯ ) โ
a_1 sin^4ฮธ +a_2sin^2ฮธ - a_3 sin^3ฮธcos (ฮท_2ฯ)
,
where a_1=4ฯ^4 ฮฑ_01^4 A_1^2 p^' 5/p,
a_2=8ฯ^4 ฮฑ_02^2 A_2^2 p^' 3/p,
and a_3=(2ฯ)^4 ฮฑ_01^2 ฮฑ_02 A_1 A_2 p^' 4/( p โ(2) ).
The last term in the right-side hand a_3 sin^3ฮธcos (ฮท_2ฯ),
with ฮท_2=1 for equal helicities and ฮท_2=3 for opposite helicities,
describes the coherent interference between
the first- and the second-order transition amplitudes in Eq. (<ref>),
as schematically shown by the one- and two-photon pathways in Fig. <ref>(a).
We observe that the DCS, Eq. (<ref>), is invariant to the following transformations:
(i) ฯโฯ +2ฯ/ฮท_2, that is equivalent to a rotation
in the azimuthal plane by an 2ฯ/ฮท_2 angle around the z axis
and (ii) ฯ-ฯโฯ+ฯ, that is equivalent to a reflection
with respect to the (x,z) plane.
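The azimuthal structure of this expression is easy to visualize numerically. In the sketch below (our own helper, with arbitrary positive placeholder coefficients a1-a3 rather than the values behind the figures), η_2 = 1 reproduces the single-lobe corotating pattern and η_2 = 3 the three-fold counter-rotating pattern.

import numpy as np

def dcs_shape(theta, phi, eta2, a1=1.0, a2=1.0, a3=1.5):
    # Angular factor a1 sin^4(theta) + a2 sin^2(theta) - a3 sin^3(theta) cos(eta2 phi);
    # a1-a3 are placeholder positive coefficients, not fitted values.
    s = np.sin(theta)
    return a1 * s**4 + a2 * s**2 - a3 * s**3 * np.cos(eta2 * phi)

phi = np.radians(np.arange(0.0, 360.0, 0.5))
pattern = dcs_shape(np.radians(20.0), phi, eta2=3)
# Maxima at phi = 60, 180, 300 deg: the three-leaf clover of Fig. <ref>,
# invariant under phi -> phi + 2*pi/3.
print(np.degrees(phi[np.argmax(pattern)]))   # -> 60.0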
For a better understanding of the numerical results presented in Fig. <ref> we show
the DCSs as a function of the scattering angle ฮธ for I_2=I_1=10^12 W/cm^2,
at the azimuthal angles ฯ = 60^โ
in Figs. <ref>(a) and <ref>(b) and ฯ =240^โ
in Figs. <ref>(c) and <ref>(d),
while the rest of the parameters are the same as in Fig. <ref>.
For the employed laser parameters, the laser-atom interaction is quite strong at very small
scattering angles, ฮธ<7^โ, only.
Excepting the forward scattering angles, the projectile electron is scattered with
a high probability at ฮธโ 40^โ and ฮธโ 46^โ
in Figs. <ref>(a) and <ref>(d)
and at ฮธโ 29^โ in Figs. <ref>(b) and <ref>(c).
At larger scattering angles, the projectile electron contribution is
dominant due to nuclear scattering and determines the angular distribution
of the total DCS.
Very recently, new experimental observations of a very sharp peak profile at forward
scattering angles have been reported for e^--Xe scattering in LP fields, together with
a few attempts to explain it based on Zon's model <cit.>,
which involves the static polarizability.
In order to clarify the origin of the symmetry patterns in Fig. <ref> for ω_2=2ω_1,
we show DCSs in Fig. <ref> for counter-rotating (solid lines) and corotating (dashed lines)
CP fields as a function of the azimuthal angle ฯ,
at the scattering angle ฮธ = 20^โ, I_1=10^12 W/cm^2,
and the harmonic field intensity
I_2= f I_1, with the laser intensity ratios
f =1 in Fig. <ref>(a), 10^-1 in Fig. <ref>(b),
10^-2 in Fig. <ref>(c), and 10^-3 in Fig. <ref>(d).
For counter-rotating CP fields we found that the projectile electron is scattered
with a high probability in the directions of the azimuthal angles
ฯ=60^โ, 180^โ, and 300^โ.
The DCS has a characteristic "three-leaf clover" pattern, as
described in the weak-field domain by the cos 3φ term in Eq. (<ref>),
and is invariant under rotation around the z axis by an azimuthal angle φ=2π/3.
In contrast, for corotating CP fields the DCS has a pattern described by the cos φ
term in Eq. (<ref>), and the invariant rotation angle is φ=2π.
For both co- and counter-rotating CP fields, the DCSs are symmetric with respect
to reflection in the (x,z) plane, such that
dฯ_m=2( ฮธ, ฯ-ฯ )/dฮฉ
=dฯ_m=2( ฮธ, ฯ+ฯ )/dฮฉ.
Figure <ref> presents similar results as in Fig. <ref> but for
a combination of the fundamental laser field and
its third harmonic, ฯ_3= 3ฯ_1, which results
in the quiver motion amplitude ฮฑ_03 = ฮฑ_01โ(f)/9
and the argument of the Bessel function R_3 =R_1 โ(f)/9.
In order to understand the different symmetry patterns in Fig. <ref>,
we show DCSs in Fig. <ref> for counter-rotating (solid lines) and corotating (dashed lines)
CP fields as a function of the azimuthal angle ฯ,
at the scattering angle ฮธ = 20^โ, and the harmonic field intensity
I_3= f I_1, with laser intensity ratios f =10 in Fig. <ref>(a), 1 in Fig. <ref>(b),
10^-1 in Fig. <ref>(c), and 10^-3 in Fig. <ref>(d).
In contrast with the ฯ_2=2ฯ_1 case we found that the projectile electron
is scattered with a high probability in the directions of the azimuthal angles
ฯ= 45^โ, 135^โ, 225^โ, and 315^โ
for counter-rotating CP fields.
The DCS has a "four-leaf clover" pattern, which is described in the weak-field domain,
Eqs. (<ref>) and (<ref>), by a cos 4φ term,
and is invariant under rotation around the z axis by an azimuthal angle
φ=π/2 for counter-rotating CP fields.
For corotating CP fields, the DCS has a pattern described by a cos 2φ term,
and the invariant rotation angle is φ=π.
The DCSs for co- and counter-rotating polarizations
are symmetric with respect to reflection in the (x,z)- and (y,z)-planes, such that
dฯ_m=3( ฮธ, ฯ-ฯ )/dฮฉ
=dฯ_m=3( ฮธ, ฯ+ฯ )/dฮฉ
and
dฯ_m=3( ฮธ, ฯ/2-ฯ )/dฮฉ
=dฯ_m=3( ฮธ, ฯ/2+ฯ )/dฮฉ, respectively.
Recently, three-dimensional electron distributions with one and three lobes were
experimentally observed in strong-field ionization by two-color CP fields <cit.>,
using a superposition of fundamental and second harmonic of a Ti:sapphire laser.
Similar rotational and reflection symmetries
were obtained in above-threshold detachment of negative fluorine ions
by a two-color bicircular laser field <cit.>.
As expected, at relatively low harmonic intensities I_m ≪ I_1 (m=2,3),
where the one-color (ω_1) two-photon processes dominate,
the DCSs are almost independent of the azimuthal angle φ
and have nearly circularly symmetric patterns in the polarization plane <cit.>,
as shown in Figs. <ref> and <ref> and
at ฮธ=20^โ in Figs. <ref>(d) and <ref>(d) for two left-handed-CP pulses and
two left- and right-handed-CP pulses, respectively.
In conclusion, the DCSs presented in Fig. <ref> and Figs. <ref>-<ref>
for both co- and counter-rotating CP fields have different interference patterns
between the different paths leading to the
same final state and have almost the same peak magnitude.
The counter-rotating circular polarization case has a different symmetry profile
and obeys different symmetry operations:
The DCS is invariant to rotation of the projectile electron momentum around the z axis by an angle
2ฯ/(m+1) for counter-rotating CP fields, whereas for the corotating CP fields
the invariant rotation angle is 2ฯ/(m-1).
ยง SUMMARY AND CONCLUSIONS
Using a semiperturbative method, we have studied the electron-hydrogen scattering by
a two-color CP laser field of commensurate photon energies ฯ_1 and mฯ_1,
and derived useful analytical formulas for DCSs that are valid for
circular and/or linear polarization, give more physical insight into the scattering process,
and provide valuable information for experimental investigations.
A comparison between the two-photon absorption DCSs for two-color co- and counter-rotating
CP laser fields is made at different photon energies of the harmonic field
2ฯ_1 and 3ฯ_1 (m=2 and 3),
and the effect of the intensity ratio of the two-color laser field components
on the DCSs is analyzed.
The DCSs of the scattered electrons by hydrogen atoms in a two-color bicircular
laser field of photon energies ฯ_1 and mฯ_1
present a rotational symmetry with respect to rotation of the projectile electron momentum
by an azimuthal angle 2ฯ/(m-1) for corotating polarizations,
while for counter-rotating polarizations the invariant rotation angle is 2ฯ/(m+1).
In addition to the rotational symmetry, for the studied scattering geometry
ฮต_ยฑ =(๐_x ยฑ i ๐_y)/โ(2)
and p โฅ๐_z,
the DCSs are symmetric with respect to reflection in the (x,z) plane
for both m=2 and 3, while the (y,z) plane is
a reflection symmetry plane for m=3 only.
We found that modifying the photon helicity changes the symmetries of the DCSs,
and that the angular distribution of the scattering signal can be tuned
by changing the laser field intensity ratio.
By choosing the photon energies ratio to be even (m=2) or odd (m=3) and
varying the intensity ratio of the co- and counter-rotating two-color CP laser field
components we can manipulate the angular distribution of the scattered electrons.
For ω_3=3ω_1, the optimization of the scattering signal in the laser-assisted
electron-hydrogen scattering process depends on the intensity ratio
of the two-color laser field components,
whereas for ω_2=2ω_1 it depends on the scattering geometry,
the fundamental field intensity, and the photon energy.
ยง SECOND-ORDER CORRECTION TO THE ATOMIC WAVE FUNCTION FOR A TWO-COLOR LASER FIELD
In order to calculate the second-order atomic transition amplitude,
Eq. (<ref>),
we use the expression of the quadratic response tensors defined in Ref. <cit.>
for the two-color laser field given by Eq. (<ref>)
to obtain the second-order correction to the atomic ground state as
ฯ_1s^(2)( ๐ซ,t) = โ_k=1,mฮฑ_0k^2 ฯ_k^2/4โ_j,l=1^3[
ฮต_kjฮต_kl w_jl,100 ( ฮฉ_k^' +, ฮฉ_k^+;๐ซ) e^-2iฯ_kt
+ ฮต_kj^* ฮต_kl^*
w_jl, 100( Ω_k^' -, Ω_k^-;𝐫) e^2iω_kt
+ฮต_kjฮต_kl^* w_jl,100 ( E_1s, ฮฉ_k^-;๐ซ)
+ฮต_kj^* ฮต_kl w_jl, 100( E_1s, ฮฉ_k^+;๐ซ)
]
+
ฮฑ_01ฮฑ_0mฯ_1ฯ_m/4โ_j,l=1^3{ฮต_1jฮต_ml
[ w_lj,100 ( ฮฉ^' +, ฮฉ_1^+;๐ซ)
+ w_jl,100( ฮฉ^' +, ฮฉ_m^+;๐ซ) ]
e^-i(ω_1+ω_m)t
+ ฮต_1j^* ฮต_ml^*
[w_lj, 100( ฮฉ^' -, ฮฉ_1^-;๐ซ)
+ w_jl, 100( ฮฉ^' -, ฮฉ_m^-;๐ซ)]
e^i(ฯ_1+ฯ_m)t
+ ฮต_1jฮต_ml^*
[ w_lj,100 ( ฮฉ^ +, ฮฉ_1^+;๐ซ)
+ w_jl,100( ฮฉ^ +, ฮฉ_m^-;๐ซ) ]
e^-i(ฯ_1-ฯ_m)t
+ ฮต_1j^* ฮต_ml
[w_lj, 100( ฮฉ^-, ฮฉ_1^-;๐ซ)
+ w_jl, 100( ฮฉ^ -, ฮฉ_m^+;๐ซ)]
e^i(ฯ_1-ฯ_m)t}
,
where the tensors w_jl, 100 and w_jl,100 are defined by
Eqs. (<ref>) and (<ref>), and
the energy parameters are ฮฉ_k^ยฑ=E_1sยฑฯ_k,
ฮฉ_k^' ยฑ=E_1sยฑ 2 ฯ_k,
ฮฉ^ยฑ = E_1sยฑ ( ฯ_1-ฯ_m),
and
ฮฉ^' ยฑ = E_1sยฑ ( ฯ_1 +ฯ_m), with k=1 and m.
ยง APPROXIMATE FORMULA FOR THE GENERALIZED BESSEL FUNCTIONS
B_N(R_1,R_m;φ_1,φ_m)
We recall the definition introduced in Subsec. <ref> of the phase-dependent
generalized Bessel function for commensurate photon energies ฯ_1 and
ฯ_m=mฯ_1,
B_N(R_1,R_m;ฯ_1,ฯ_m)
=โ_l=-โ^+โ
J_N-m l(R_1) J_l(R_m)exp[-i l(mฯ_1- ฯ_m)],
which is a 2ฯ periodic function in ฯ_1/m and ฯ_m <cit.>.
Whenever the argument of the Bessel functions of the first kind is small, i.e.,
R_1โช 1 and R_mโช 1, which is satisfied at low laser intensities or
at small scattering angles with moderate laser intensities,
the approximate expressions of the generalized Bessel functions can be used.
In the present paper we keep the second-order terms in the fields in the
transition amplitudes and, therefore, we restrict to the following approximate formula:
B_N โ
J_N(R_1)J_0(R_m)+ J_N-m(R_1)J_1(R_m)e^-i(mฯ_1- ฯ_m),
for N โฅ 0.
Using the approximate relation J_N(R_k)โ R_k^N/(2^N N!)
and the symmetry formula J_-N(R_k) = (-1)^N J_N(R_k)
of the Bessel functions of the first kind <cit.>, we further obtain
B_N โR_1^N/2^NN! +
(ฮฒ R_1)^|N-m|R_m/ 2^|N-m|+1 |N-m|!e^-i(mฯ_1- ฯ_m),
for N โฅ 0,
where
ฮฒ=-1 for 0< N <m and ฮฒ=1 for N โฅ m.
Because the appropriate leading terms are kept in the total transition amplitude
in the weak field domain, Eq. (<ref>),
the following approximate expressions are used for
two-photon absorption scattering process:
B_0 โ 1, B_1 โ R_1/2,
B_2 โ R_1^2/8 + (ฮฒ R_1/2 )^|2-m|(R_m/2)e^-i(mฯ_1- ฯ_m)/|2-m|!,
and B_N โ 0 for N โฅ 3.
Explicitly, for N=2 the generalized Bessel function is approximated as
B_2 โ
[ ฮฑ_01^2(ฮต_1 ยท๐ช)^2
+ 4ฮฑ_02 ฮต_2 ยท๐ช ] e^-i2ฯ_1/8,
for m=2,
[ฮฑ_01^2(ฮต_1 ยท๐ช)^2
- 2ฮฑ_01ฮฑ_03 (ฮต_1^* ยท๐ช)(ฮต_3 ยท๐ช ) ]
e^-i2ฯ_1/8,
for m=3.
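As a quick consistency check (illustrative, and assuming the generalized_bessel helper sketched earlier in this appendix), the weak-field approximation of B_2 can be compared with the truncated exact sum; for m=3 one has |2-m|=1 and β=-1.

```python
# Compare the weak-field approximation of B_2 with the truncated exact sum
# (illustrative; reuses the generalized_bessel helper defined above).
import numpy as np
from math import factorial

def b2_approx(R1, Rm, phi1, phim, m):
    beta = 1.0 if 2 >= m else -1.0          # beta = -1 for 0 < N < m, here N = 2
    a = abs(2 - m)
    phase = np.exp(-1j * (m * phi1 - phim))
    return R1**2 / 8.0 + (beta * R1)**a * Rm / (2**(a + 1) * factorial(a)) * phase

R1, Rm, phi1, phim = 0.05, 0.02, 0.0, 0.3
print(generalized_bessel(2, R1, Rm, phi1, phim, m=3))   # truncated exact sum
print(b2_approx(R1, Rm, phi1, phim, m=3))               # weak-field approximation
```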
Similar results can be obtained for N < 0 by using the symmetry relation for B_-N,
B_-N(R_1,R_m;φ_1,φ_m) = (-1)^N B_N^*[R_1,R_m;φ_1,φ_m+(m-1)π].
For odd harmonic orders, such that m=2s+1, with s being a positive integer, the following symmetry relation occurs:
B_-N(R_1,R_m;φ_1,φ_m) = (-1)^N B_N^*(R_1,R_m;φ_1,φ_m).
For even harmonic orders, such that m=2s, the following symmetry relation holds:
B_-N(R_1,R_m;φ_1,φ_m) = (-1)^N B_N^*(R_1,R_m;φ_1,φ_m+π).
ยง ACKNOWLEDGMENTS
The work by G. Buica was supported by research programs PN 16 47 02 02
Contract No. 4N/2016 and FAIR-RO Contract No. 01-FAIR/2016
from the UEFISCDI and the Ministry of Research and Innovation of Romania.
ehl2001 F. Ehlotzky,
Phys. Rep. 345, 175 (2001).
plasma
Y. Shima and H. Yatom, Phys. Rev. A 12, 2106 (1975);
M. B. S. Lima, C. A. S. Lima, and L. C. M. Miranda, ibid. 19, 1796 (1979).
astro
S. Chandrasekhar,
An Introduction to the Study of Stellar Structure,
(Dover, Mineola, NY, 1967), p. 261;
M. J. Seaton, in Advances in Atomic, Molecular and Optical Physics
edited by B. Bederson and A. Dalgarno
(Academic Press, New York, 1994), p. 395.
masseyN. F. Mott and H. S. W. Massey,
The Theory of Atomic Collisions (Oxford University Press, London, 1965);
C. Joachain, Quantum Collision Theory (North Holland, Amsterdam, 1987).
mason N. J. Mason,
Rep. Prog. Phys. 56, 1275 (1993).
ehl1998 F. Ehlotzky, A. Jaroล, and J. Z. Kamiลski,
Phys. Rep. 297, 63 (1998).
bransden B. H. Bransden and C. J. Joachain,
Physics of Atoms and Molecules (Longman, London, 1983).
joa2012 C. J. Joachain, N. J. Kylstra, and R. M. Potvliege,
Atoms in Intense Laser Fields
(Cambridge University Press, Cambridge, UK, 2012), p. 466.
musa M. O. Musa, A. MacDonald, L. Tidswell, J. Holmes, and B. Wallbank,
J. Phys. B: At. Mol. Phys. 43, 175201 (2010).
Kanya
R. Kanya, Y. Morimoto, and K. Yamanouchi,
Phys. Rev. Lett. 105, 123202 (2010).
Harak
B. A. de Harak, L. Ladino, K. B. MacAdam, and N. L. S. Martin,
Phys. Rev. A 83, 022706 (2011).
Kanya2
Y. Morimoto, R. Kanya, and K. Yamanouchi,
Phys. Rev. Lett. 115, 123201 (2015);
R. Kanya, Y. Morimoto, and K. Yamanouchi,
in Progress in Ultrafast Intense Laser Science X,
edited by K. Yamanouchi, G. Paulus, and D. Mathur
(Springer, Cham, Switzerland, 2014), p. 1.
gabi2017 G. Buica, Phys. Rev. A 92, 033421 (2015);
J. Quant. Spectrosc. Radiat. Transf. 187, 190 (2017).
LongS. Long, W. Becker, and J. K. McIver,
Phys. Rev. A 52, 2262 (1995).
Eichmann H. Eichmann, A. Egbert, S. Nolte, C. Momma, B. Wellegehausen, W. Becker, S. Long, and J. K. McIver, Phys. Rev. A 51, R3414 (1995).
Fleischer2014
A. Fleischer, O. Kfir, T. Diskin, P. Sidorenko, and O. Cohen,
Nat. Photon. 8, 543 (2014).
Mancuso15
C. A. Mancuso, D. D. Hickstein, P. Grychtol, R. Knut, O. Kfir, X. M. Tong, F. Dollar,
D. Zusin, M. Gopalakrishnan, C. Gentry, E. Turgut, J. L. Ellis, M. C. Chen,
A. Fleischer, O. Cohen, H. C. Kapteyn, and M. M. Murnane,
Phys. Rev. A 91, 031402(R) (2015).
Mancuso16
C. A. Mancuso, K. M. Dorney, D. D. Hickstein, J. L. Chaloupka,
J. L. Ellis, F. J. Dollar, R. Knut, P. Grychtol, D. Zusin, C. Gentry,
M. Gopalakrishnan, H. C. Kapteyn, and M. M. Murnane,
Phys. Rev. Lett. 117, 133201 (2016).
Odzak1
S. Odลพak and D. B. Miloลกeviฤ,
Phys. Rev. A 92, 053416 (2015).
Odzak
S. Odลพak, E. Hasoviฤ, W. Becker, and D. B. Miloลกeviฤ,
J. Mod. Opt. 64, 971 (2017).
acgabi2 A. Cionga, F. Ehlotzky, and G. Zloh,
Phys. Rev. A 61, 063417 (2000).
acgabiopt A. Cionga, F. Ehlotzky, and G. Zloh,
Phys. Rev. A 62, 063406 (2000);
J. Phys. B: At. Mol. Opt. Phys. 33, 4939 (2000).
b-j
F. W. Byron Jr. and C. J. Joachain,
J. Phys. B 17, L295 (1984).
volkov
D. M. Volkov,
Z. Phys. 94, 250 (1935).
vf1
V. Florescu and T. Marian,
Phys. Rev. A 34, 4641 (1986).
vf2
V. Florescu, A. Halasz, and M. Marinescu,
Phys. Rev. A 47, 394 (1993).
Watson G. N. Watson,
Theory of Bessel Functions (Cambridge University Press, Cambridge, UK, 1962).
varro S. Varrรณ and F. Ehlotzky,
J. Phys. B: At. Mol. Opt. Phys. 28, 1613 (1995).
bf F. V. Bunkin and M. V. Fedorov,
Zh. Eksp. Teor. Fiz., 49, 1215 (1965)
[Sov. Phys. JETP 22, 844 (1966)].
miloD. B. Miloลeviฤ, F. Ehlotzky, and B. Piraux,
J. Phys. B 30, 4347 (1997).
acgabi3 A. Cionga, F. Ehlotzky, and G. Zloh,
Phys. Rev. A 64, 043401 (2001).
gavrila1970M. Gavrila and A. Costescu,
Phys. Rev. A 2, 1752 (1970).
gavrila
V. Veniard, M. Gavrila, and A. Maquet,
Phys. Rev. A 35, 448(R) (1987);
A. A. Krylovetskiฤญ, N. L. Manakov, S. I. Marmo, and A. F. Starace,
J. Exp. Theor. Phys. 95, 1006 (2002).
manakov
N. L. Manakov, A. V. Meremianin, J. P. J. Carney, and R. H. Pratt,
Phys. Rev. A 61, 032711 (2000).
taieb R. Taieb, V. Veniard, A. Maquet, N. L. Manakov, and S. I. Marmo,
Phys. Rev. A 62, 013402 (2000).
fifirig2000
M. Fifirig, V. Florescu, A. Maquet, and R. Taïeb,
J. Phys. B: At. Mol. Opt. Phys. 33, 5313 (2000);
M. Fifirig, A. Cionga, and F. Ehlotzky,
Eur. Phys. J. D 23, 333 (2003).
staraceA. Y. Istomin, E. A. Pronin, N. L. Manakov, S. I. Marmo, and A. F. Starace,
Phys. Rev. Lett. 97, 123002 (2006).
acgabi99 A. Cionga and G. Zloh,
Laser Phys. 9, 69 (1999).
|
http://arxiv.org/abs/2306.05998v1
|
20230609161026
|
Distributed Consensus Algorithm for Decision-Making in Multi-agent Multi-armed Bandit
|
[
"Xiaotong Cheng",
"Setareh Maghsudi"
] |
cs.LG
|
[
"cs.LG",
"cs.MA",
"stat.ML"
] |
Distributed Consensus Algorithm for Decision-Making in Multi-agent Multi-armed Bandit
Xiaotong Cheng and Setareh Maghsudi
======================================================================================
We study a structured multi-agent multi-armed bandit (MAMAB) problem in a dynamic environment. A graph reflects the information-sharing structure among agents, and the arms' reward distributions are piecewise-stationary with several unknown change points. The agents face an identical piecewise-stationary MAB problem. The goal is to develop a decision-making policy for the agents that minimizes the regret, which is the expected total loss of not playing the optimal arm at each time step. Our proposed solution, Restarted Bayesian Online Change Point Detection in Cooperative Upper Confidence Bound Algorithm (RBO-Coop-UCB), involves an efficient multi-agent UCB algorithm at its core, enhanced with a Bayesian change point detector. We also develop a simple cooperative restart decision mechanism that improves decision-making. Theoretically, we establish that the expected group regret of RBO-Coop-UCB is upper bounded by 𝒪(KNMlog T + K√(MTlog T)), where K is the number of agents, M is the number of arms, and T is the number of time steps. Numerical experiments on synthetic and real-world datasets demonstrate that our proposed method outperforms the state-of-the-art algorithms.
Change point detection; Distributed learning; Multi-armed bandit; Multi-agent cooperation
ยง INTRODUCTION
Multi-armed bandit (MAB) is a fundamental problem in online learning and sequential decision-making. In the classical setting, a single agent adaptively selects one among a finite set of arms (actions) based on past observations and receives a reward accordingly. The agent repeats this process over a finite time horizon to maximize its cumulative reward. The MAB framework has been applied in several areas such as computational advertisement <cit.>, wireless communications <cit.> and online recommendation <cit.>.
To date, most research on the MAB problem focuses on single-agent policies, neglecting the social components of the applications of the MAB framework; nevertheless, the ever-increasing importance of networked systems and large-scale information networks motivates the investigation of the MAB problem with multiple agents <cit.>. For example, the users targeted by a recommender system might be a part of a social network <cit.>. In that case, the network structure is a substantial source of information, which, if taken advantage of, significantly improves the performance of social learners through information-sharing. The core idea is to integrate a graph/network, where each node represents an agent, and the edges identify information-sharing or other relations among agents <cit.>.
The state-of-the-art literature mainly concerns two variants of the MAB problem: (1) Stochastic bandits, where each arm yields a reward from an unknown, time-invariant distribution <cit.>; and (2) Adversarial bandits, where the reward distribution of each arm may change adversarially at each time step <cit.>. However, some application scenarios do not fit in these two models. Specifically, in some applications, the arms' reward distributions vary much less frequently, which makes them difficult to cast as an adversarial bandit model <cit.>.
For example, in recommendation systems where each item represents an arm and users' clicks are rewards, the users' preferences towards items are unlikely to be time-invariant or change significantly at all time steps <cit.>. Other examples include investment options and dynamic pricing with feedback <cit.>. Consequently, we investigate an โintermediate settingโ of a multi-agent system, namely the piecewise-stationary model. In such a model, the reward distribution of each arm is piecewise-constant and shifts at some unknown time steps called the change points.
ยง.ยง Related Work
The rising importance of networked systems, the development of online information-sharing networks, and emerging novel concepts such as the system of systems motivate studying the multi-agent multi-armed bandit (MAMAB) problem as a framework to model distributed decision-making; MAMAB provides the required “learning-to-coordinate” framework in a distributed fashion <cit.>.
Reference <cit.> investigates the multi-armed bandit problem with side observations from the network, where an external agent decides for the network's users. The authors propose the ϵ-greedy-LP strategy, which explores the action for each user at a rate that is a function of its network position. Reference <cit.> studies the distributed MAMAB problem and proposes an online indexing policy based on distributed bipartite matching. The method ensures that the expected regret grows no faster than 𝒪(log^2T). However, the proposed policies in <cit.> are agnostic of the network structure and do not consider the agents' heterogeneity. In <cit.>, the authors introduce the notion of sociability to model the likelihood probabilities that one agent observes its neighbors' choices and rewards in the network graph. The proposed algorithm has an 𝒪(log T) regret bound. Reference <cit.> assumes that each agent observes the instantaneous rewards and choices of all its neighbors only when exploring. Based on such an assumption, it proposes an algorithm with a regret bound of 𝒪(log T). In <cit.>, the observations between agents are opportunistic and occasional, whereas in real-world applications like social networks, the observations are continuous. In <cit.>, it is assumed that communication among agents is only allowed by playing arms in a certain way. Two decentralized policies, E^3 and E^3-TS, are proposed for the single-player and multi-player MAB problems, respectively, where E^3 stands for the Exponentially spaced Exploration and Exploitation policy and TS stands for Thompson sampling. In <cit.>, the problem setting includes several agents that synchronously play the same MAB game. They develop a gossip-based algorithm that guarantees an 𝒪(log T) expected regret bound. Reference <cit.> considers the same problem with and without collisions. The authors propose two algorithms and investigate the influence of the communication graph structure on group performance. In <cit.>, the stochastic MAMAB problem is studied. Different from previous research, <cit.> considers a dynamic scenario, where the players enter and leave at any time. Algorithms based on the “Trekking approach” are proposed, and sublinear regret is guaranteed with high probability in the dynamic scenario. Reference <cit.> investigates a decentralized MAMAB problem, where the arm's reward distribution is the same for every agent while the information exchange is limited to at most poly(K) times, where K denotes the number of arms. A decentralized Thompson Sampling algorithm and a decentralized Bayes-UCB algorithm are proposed to solve the formulated MAMAB problem.
However, most previous research on MAMAB focuses on either the stochastic or the adversarial environment, while it largely neglects the piecewise-stationary setting. References <cit.> introduce the piecewise-stationary MAB, where the reward distributions of arms remain stationary for some intervals (piecewise-stationary segments) but change abruptly at some potentially unknown time steps (change points). The piecewise-stationary environment is specifically beneficial for modeling real environments <cit.>, giving rise to several decision-making methods, especially for the single-agent MAB. The cutting-edge research on piecewise-stationary MAB includes two categories of methods: passively adaptive and actively adaptive. The former makes decisions based on the most recent observations while unaware of the underlying distribution changes <cit.>. The latter incorporates a change point detector subroutine to monitor the reward distributions and restart the algorithm once a change point is detected <cit.>. Below, we discuss some examples of the two categories in single-agent MAB, emphasizing actively adaptive methods due to their higher relevance to our research.
Algorithm Discounted UCB (D-UCB) <cit.> averages the past rewards with a discount factor, which results in ๐ช(โ(NT)log T) as regret bound. In <cit.>, the authors propose the sliding-window UCB (SW-UCB) method, which incorporates the past observations only within a fixed-length moving window for decision-making. The authors prove the regret bound ๐ช(โ(NT)log T). In <cit.>, the authors propose a change-detection-based framework that actively detects the change points and restarts the MAB indices. They establish the regret bound ๐ช(โ(NT logT/N)) for their proposed framework. The Monitored-UCB (M-UCB) algorithm <cit.> has a ๐ช(โ(NMTlog T)) regret bound, which is slightly higher than CUMSUM-UCB; nonetheless, it is more robust as it requires little parameter specification. Reference <cit.> combines the bandit algorithm KL-UCB with a parameter-free change point detector, namely, the Bernoulli Generalized Likelihood Ratio Test (GLRT), to obtain a dynamic decision-making policy. The authors develop two variants, global restart, and local restart. They also prove the regret upper-bound ๐ช(โ(NTlog T)) for known number of change points N. Similarly, <cit.> applies GLRT change point detector in solving a combinatorial semi-bandit problem in a piecewise-stationary environment, where the regret is upper bounded by ๐ช(โ(NMTlog T)).
ยง.ยง Our Contribution
Our main contributions are as follows:
* We propose an efficient running consensus algorithm for piecewise-stationary MAMAB, called RBO-Coop-UCB. It addresses the multi-agent decision-making problem in changing environments by integrating the Restarted Bayesian online change point detector (RBOCPD) proposed in <cit.> into a cooperative UCB algorithm.
* To improve the decision-making performance via information-sharing, we incorporate an efficient cooperation framework in the proposed strategy RBO-Coop-UCB: 1) The agents share observations in neighborhoods to enhance the performance in arm selection. 2) We integrate a majority voting mechanism into the restart decision part. The cooperation framework is generic and easily integrable to solve piecewise-stationary bandit problems, especially in various actively adaptive MAB policies.
* For any networked multi-agent systems, we establish the group regret bound ๐ช(KNMlog T + Kโ(MTlog T)). To the best of our knowledge, this is the first regret bound analysis for bandit policies with a RBOCPD.
* By intensive experiments on both synthetic and real-world datasets, we show that our proposed policy, RBO-Coop-UCB, performs better than the state-of-the-art policies. We also integrate our proposed cooperative mechanism to different bandit policies and demonstrate performance improvement as a result of cooperation.
The paper is structured as follows. We formulate the problem in Section <ref>. In Section <ref>, we first explain the restarted Bayesian online change point detector, which is utilized in the proposed algorithm. Then we develop an algorithmic solution to the formulated problem. In Section <ref>, we establish the theoretical guarantees, including the regret bound, for the proposed solution. In Section <ref>, we evaluate our proposal via numerical analysis based on both synthetic and real-world datasets. Section <ref> concludes the paper.
ยง PROBLEM FORMULATION
We consider an M-armed bandit with K players (agents, hereafter) gathered in the set ๐ฆ. The agents play the same piecewise-stationary MAB problems simultaneously for T rounds. We use โณ to denote the time-invariant action set. At time t, the reward associated with arm m โโณ is randomly sampled from distribution f_t^m with mean ฮผ_t^m. The rewards are independent across the agents and over time. The agents form a network modeled by an undirected graph ๐ข(๐ฆ, โฐ) where โฐ = {e(k,j)}_k, j โ๐ฆ is the edge set. In ๐ข, nodes and edges represent agents and the potential of communication, respectively. Two agents k and j are neighbors if e(k,j) โโฐ. In addition to its own observation, each agent can observe its neighbors' selected arms and sampling rewards.
At each time step, each agent k ∈𝒦 pulls one arm m ∈ℳ and obtains a reward sampled from f_t^m. We use I_t^k to denote the action of agent k at time t. Besides, X_t^m is the sampling reward of arm m at time t. We assume that the reward distributions of the arms are piecewise-stationary, satisfying the following Assumption <ref>.
[Piecewise-stationary Bernoulli process <cit.>]
The environment is piecewise-stationary, meaning that it remains constant over specific periods and changes from one to another. Let T denote the time horizon and N the overall number of piecewise-stationary segments observed until time T.
N = 1 + โ_t=1^T-11{f_t^m โ f_t+1^m for some m โโณ}.
The reward distributions of arms are piecewise-stationary Bernoulli processes ℬ(μ_t^m) such that there exists a non-decreasing change-point sequence (ν_n)_n ∈ [1,N-1]∈ℕ^N-1 verifying
โ n โ [1,N-1], โ t โ [ฮฝ_n,ฮฝ_n+1), โ m โโณ, ฮผ_t^m = ฮผ_n^m
ฮฝ_1 = 1 < ฮฝ_2 < โฆ < ฮฝ_N=T
The performance of each single agent k is measured by its (dynamic) regret, the cumulative difference between the expected reward obtained by an oracle policy playing an optimal arm I_t^* at time t, and the expected reward obtained by action I_t^k selected by agent k
R_T^k = โ_t=1^T [๐ผ(X_t^I_t^*) - ๐ผ(X_t^I_t^k)].
Reference <cit.> introduces the concept of dynamic regret. In contrast to the fixed benchmark of static regret, dynamic regret compares against a sequence of changing comparators and is therefore more suitable for measuring the performance of online algorithms in piecewise-stationary environments <cit.>. In the multi-agent setting considered here, we study the network performance in terms of the regret experienced by the entire network; as such, we define the decision-making objective as minimizing the expected cumulative group regret,
R_T=Kโ_t=1^T๐ผ(X_t^I_t^*)-โ_k=1^K โ_t=1^T ๐ผ(X_t^I_t^k).
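For illustration (not part of the paper), the group regret above can be evaluated directly once the arm means μ_t^m and the agents' choices are known; both arrays below are hypothetical inputs.

```python
# Illustrative computation of the expected cumulative group regret.
import numpy as np

def group_regret(mu, choices):
    """mu: (T, M) array of means mu_t^m; choices: (K, T) integer array, choices[k, t] = I_t^k."""
    T = mu.shape[0]
    best = mu.max(axis=1)                           # E[X_t^{I_t^*}] at each step
    picked = mu[np.arange(T), choices]              # E[X_t^{I_t^k}] per agent and step
    return float((best[None, :] - picked).sum())    # summed over agents k and time t
```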
ยง THE RBO-COOP-UCB ALGORITHM
Notations. We use boldface uppercase letters to represent vectors and use calligraphic letters to represent sets. For example, X_s:t denotes the sequence of observations from time step s to t and ๐ณ^k,m refers to the set containing the sampled rewards of arm m by agent k. The inner product is denoted by โจยท, ยทโฉ. ๐ช(ยท) refers to the big-O notation while o(ยท) refers to the small-o notation. 1{ยท} denotes the indicator function. Tableย <ref> gathers the most important notations.
The proposed algorithm, RBO-Coop-UCB, combines a network UCB algorithm (similar to <cit.>) with a change detector running on each arm (based on Bayesian change point detection strategy <cit.>). Roughly-speaking, RBO-Coop-UCB involves three ideas:
(1) A unique cooperative UCB-based method that guides the network system towards the optimal arm in a piecewise-stationary environment;
(2) A change point detector introduced in Algorithmย <ref>;
(3) A novel cooperation mechanism for change point detection to filter out the false alarms.
We summarize the proposed policy in Algorithmย <ref>.
To the best of our knowledge, this is the first attempt to solve the MAMAB problem in a piecewise-stationary environment. Compared to previous work <cit.>, our proposed algorithm is applicable to different multi-agent systems and non-stationary environments. In the following, we first introduce the restarted Bayesian online change point detection procedure (RBOCPD), which we utilize to develop our decision-making policy, RBO-Coop-UCB; we then explain the policy in detail.
ยง.ยง Restarted Bayesian Online Change Point Detector
Detecting changes in the underlying distribution of a sequence of observations <cit.> is a classic problem in statistics. The Bayesian online change point detection appeared first in <cit.>. Since then, several authors have used it in different contexts <cit.>. Nevertheless, the previous research seldom considers the theoretical analysis of its performance bounds, such as false alarm rate and detection delay. In <cit.>, the authors develop a modification of Bayesian online change point detection and prove the non-asymptotic guarantees related to the false alarm rate and detection delay. Remarkย <ref> in Appendixย <ref> compares RBOCPD and other change point detectors in detail.
Let r_t denote the number of time steps since the last change point, given the observed data X_1:t, generated from the piecewise-stationary Bernoulli process described in Definitionย <ref>. The Bayesian strategy computes the posterior distribution over the current run-length r_t, i.e., p(r_t|X_1:t) <cit.>. The following message-passing algorithm recursively infers the run-length distribution <cit.>
p(r_t|X_1:t) โ
โ_r_t-1p(r_t|r_t-1)_hazardp(X_t|r_t-1,X_1:t-1)_UPM p(r_t-1|X_1:t-1).
A simple example of the hazard function h is a constant h = 1/ฮปโ (0,1) <cit.>.
The RBOCPD algorithm assumes that each possible value of the run-length r_t corresponds to a specific forecaster. The loss l_s,t of forecaster s at time t related to underlying predictive distribution (UPM) p(X_t|r_t-1,X_s:t-1) then follows as
l_s:t = - log Lp(X_t|X_s:t-1),
= - X_t log Lp(1|X_s:t-1) - (1-X_t)log Lp(0|X_s:t-1),
where Lp(ยท) is the Laplace predictor <cit.>, defined below.
The Laplace predictor Lp(X_t+1|X_s:t) takes as input a sequence X_s:tโ{0,1}^n_s:t and predicts the value of the next observation X_t+1โ{0,1} as
Lp(X_t+1|X_s:t) = โ_i=s^t X_i + 1/n_s:t+2, if X_t+1 = 1,
โ_i=s^t(1-X_i)+1/n_s:t+2, if X_t+1 = 0,
where โ Xโ{0,1}, Lp(X|ฯ) = 1/2 corresponds to the uniform prior given to the process generating ฮผ_c.
The weight ฯ_r,s,t of forecaster s at time t for starting time r is the posterior ฯ_r,s,t = p(r_t = t-s|X_s:t), where
ฯ_r,s,t = ฮท_r,s,t/ฮท_r,s,t-1exp(-l_s,t)ฯ_r,s,t-1, โ s<t
ฮท_r,t,tร๐ฑ_r,t-1, s=t
by using the hyperparameter ฮท_r,s,t (instead of the constant hazard function value 1/ฮป) and the initial weight ๐ฑ_r,t-1. The initial weight ๐ฑ_r,t-1 is defined as
๐ฑ_r:t-1 = exp(-Lฬ_r:t-1),
for some starting time r, where Lฬ_r:t-1 = โ_r' = r^t-1 l_r':t-1 is the cumulative loss incurred by the forecaster r from time r until time t-1. Based on (<ref>), the cumulative loss yields
Lฬ_r:t-1 = โ_r' = r^t-1 -log Lp(x_t-1|x_r':t-2).
Besides, RBOCPD includes a restart procedure to detect changes based on the forecaster weight. For any starting time r โค t,
Restart_r:t = 1{โ s โ (r,t]: ฯ_r,s,t > ฯ_r,r,t}.
The intuition behind the criterion Restart_r:t is the following: At each time t < ฮฝ with no change, the forecaster distribution concentrates around the forecaster launched at the starting time r. Thus, if the distribution ฯ_r,s,t undergoes a change, a change becomes observable.
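A simplified sketch of this restart test is given below. It is illustrative only: it assumes a constant hyperparameter η, reuses the laplace_predictor helper from the previous sketch, and omits the time-dependent hazard refinements of the actual RBOCPD.

```python
# Simplified restart test in the spirit of RBOCPD (illustrative, constant eta).
# Each forecaster s keeps a weight updated multiplicatively by its Laplace
# predictive probability; a restart is declared when some forecaster launched
# after r outweighs the forecaster started at r.
def rbocpd_restart(x, eta=1e-3):
    """x: list of 0/1 observations since the last restart (x[0] observed at time r)."""
    w, hist = {}, {}
    for t, x_t in enumerate(x):
        for s in w:                                   # update existing forecasters
            w[s] *= laplace_predictor(hist[s], x_t)
            hist[s].append(x_t)
        w[t] = 1.0 if t == 0 else eta                 # launch a new forecaster at t
        hist[t] = [x_t]
        if t > 0 and any(w[s] > w[0] for s in w if s > 0):
            return True, t                            # change detected at step t
    return False, None

print(rbocpd_restart([0] * 60 + [1] * 60))            # detects the shift in the mean
```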
The restarted version of Bayesian Online Change Point Detector <cit.> can be formulated as follows.
ยง.ยง RBO-Coop-UCB Decision-Making Policy
The proposed algorithm is a Network UCB algorithm that allows for some restarts. Each agent runs the RBO-Coop-UCB in parallel to choose an arm to play. Also, each agent observes its neighbors' choices and the corresponding rewards. To guarantee enough samples for change point detection, each arm will be selected several times in the exploration steps. Let I_t^k and X_t^k,m be some variables to denote the selected arm and received reward of agent k at time t, respectively, where X_t^k,m is the i.i.d copy of X_t^m. The total number of times that agent k observes option m's rewards yields
N_T^k,m = โ_t=1^Tโ_j=1^K 1{I_t^j = m}1{e(k,j) โโฐ}.
The empirical rewards of arm m by agent k at time T is
ฮผฬ_T^k,m = S_T^k,m/N_T^k,m,
where S_T^k,m = โ_t=1^Tโ_j=1^K X_t^k,m1{I_t^k = m}1{e(k,j) โโฐ} is the total reward observed by agent k from option m in T trials. At every sampling time step, if the agent k is in a forced exploration phase, it selects the arm using (<ref>) to ensure a sufficient number of observations for each arm. Otherwise, it chooses the arm according to the sampling rule described in Definitionย <ref>.
The sampling rule {I_t^k}_1^T for agent k at time t โ{1,2,โฆ,T} is
1{I_t^k = m} = 1, if Q_t^k,m = max{Q_t^k,1, โฆ, Q_t^k,M}
0, otherwise
with
Q_t^k,m = ฮผฬ_t^k,m + C_t^k,m,
C_t^k,m = โ(ฮพ(ฮฑ^k+1)log (t-ฯ^k)/N_t^k,m),
where ฯ^k is the latest change points detected by agent k. ฮพโ (0,1] is a constant and ฮฑ^k = ฮท_k - ฮท_k^avg/ฮท_k is an agent-based parameter where ฮท_k is the number of neighbors of agent k. ฮท_k = โจ1, W_k โฉ - 1 with 1 a K dimensional vector with all elements equal to 1, W_k is the k-th column of matrix W. ฮท_k^avg = 1/ฮท_kโ_e(k,j)โโฐ^K ฮท_j is the average degree of neighbors of agent k. We assume that โ k โ๐ฆ, ฮท_k^avgโค 2ฮท_k, which indicates โ k โ๐ฆ, ฮฑ_k โ (-1,1).
Different values of ฮฑ^k imply heterogeneous agents' exploration. On the one hand, agents with more neighbors have more observations that reduce the uncertainties of their reward estimations and increase their exploitation potential. Less exploration then lowers the usefulness of the information they broadcast, thus decreasing its neighbors' exploitation potential <cit.>. Therefore, to improve the group performance, we propose the heterogeneous explore-exploit strategies with sampling rule in Definitionย <ref> that regulate exploitation potential across the network.
After agent k receives the sampling reward X_t^k,m and observes its neighbors' sampling rewards, it combines the observed rewards into the observation collection ๐ณ^k,m to run the RBOCPD (Algorithmย <ref>). In general, the set ๐ณ^k,m = X_ฯ^k:N_t^k,m contains all the sampling rewards of arm m observed by agent k since the last change point ฯ^k. Each agent k receives a binary restart signal r_t^k,m afterward, where r_t^k,m = 1 if there is a change point and zero otherwise. The agent calculates the restart signal r_t^k,m using (<ref>). The agents make final restart decisions using the cooperation mechanism described in Definitionย <ref> based on its restart signal and observations.
Each agent makes the final restart decision using the majority voting outcome among neighbors: If more than half of its neighbors detect a change in one of the played arms, then the agent restarts its UCB indices as
โ_j โ๐ฉ_k1{r_t^j,m > 0}โฅโฮท_k/2โโRestart_t^k = True.
However, different agents have distinct observations, so a simple majority voting mechanism at each time step might lead to missed detections because the agents' detections are asynchronous. Therefore, we propose an efficient cooperation mechanism for the restart decision, where a restart memory time window records the neighbors' previous restarts over a short period d. The cooperative restart detection then considers the majority vote of restarts among neighbors in that period, i.e.,
โ_j โ๐ฉ_k1{โ i โ [N_t-d^j,m, N_t^j,m], r_i^j,m > 0}โฅโฮท_k/2โ
โRestart_t^k = True.
Hence, the slower detector receives the restart information from the faster ones to prevent missing change points.
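The following sketch illustrates this voting rule (illustrative; restart_log is a hypothetical record of the time steps at which each neighbor's detector fired for the arm in question).

```python
# Illustrative majority-voting restart decision with a restart-memory window d.
from math import ceil

def coop_restart(restart_log, neighbors, t, d, eta_k):
    """Agent k restarts if at least ceil(eta_k/2) neighbors detected a change
    on this arm within the last d time steps."""
    votes = sum(1 for j in neighbors
                if any(t - d <= s <= t for s in restart_log[j]))
    return votes >= ceil(eta_k / 2)

# Example: two of three neighbors fired recently, so the agent restarts.
print(coop_restart({1: {980}, 2: {995}, 3: set()}, neighbors=[1, 2, 3],
                   t=1000, d=50, eta_k=3))   # True
```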
The length of restart memory time window d depends on the detection delay d = ๐_ฮ, ฮฝ_n-1+d_n-1^k,m,ฮฝ_n of RBOCPD, as shown in Theoremย <ref>. Although each agent maintains a change point detector with its observations, which could be faster or slower, all detectors follow the same principle. We bound the detection delay in Theoremย <ref>; Therefore, for every change point, the maximum detection time difference is bounded. As a result, selecting d based on the delay to include all possible correct detections improves the regret bound.
ยง PERFORMANCE ANALYSIS
In this section, we analyse the T-step regret of our proposed algorithm RBO-Coop-UCB.
Define d_n^k,m = โM/p๐_ฮ,(ฮฝ_n-1+d_n-1^k,m),ฮฝ_n+ M/pโ, where ๐_ฮ,r,ฮฝ_n = min{d โโ^*: d > n_r:ฮฝ_n-1(f_r,ฮฝ_n,d+ฮฝ_n-1 - logฮท_r,ฮฝ_n,d+ฮฝ_n-1) /2n_r:ฮฝ_n-1(ฮ-๐_r,ฮฝ_n,d+ฮฝ_n-1,ฮด)^2+ logฮท_r,ฮฝ_n,d+ฮฝ_n-1 - f_r,ฮฝ_n,d+ฮฝ_n-1} and ๐_r,ฮฝ_n,d+ฮฝ_n-1,ฮด = โ(2)/2 (โ(n_r:ฮฝ_n-1+1/n_r:ฮฝ_n-1^2log(2โ(n_r:ฮฝ_n)/ฮด)) + โ(n_ฮฝ_n:ฮฝ_n+d-1+1/n_ฮฝ_n:ฮฝ_n+d-1^2log(2n_r:d+ฮฝ_n-1โ(n_ฮฝ_n:ฮฝ_n+d-1)log^2(n_r:d+ฮฝ_n-1)/log 2 ฮด))). Then we assume that for all n โ{1,โฆ,N}, k โ๐ฆ, m โโณ, ฮฝ_n - ฮฝ_n-1โฅ 2max (d_n^k,m,d_n-1^k,m).
Assumptionย <ref> is a standard assumption in non-stationary multi-armed bandit literature <cit.>. It guarantees that the length between two change points is sufficient to detect the distribution change with high probability. Besides, the detection delay ๐_ฮ,r,ฮฝ_n is asymptotically order optimal. The asymptotic regime is reached when n_r:ฮฝ_n-1/log (1/ฮด)โโ, and log n_r:ฮฝ_n-1 = o(log1/ฮด), when ฮดโ 0, we obtain that <cit.>
๐_ฮ,r,ฮฝ_n-logฮท_r,ฮฝ_n, d+ฮฝ_n-1+ o(log1/ฮด)/2 |ฮผ_n - ฮผ_n-1|^2.
If we choose the hyperparameter ฮท_r,s,tโ1/n_r:t in RBOCPD and plug it into the asymptotic expression of detection delay (<ref>), we get
๐_|ฮผ_n - ฮผ_n-1|,r,ฮฝ_no(log1/ฮด)/2 |ฮผ_n - ฮผ_n-1|^2.
Consider a stationary environment, i.e., when N = 1, ν_0 = 0, and ν_1 = T. Then the upper bound of the expected cumulative regret of RBO-Coop-UCB for each agent follows as
โ_T^k โคฮ_1^*[Mฯ T + pT + Mโ8 ฮพlog T/(ฮ_1^min)^2โ + M(1+ฯ^2/3)],
where ฮ_1^* is the maximum gap between the mean rewards of arms, ฮ_1^min is the minimum gap between the mean rewards of arms, and ฯ < ฮด is the false alarm rate under cooperation.
See Appendixย <ref>.
Lemmaย <ref> shows that the regret of RBO-Coop-UCB incorporates several sources: false alarm, forced exploration, and the classic regret of the UCB algorithm.
Consider the stationary scenario, i.e., N =1, with confidence level ฮด > 0. Define ฯ_1^k,m as the time of detecting the first change point of the m-th base arm. Let ฯ_1^k = min_mโโณฯ_1^k,m, because RBO-Coop-UCB restarts the entire algorithm if a change point is detected on any of the base arms. The false alarm rate under cooperation is <cit.>
P(ฯ_1^k โค T) โคโ_m=1^M P(ฯ_1^k,mโคฯ) โค Mฯ
See Appendixย <ref>.
Define ๐_n^k as the event when all the change points up to the n-th one have been detected successfully by agent k within a small delay. Formally, <cit.>:
๐_n^k = {โ i โค n, ฯ_i^k โ{ฮฝ_i+1, โฆ, ฮฝ_i+d_i^k}},
and
P(๐_n^k) โค P(ฯ_n^k < ฮฝ_n | ๐_n-1^k) + P(ฯ_n^k > ฮฝ_n + d_n^k | ๐_n-1^k)
Then, (a) = P(ฯ_n^k < ฮฝ_n | ๐_n-1^k) โค Mฯ and (b) = P(ฯ_n^k > ฮฝ_n + d_n^k | ๐_n-1^k) โคฮด, where ฯ_n^k is the detection time of the n-th change point.
See Appendixย <ref>.
Running Algorithmย <ref> with Assumptionย <ref>. Define the suboptimality gap in the i-th stationary segment as
ฮ_i^* = ฮผ_t^* - min_m โโณฮผ_t^m, t โ [ฮฝ_i-1,ฮฝ_i],
ฮ_i^min = ฮผ_t^* - max_m โโณ / m*ฮผ_t^m, t โ [ฮฝ_i-1,ฮฝ_i].
then the expected cumulative regret of RBO-Coop-UCB with exploration probability p and confidence level δ satisfies
R_T^k โคโ_i=1^NCฬ_i^k + ฮ^*T(p+2MNฯ + Mฮด),
where Cฬ_i^k = ฮ_i^*[Mโ8 log T/(ฮ_i^min)^2โ + M(1+ฯ^2/3)], and ฯ < ฮด is the maximum false alarm under cooperation.
See Appendixย <ref>.
Let ฮด = 1/T and p = โ(Mlog T/T). Then the regret of RBO-Coop-UCB have the following upper bound:
R_T โค๐ช(KNMlog T + Kโ(MTlog T))
See Appendixย <ref>.
ยง EXPERIMENTS
In this section, we evaluate the performance of our proposed RBO-Coop-UCB algorithm in different non-stationary environments using synthetic datasets and real-world datasets. We compare RBO-Coop-UCB with five baselines from the cutting-edge literature and one variant of RBO-Coop-UCB. More precisely, we use DUCB <cit.> and SW-UCB (Sliding Window UCB) <cit.> as passively adaptive benchmarks for the piecewise-stationary multi-armed bandit, and M-UCB (Monitored-UCB) <cit.> and GLR-UCB <cit.> as actively adaptive ones. For consistency, in addition to the non-cooperative setting (each agent runs the algorithm independently), we implement each of the above piecewise-stationary algorithms also in a cooperative setting (information-sharing). In addition, we compare with UCB <cit.> as a stochastic bandit policy and EXP3 <cit.> as an adversarial one. Finally, to validate the effectiveness of cooperation in change point detection, we implement GLR-Coop-UCB, which has a similar flow to RBO-Coop-UCB: It includes information-sharing and cooperative change point detection decision-making. The only difference between RBO-Coop-UCB and GLR-Coop-UCB is the implemented change point detector.
The hyperparameters in our experiments are as follows.
* UCB: None.
* DUCB: Discount factor ฮณ = 1 - (4B)^-1โ(N/T), here ฮณ = 1- โ(N/T)/4.
* SW-UCB: Sliding window length ฯ = 2Bโ(T log T/N), here ฯ = 2โ(T log T/N).
* M-UCB: ฮด = max_i โ N, m โโณ|ฮผ_i^m - ฮผ_i+1^m|, window size ฯ = 800, b = [ฯlog(2MT^2)/2]^1/2, and ฮณ = 0.05 โ((N-1)(2b+3โ(ฯ))/2T).
* GLR-UCB and GLR-Coop-UCB: ฮด = 10/T, and p = โ(log T/T).
* RBO-UCB and RBO-Coop-UCB: ฮท_r,s,t = 10/T, and p = โ(log T/T).
Table <ref> summarizes the performance of different change point detectors on all datasets. In the following section, we analyze the algorithms' performance in detail. All the experimental results are based on ten independent runs.
ยง.ยง Synthetic Dataset
In this section, we consider the following MAMAB setting with a synthetic dataset:
* The network consists of K = 3 agents and the agents face an identical MAB problem with M = 5 arms.
* There are N = 4 piecewise-stationary Bernoulli segments, where only one base arm changes its distribution between two consecutive piecewise-stationary segments.
Figureย <ref> and Figureย <ref> respectively show the network and the arms' reward distributions.
Figureย <ref> summarizes the average regret of all algorithms.
Based on Figureย <ref>, RBO-Coop-UCB has the lowest regret among all piecewise-stationary algorithms. Compared with the algorithms with cooperation (such as RBO-Coop-UCB, GLR-Coop-UCB, and M-Coop-UCB, shown by solid lines), most algorithms without cooperation (such as RBO-UCB, GLR-UCB, and M-UCB, depicted by dashed lines) suffer from higher regrets. An exception is SW-UCB, whose regret is lower than that of SW-Coop-UCB. Our proposed cooperative framework is originally designed for actively adaptive algorithms (i.e., RBO-Coop-UCB, GLR-Coop-UCB, and M-Coop-UCB) and the cooperative restart decision-making does not exist in the passively adaptive algorithms (D-Coop-UCB, SW-Coop-UCB), that could be a reason for the higher regret in SW-Coop-UCB. In general, most algorithms have a lower regret than UCB and EXP3, which means that for optimal decision-making, it is essential to take into account the environment dynamics. Note that RBO-Coop-UCB performs better than GLR-Coop-UCB, whereas RBO-UCB has a higher regret than GLR-UCB. We investigate this effect in the following in detail.
Figureย <ref> shows the histogram of time steps that different algorithms have identified as a change point. In Figureย <ref>, there are three peaks (t = 2500, 5000, 7000). Especially, for t=2500 and t = 7500, the times to be regarded as change point reaches almost 30 (three agents ten times). That means, with a high probability, RBO-Coop-UCB, and RBO-UCB detect the changes. Compared with RBO-Coop-UCB, RBO-UCB has more false alarms. Figureย <ref> demonstrates the change point detection of GLR-Coop-UCB and GLR-UCB. Compared with the algorithms with RBOCPD, those with GLRCPD benefit from fewer false alarms; however, they ignore the second change point t = 5000. Figureย <ref> shows that RBO-Coop-UCB has a better performance than GLR-Coop-UCB in change point detection: It provides a higher correct detection rate for the first and third change points and has a smaller time step delay. Besides, RBO-Coop detects the GLR โundetectable" change point (second change point), which indicates its wider detectable range. In Figureย <ref>, RBO has a higher correct detection rate than GLR; nevertheless, the false alarm rate of RBO is also very high, which is not negligible. The statistics shown in Tableย <ref> also lead to the same conclusion. Frequent false alarm increases regret in RBO-UCB. In general, RBO-Coop and GLR-Coop have better detection than RBO and GLR, especially RBO-Coop, which proves the designed cooperation mechanism is effective and particularly suitable for the UCB algorithm incorporating RBOCPD. These are consistent with the conclusion drawn based on Figureย <ref>.
To clarify the cooperation mechanism in change point detection, we show the detection record and the final decisions in Figureย <ref>. The figure illustrates the detection of each individual agent together with the final decision based on the neighborhood majority voting for arm 2. In that arm, the first change point occurs at t=5000. First, agent 2 detects that change, and then agent 1. Given the majority voting, agent 1 and 2 will restart their MAB policies after agent 1 detects the change. After several time steps, when agent 3 detects the change point, it has the restart history from its neighbor agent 2. Thus, it also restarts immediately at this time step. The same decision process follows by the second change at t = 7500. Agent 2 suffers from several false alarms between the two points, which, however, the cooperation mechanism filters out. Thus, in the final decision, this period does not include any restart. In conclusion, the developed cooperation mechanism significantly reduces the false alarm rate and improves the chances of correct change detection.
ยง.ยง Real World Dataset
In this section, we evaluate our proposed algorithm on two different real-world datasets: the Yahoo dataset[Yahoo! Front Page Today Module User Click Log Dataset on <https://webscope.sandbox.yahoo.com>] and a digital marketing dataset, which are the same real-world datasets as in <cit.>. More specifically, with the Yahoo dataset we verify the effectiveness for piecewise-stationary Bernoulli distributions, which matches our original design, while with the digital marketing dataset we verify a more general situation.
ยง.ยง.ยง Yahoo! Dataset
We evaluate our proposed algorithm based on a real-world dataset from Yahoo, which contains the user click log for news articles displayed on the Yahoo! Front Page <cit.>. To prepare the dataset, we follow the lines of <cit.>. Therein, the authors preprocess the Yahoo! dataset by manipulating it. They then demonstrate the arm distribution information in Figure 3 of that paper. To adapt to our multi-agent setting, we implement the experiment with K=5 agents. We enlarge the click rate of each arm by ten times <cit.>.[This step is essential because in the original setting, the gap between RBOCPD and GLRCPD can be smaller than the minimum detectable value.]ย Figureย <ref> and Figureย <ref> respectively show
the network and the arms' reward distributions.
Figureย <ref> shows the average regret of all algorithms while Figureย <ref> depicts the change point detection signals.
In Figureย <ref>, RBO-Coop-UCB and GLR-Coop-UCB have the lowest regret, while GLR-UCB has a regret lower than RBO-UCB. The reason is explainable using Figureย <ref> and Tableย <ref>. Algorithms with RBOCPD are more sensitive in detection; As such, they can detect change points faster. Besides, our proposed cooperation mechanism reduces false alarms while maintaining other performance indicators, such as correct detections and delays, at a comparable level. RBO-Coop detects more change points than GLR-Coop and guarantees fewer false alarms. In addition, according to the armsโ reward distributions, there are four change points in which the optimal arm remains fixed (t=4800,19200, 33600, 38400); thus, a restart at these steps increases the regret. That observation explains the similar regret performance of RBO-Coop-UCB and GLR-Coop-UCB despite the former detecting more change points. Besides, according to Figureย <ref>, the algorithms with cooperation (solid lines) always have lower regret than the algorithms without cooperation (dashed lines), which proves our designed cooperation mechanism enhances the performance.
ยง.ยง.ยง Digital Marketing Dataset
In <cit.>, the authors preprocess the dataset. They show the corresponding arm reward distribution in Figure 5 of <cit.>. Unlike the Yahoo! dataset, where the arms distribution is piecewise-Bernoulli, here, we only assume that the distribution is bounded within [0,1]. Similar to the previous experiment, we implement the dataset with a network with K = 7 agents as shown in Figureย <ref>. There are 12 piecewise-stationary segments as illustrated in Figureย <ref>.
Figureย <ref> shows the average regret of different policies and Figureย <ref> demonstrates the performance of different change point detectors.
From Figureย <ref>, RBO-Coop-UCB has the lowest regret. Figureย <ref> shows the detection performance. Accordingly, RBO-Coop performs better than GLR-Coop due to more sensitivity and lower delay.
According to the theoretical analysis in Remarkย <ref>, RBOCPD suffers a higher false alarm rate. In Experiment I, the regret of RBO-UCB is higher than GLR because of more frequent false alarms. In Experiment II, given a lower false alarm empirically according to Tableย <ref>, the reason for higher regret of RBO-UCB than GLR-UCB is restarting all of the arms, even if the optimal one remains unchanged. As shown in Figureย <ref> and Tableย <ref>, the false alarm of RBO in Experiment III is comparable to that of GLR, whereas its detection rate is much higher. Therefore, RBO has the second-best performance in this experiment. GLR does not perform at the expected level because it is not as robust as RBO when the gap in arms' rewards between change points is small. Finally, except for SW-UCB, the other algorithms with cooperation have lower regret than the version without, which is the same as the results that appear in Figureย <ref>.
ยง CONCLUSION
We propose a decision-making policy for the multi-agent multi-armed bandit problem in a piecewise-stationary environment. The process involves the seminal UCB framework combined with an information-sharing mechanism and a change point detector based on the Bayesian strategy. We prove an upper bound for the group regret of our proposal. Numerical analysis shows the superior performance of our method compared to the state-of-the-art research. Future research directions include improving the UCB strategy by personalizing it for each agent. For example, it is beneficial to consider heterogeneous agents' preferences. Besides, one might allow for different agent types and influence models, as well as information-sharing under a directed graph model.
ยง.ยง Performance Guarantees of RBOCPD
Assume that X_r:tโผโฌ(ฮผ). Let ฮฑ > 1. If ฮท_r,s,t is small enough such that <cit.>
โ t โ [r,ฮฝ_n), s โ (r,t]:
ฮท_r,s,t < โ(n_r:s-1ร n_s:t)/10(n_r:t+1)ร
(log 2 log^4(ฮฑ) ฮด^2/4n_r:tlog^2(n_r:t) log(ฮฑ n_r:s-1) log(n_r:s-1) log(ฮฑ n_s:t) log(n_s:t))^ฮฑ
then with probability higher than 1-ฮด, no false alarm occurs in the interval [r,ฮฝ_n):
โ_ฮธ{โ t โ [r,ฮฝ_n), Restart_r:t = 1}โคฮด.
Let ฮโ [0,1]. The relative gap ฮ_r,s,t for the forecaster s at time t takes the following form (depending on the position of s) <cit.>:
ฮ_r,s,t = (n_r:ฮฝ_n-1/n_r:s-11{ฮฝ_n โค s โค t} + n_ฮฝ_n:t/n_s:t1{s < ฮฝ_n})ฮ
Let x_r:ฮฝ_n-1โผโฌ(ฮผ_1), x_ฮฝ_n:tโผโฌ(ฮผ_2) and f_r,s,t = log n_r:s+ log n_s:t+1 - 1/2log n_r:t + 9/8. Also, ฮ = |ฮผ_1 - ฮผ_2| is the change point gap. If ฮท_r,s,t is large enough such that <cit.>
ฮท_r,s,t > exp (-2n_r,s-1(ฮ_r,s,t - ๐_r,s,t,ฮด)^2 + f_r,s,t),
Then the change point ฮฝ_n is detected (with a probability at least 1-ฮด) with a delay not exceeding ๐_ฮ,r,ฮฝ_n such that
๐_ฮ,r,ฮฝ_n = min {d โโ^*: d > (1-๐_r,ฮฝ_n,d+ฮฝ_n-1,ฮด/ฮ)^-2/2ฮ^2
ร-logฮท_r,ฮฝ_n,d+ฮฝ_n-1+f_r,ฮฝ_n,d+ฮฝ_n-1/1+logฮท_r,ฮฝ_n,d+ฮฝ_n-1-f_r,ฮฝ_n,d+ฮฝ_n-1/2n_r,ฮฝ_n-1(ฮ-๐_r,ฮฝ_n,d+ฮฝ_n-1,ฮด)^2},
where
๐_r,s,t,ฮด = โ(2)/2(โ(1+1/n_r:s-1/n_r:s-1log(2โ(n_r:s)/ฮด))
+ โ(1+1/n_s:t/n_s:tlog(2n_r:tโ(n_s:t+1)log^2(n_r:t)/log(2)ฮด))) .
A change point ฮฝ_n is (ฮต,d)-detectable (with probability 1-ฮด) with respect to the sequence (ฮฝ_n)_n for the delay function ๐ if <cit.>
๐_ฮ,(ฮฝ_n-1+1+ฮต l_n-1),ฮฝ_n < l_n,
where l_n-1 = ฮฝ_n -ฮฝ_n-1. That is, using only a fraction 1-ฮต of the l_n-1-many observations available between the previous change point and ฮฝ_n + 1, the change can be detected at a time not exceeding the next change point.
Compared to other change point detection methods used in MAB algorithms, RBOCPD has the same advantages as GLRCPD, such as few hyperparameters (in RBOCPD, only one hyperparameter ฮท_r,s,t). Besides, it is robust to the lack of prior knowledge. Based on the theoretical analysis in Appendix E, F of <cit.> and our work, RBOCPD has lower computational complexity and higher robustness against small gap cases <cit.> while slightly higher false alarm rate than GLRCPD given the same hyperparameter (In GLRCPD the false alarm rate ฮด can be directly selected as hyperparameter while according to Lemmaย <ref>, in RBOCPD the false alarm rate ฮดโผ o(ฮท_r,s,t^1/2ฮฑ)).
ยง.ยง Proof of Lemmaย <ref>
First, we investigate the improvement in the false alarm as a result of cooperation in making the restart decision in each agent's neighborhood, as the following Lemmaย <ref> states.
Define ฯ as the cooperative false alarm rate where each agent implements a change point detector. The false alarm rate in a information-sharing mechanism can be concluded as
P(โ t โ [r,ฯ_c): โ_j โ๐ฉ_k1{โ i โ [N_t-d^j,m, N_t^j,m], r_i^j,m > 0}โฅโฮท_k/2โ)
= โ_j=0^โฮท_k/2โฮท_kjฮด^ฮท_k - j(1-ฮด)^j = ฯ,
where ฮด is the false alarm rate of every single agent, and ฮท_k is the number of neighbors of agent k (degrees of node k in graph ๐ข).
For each agent, the false alarm rate is ฮด. Therefore, when cooperating, the probability that there are j agents having false alarms at the same time is ฮท_kjฮด^j(1-ฮด)^ฮท_k - j. Based on the information-sharing mechanism, the algorithm has a false alarm only when more than half of the neighbor agents report a false alarm. Therefore, considering all possible scenarios, the false alarm rate yields ฯ = โ_j=โฮท_k/2โ^ฮท_kฮท_kjฮด^j(1-ฮด)^ฮท_k - j=โ_j=0^โฮท_k/2โฮท_kjฮด^ฮท_k - j(1-ฮด)^j. Besides, a ฮด << 1 satisfies โ_j=0^โฮท_k/2โฮท_kjฮด^ฮท_k - j-1(1-ฮด)^j โค 1, ฯ < ฮด, which implies that the cooperation in change point detectors reduces the false alarm rate. Different number of neighbors ฮท_k implies different value of ฯ and the higher ฮท_k is, the smaller ฯ is.
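A quick numerical check of this expression (illustrative) confirms that φ falls well below δ once an agent has three or more neighbors.

```python
# Illustrative check of the cooperative false alarm rate phi.
from math import comb

def coop_false_alarm(delta, eta_k):
    """phi = sum_{j=0}^{floor(eta_k/2)} C(eta_k, j) delta^(eta_k - j) (1 - delta)^j."""
    return sum(comb(eta_k, j) * delta ** (eta_k - j) * (1 - delta) ** j
               for j in range(eta_k // 2 + 1))

delta = 1e-3
for eta_k in (3, 5, 7):
    print(eta_k, coop_false_alarm(delta, eta_k))   # phi << delta, shrinking with eta_k
```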
Now we are in the position to prove Lemmaย <ref>.
The expected regret can be written as
โ_T^k โคฮ_1^* โ_m โโณN_T^k,m,
where ฮ_1^* is the expected largest reward difference. Besides, โ_mโโณ N_T^k,m = โ_mโโณ N_T^k,m1{ฯ_1 โค T} + โ_mโโณ N_T^k,m1{ฯ_1 > T} and we have,
โ_mโโณ N_T^k,m1{ฯ_1 > T} โค p T + โ_mโโณโ_t=1^T 1{I_t^k โ i^*_t, N_T^k,m < l}
+ โ_mโโณโ_t=1^T 1{I_t^k โ i^*_t, N_T^k,m > l},
where pT refers to the regret of the forced exploration. Considering the worst situation, where all l steps are not optimal, โ_mโโณโ_t=1^T 1{I_t^k โ i^*_t, N_T^k,m > l}โค Ml. Besides, {I_t^k โ i^*_t, N_T^k,m > l}โ{Xฬ
_t^k,I_t^kโฅฮผ_t^k,I_t^k + C_t^k,I_t^k}โช{Xฬ
_t^k,i^*_tโฅฮผ_t^k,i^*_t - C_t^k,i^*_t}โช{ฮผ_t^k,i^*_t-ฮผ_t^k,I_t^kโค 2C_t^k,I_t^k, N_T^k,m > l } <cit.>. Denote ฮ_1^min as the expected lowest reward difference, then for l = โ8 ฮพlog T/ (ฮ_1^min)^2โ, {ฮผ_t^k,i^*_t -ฮผ_t^k,I_t^kโค 2C_t^k,I_t^k, N_T^k,m > l } = โ
, because ฮผ_t^k,i^*_t -ฮผ_t^k,I_t^k - 2C_t^k,I_t^k = ฮผ_i^*_t,t -ฮผ_I_t,t - 2โ(ฮพ (ฮฑ+1)log t/N_t^k,I_t^k)โฅฮ_1^min - 2โ(ฮพ (ฮฑ+1)log t/8 ฮพlog T/ (ฮ_1^min)^2)โฅฮ_1^min (1 - โ((ฮฑ+1)log t/2log T)) โฅ 0, โ I_t^k โโณ,.
[Chernoff-Hoeffding bound]
Let X_1,X_2, โฆ, X_n be random variables with common range [0,1] and such that ๐ผ[X_t|X_1, โฆ, X_t-1] = ฮผ. Let S_n = X_1+X_2+ โฆ+X_n, then for all a โฅ 0,
P(S_n โฅ nฮผ+a) โค e^-2a^2/n and P(S_n โค nฮผ-a) โค e^-2a^2/n.
According to Factย <ref>, P(Xฬ
_t^k,I_t^kโฅฮผ_t^k,I_t + C_t^k,I_t^k) โค e^-2ฮพ(ฮฑ+1)log t = t^-2ฮพ(ฮฑ+1), and P(Xฬ
_t^k,i^*_tโฅฮผ_t^k,i^*_t - C_t^k,i^*_t) โค t^-2ฮพ(ฮฑ+1). Choose ฮพ > 2/ฮฑ+1, t^-2ฮพ(ฮฑ+1)โค t^-4. Therefore,
โ_mโโณ N_T^k,m โค TP(ฯ_1 โค T) + pT + M(โ8 ฮพlog T/(ฮ_1^min)^2โ + 1+ฯ^2/3),
โค Mฯ T + pT + M(โ8 ฮพlog T/(ฮ_1^min)^2โ + 1+ฯ^2/3),
and the expected regret follows as
โ_T^k โคฮ_1^*[Mฯ T + pT + Mโ8 ฮพlog T/(ฮ_1^min)^2โ + M(1+ฯ^2/3)],
= ฮ_1^*Mฯ T + ฮ_1^*pT + Cฬ_1^k,
where Cฬ_1^k = ฮ_1^*[Mโ8 ฮพlog T/(ฮ_1^min)^2โ + M(1+ฯ^2/3)]. The first term is due to the false alarm probability in RBOCPD, the second term is because of the uniform exploration, and the last term is caused by UCB exploration.
ยง.ยง Proof of Lemmaย <ref>
In the stationary scenario, based on the theoretical false alarm rate of RBOCPD stated by Theoremย <ref> and Lemmaย <ref>, P(ฯ_1^k,mโคฯ) โคฯ for every k โ๐ฆ and every m โโณ. Therefore, P(ฯ_1^k โค T) โคโ_m=1^M P(ฯ_1^k,mโคฯ) โค Mฯ.
ยง.ยง Proof of Lemmaย <ref>
(a) is from Lemmaย <ref>.
According to Factย <ref> and Definitionย <ref>, between change points ฮฝ_n and ฮฝ_n-1, each arm has been selected at least 2 p/Mร (M/p๐_ฮ,(ฮฝ_n-1+d_n-1^k,m),ฮฝ_n+ M/p) = 2๐_ฮ,(ฮฝ_n-1+d_n-1^k,m),ฮฝ_n + 2 times. Given good event ๐_n-1^k, we have ฯ_n-1โคฮฝ_n-1 + d_n-1^k. Besides, ๐_ฮ,(ฮฝ_n-1+d_n-1^k,m),ฮฝ_n is detectable with probability 1 - ฮด based on Definitionย <ref> within length ฮฝ_n - ฯ_n-1 because ฮฝ_n - ฯ_n-1โฅฮฝ_n - (ฮฝ_n-1+d_n-1) โฅ 2 max(d_n,d_n-1) - d_n-1โฅmax(d_n,d_n-1), where the first inequality results from the good event ๐_n-1^k. Therefore, one can conclude that P(ฯ_n^k โคฮฝ_n + d_n^k | ๐_(n-1)^k) โฅ 1 - ฮด, which is equivalent to P(ฯ_n^k > ฮฝ_n + d_n^k | ๐_(n-1)^k) โคฮด. That completes the proof.
ยง.ยง Proof of Theoremย <ref>
The regret in the piecewise-stationary environment is a collection of regrets of good events and bad events. The regret of good events comes from UCB exploration, whereas that of bad events results from false alarms and long detection delays. The derivation of the regret upper bound is similar to Lemma <ref>, as the regret of bad events can be bounded using the theoretical guarantee of the RBOCPD described in Appendix <ref>. Before proceeding to the detailed proof, we state the following fact:
For every pair of instants s ≤ t ∈ ℕ^* between two restarts on arm m, it holds that n_t^k,m - n_s^k,m ≥ 2⌊p/M (t-s)⌋.
Define the good event ℱ_n^k = {τ_n > ν_n} and the good event 𝒯_n^k = {τ_n^k < ν_n+d_n^k}. Recall the definition of the good event 𝒞_n^k in (<ref>). We note that 𝒞_n^k = ℱ_1^k ∩ 𝒯_1^k ∩ … ∩ ℱ_n^k ∩ 𝒯_n^k is the intersection of the event sequences ℱ_n^k and 𝒯_n^k up to the n-th change point. We decompose the expected cumulative regret with respect to the event ℱ_1^k. That yields
ℛ_T^k = 𝔼[R_T^k] = 𝔼[R_T^k 𝟙(ℱ_1^k)] + 𝔼[R_T^k 𝟙(ℱ̄_1^k)]
≤ 𝔼[R_T^k 𝟙(ℱ_1^k)] + TΔ_opt^max P(ℱ̄_1^k)
≤ 𝔼[R_ν_1^k 𝟙(ℱ_1^k)] + 𝔼[R_T-ν_1^k] + TΔ_opt^max Mσ
≤ C̃_1^k + Δ_1^* p ν_1 + TΔ_opt^max Mσ + 𝔼[R_T-ν_1^k],
where C̃_1^k = Δ_1^*[M⌈8ξlog T/(Δ_1^min)^2⌉ + M(1+π^2/3)], which is concluded from Lemma <ref>. By the law of total expectation,
𝔼[R_T-ν_1^k] ≤ 𝔼[R_T-ν_1^k|ℱ_1^k ∩ 𝒯_1^k] + TΔ_opt^max(1 - P(ℱ_1^k ∩ 𝒯_1^k))
≤ 𝔼[R_T-ν_1^k|ℱ_1^k ∩ 𝒯_1^k] + TΔ_opt^max(Mσ + δ).
Inequality (<ref>) results from Lemma <ref>. Furthermore,
𝔼[R_T-ν_1^k|ℱ_1^k ∩ 𝒯_1^k] = 𝔼[R_T-ν_1^k|𝒞_1^k].
By combining the previous steps, we have
ℛ_T^k ≤ 𝔼[R_T-ν_1^k|𝒞_1^k] + C̃_1^k + Δ_1^* p ν_1 + TΔ_opt^max(2Mσ + δ).
Similarly,
𝔼[R_T-ν_1^k|𝒞_1^k] ≤ 𝔼[R_T-ν_1^k 𝟙(ℱ_2^k)|𝒞_1^k] + TΔ_opt^max P(ℱ̄_2^k|𝒞_1^k)
≤ 𝔼[R_ν_2-ν_1^k 𝟙(ℱ_2^k)|𝒞_1^k] + 𝔼[R_T-ν_2^k|𝒞_1^k] + TΔ_opt^max Mσ
≤ C̃_2^k + Δ_2^* p(ν_2-ν_1) + 𝔼[R_T-ν_2^k|𝒞_1^k] + TΔ_opt^max Mσ.
Furthermore, we have
𝔼[R_T-ν_2^k|𝒞_1^k]
≤ 𝔼[R_T-ν_2^k|ℱ_2^k ∩ 𝒯_2^k ∩ 𝒞_1^k] + TΔ_opt^max(1-P(ℱ_2^k ∩ 𝒯_2^k|𝒞_1^k))
≤ 𝔼[R_T-ν_2^k|𝒞_2^k] + TΔ_opt^max(Mσ + δ).
Wrapping up the previous steps, we arrive at
ℛ_T^k ≤ 𝔼[R_T-ν_2^k|𝒞_2^k] + C̃_1^k + C̃_2^k + Δ_1^* p ν_1 + Δ_2^* p(ν_2-ν_1)
+ TΔ_opt^max(4Mσ + 2δ).
Recursively, we can bound 𝔼[R_T-ν_2^k|𝒞_2^k] by applying the same method as before. The regret upper bound then yields
ℛ_T^k ≤ ∑_n=1^N C̃_n^k + Δ_opt^max T(p + 2MNσ + Mδ),
where C̃_n^k = Δ_n^*[M⌈8ξlog T/(Δ_n^min)^2⌉ + M(1+π^2/3)].
§.§ Proof of Corollary <ref>
Based on Theorem <ref>, the regret is upper bounded by ℛ_T^k ≤ ∑_n=1^N C̃_n^k + Δ_opt^max T(p+2MNσ + Mδ). Let δ = 1/T and p = √(Mlog T/T); then
R_T ≤ ∑_k=1^K[∑_n=1^N C̃_n^k + Δ_opt^max T(p+2MNσ + Mδ)]
≤ KN Δ_n^*[M⌈8ξlog T/(Δ_n^min)^2⌉ + M(1+π^2/3)]
+ KΔ_opt^max[√(MTlog T) + (2MN+M)].
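As a rough illustration of how this final bound scales with the horizon (not taken from the analysis above; every constant below is a placeholder value), one can tabulate its two leading contributions and verify that they grow sublinearly in T:

```python
import numpy as np

def corollary_bound(T, K, M, N, xi, delta_min, delta_star):
    """Illustrative evaluation of the final regret bound with delta = 1/T
    and p = sqrt(M log T / T); all constants are placeholders."""
    log_term = K * N * delta_star * (M * np.ceil(8 * xi * np.log(T) / delta_min**2)
                                     + M * (1 + np.pi**2 / 3))
    sqrt_term = K * delta_star * (np.sqrt(M * T * np.log(T)) + (2 * M * N + M))
    return log_term + sqrt_term

for T in [10**4, 10**5, 10**6]:
    bound = corollary_bound(T, K=3, M=5, N=4, xi=1.0, delta_min=0.1, delta_star=0.3)
    print(f"T={T:>8d}  bound={bound:,.0f}  bound/T={bound/T:.3f}")
```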
|
http://arxiv.org/abs/2306.05458v2
|
20230608180002
|
The BHL-BCL crossover: from nonlinear to linear quantum amplification
|
[
"Juan Ramรณn Muรฑoz de Nova",
"Fernando Sols"
] |
cond-mat.quant-gas
|
[
"cond-mat.quant-gas",
"gr-qc",
"nlin.PS",
"quant-ph"
] |
[email protected]
Departamento de Física de Materiales, Universidad Complutense de
Madrid, E-28040 Madrid, Spain
Departamento de Física de Materiales, Universidad Complutense de
Madrid, E-28040 Madrid, Spain
The black-hole laser (BHL) effect is the self-amplification of Hawking radiation in the presence of a pair of horizons which act as a resonant cavity. In a flowing atomic condensate, the BHL effect arises in a finite supersonic region, where Bogoliubov-Cherenkov-Landau (BCL) radiation is resonantly excited by any static perturbation. Thus, experimental attempts to produce a BHL unavoidably deal with the presence of a strong BCL background, making the observation of the BHL effect still a major challenge in the analogue gravity field. Here, we perform a theoretical study of the BHL-BCL crossover using an idealized model where both phenomena can be unambiguously isolated. By drawing an analogy with an unstable pendulum, we distinguish three main regimes according to the interplay between quantum fluctuations and classical stimulation: quantum BHL, classical BHL, and BCL. Based on quite general scaling arguments, the nonlinear amplification of quantum fluctuations until saturation is identified as the most robust trait of a quantum BHL. A classical BHL behaves instead as a linear quantum amplifier, where the output is proportional to the input. The BCL regime also acts as a linear quantum amplifier, but its gain is exponentially smaller as compared to a classical BHL. In addition, we find that a nonmonotonic dependence of the growth rate with respect to the background parameters is another signature of black-hole lasing. We also identify interesting analogue phenomena such as Hawking-stimulated white-hole radiation or quantum BCL-stimulated Hawking radiation. The results of this work not only are of interest for analogue gravity, where they help to distinguish each phenomenon and to design experimental schemes for a clear observation of the BHL effect, but they also open the prospect of finding applications of analogue concepts in quantum technologies.
The BHL-BCL crossover: from nonlinear to linear quantum amplification
Fernando Sols
July 31, 2023
=====================================================================
§ INTRODUCTION
As noted by Unruh <cit.>, the equations of motion governing the perturbations around an ideal potential flow are formally analogue to those of a massless scalar field in a curved spacetime described by the so-called acoustic metric. This discovery gave rise to the field of analogue gravity, in which inaccessible gravitational phenomena can be modeled using tabletop experiments such as atomic Bose-Einstein condensates <cit.>, water waves <cit.>, nonlinear optical fibers <cit.>, ion rings <cit.>, quantum fluids of light <cit.>, or even superconducting transmon qubits <cit.>. As a result, analogues of the dynamical Casimir effect <cit.>, Sakharov oscillations <cit.>, superradiance <cit.>, inflation <cit.>, Hawking radiation <cit.>, Unruh effect <cit.>, quasinormal ringdown <cit.>, backreaction <cit.> or cosmological particle creation <cit.> have been observed in the laboratory. Quantum field simulators of curved spacetimes are already available in the laboratory <cit.>.
In this context, another remarkable phenomenon is the black-hole laser (BHL) effect <cit.>, i.e., the self-amplification of Hawking radiation due to successive reflections between a pair of horizons, leading to the emergence of dynamical instabilities in the excitation spectrum. A BHL requires a superluminal dispersion relation, as that of an atomic condensate <cit.>, so the radiation reflected at the inner horizon can travel back to the outer one. Other analogue setups have been also proposed to observe the BHL effect <cit.>.
In a condensate, a BHL configuration arises in a finite supersonic region. The Landau criterion establishes that a supersonic flow is energetically unstable, and any static perturbation will resonantly produce Bogoliubov-Cherenkov-Landau (BCL) radiation <cit.>, which is the analogue of the undulation in hydraulic setups <cit.>. Furthermore, the BHL modes are expected to contain similar wavevectors and frequencies to those of the BCL wave, as the latter is also stimulated by the scattering of Hawking radiation at the inner horizon. Hence, experimental attempts to isolate the BHL effect will be hindered by a background BCL signal. Indeed, the first reported observation of the BHL effect in 2014 <cit.> was later explained in terms of experimental BCL fluctuations <cit.>. A mechanism of BCL-stimulated Hawking radiation was identified in 2021 <cit.>, whose role in the 2014 experiment was recently confirmed <cit.>. Hence, the observation of the BHL effect still remains a major challenge in the analogue field.
So far, the literature has mostly addressed the BHL-BCL problem by directly simulating realistic setups resembling the actual experiments <cit.>. In this work, we adopt an alternative approach and present a detailed study of the main features of the BHL and BCL mechanisms using a simple yet highly idealized model, namely the flat-profile model <cit.>. There, the coupling constant and external potential are perfectly matched so that the background condensate flow is homogeneous while the speed of sound is tunable at will. Albeit quite unfeasible from an experimental point of view, the theoretical simplicity of this model has allowed to clearly identify key features of several phenomena such as Hawking correlations, resonant Hawking radiation, or black-hole lasers <cit.>, helping in this way to understand the underlying physics in more realistic scenarios of higher complexity <cit.>.
Within this simplified model, we report an intensive campaign of numerical simulations using the Truncated Wigner approximation <cit.> to compute the dynamics of the quantum fluctuations. In particular, we focus on evaluating density-based observables, especially the density-density correlation function <cit.>, the main tool in actual experiments <cit.>
According to the interplay between quantum fluctuations and Cherenkov stimulation, and drawing an analogy with an unstable pendulum since a BHL behaves as an unstable harmonic oscillator <cit.>, we identify three main regimes: quantum BHL, classical BHL, and BCL. In a quantum BHL, the dynamics is fully driven by quantum fluctuations of the unstable lasing modes. In a classical BHL, the lasing instability is given a well-defined classical amplitude, excited by the background Cherenkov wave. In the BCL regime, the Cherenkov stimulation is large enough to completely dominate the dynamics, overshadowing the BHL effect.
We find that the most characteristic signature of a quantum BHL is that it behaves as a nonlinear quantum amplifier, increasing the initial quantum fluctuations until saturation. A classical BHL corresponds instead to a linear quantum amplifier. The BCL regime also provides linear quantum amplification, but its gain is exponentially smaller as compared to a classical BHL because there is no microscopic mechanism of amplification. Our analysis relies on general scaling arguments that are quite independent of the details of the model.
Complementarily, we propose the strongly nonmonotonic dependence of the lasing growth rate with the background mean-field parameters (e.g., the cavity length or the flow speed) as another criterion to distinguish classical BHL from BCL stimulation, since the latter is expected to depend smoothly on the properties of the background flow.
Remarkably, in the process of analyzing the BHL-BCL correlation patterns, we also identify interesting phenomena such as Hawking-stimulated white-hole <cit.> (HSWH) radiation at the beginning of the black-hole lasing process, and BCL-stimulated Hawking radiation <cit.> of a quantum origin, which can be understood as spontaneous resonant Hawking radiation <cit.> above a nonlinear BCL undulation that acts as a resonator.
Apart from their intrinsic interest for the analogue field, where they help to isolate the distinctive signatures of each phenomenon and design experimental setups leading to a clear demonstration of the BHL effect, the results of this work may also have a potential impact on atomtronics <cit.> and on the more general field of quantum technologies.
The article is arranged as follows. Section <ref> introduces the model considered in this work while Sec. <ref> discusses the main tools used for the analysis. The numerical results are presented in Sec. <ref>. A global discussion on the physical significance of the findings of the paper is presented in Sec. <ref>. Conclusions and future perspectives are drawn in Sec. <ref>.
§ THE MODEL
In order to study the BHL-BCL crossover, we use the flat-profile model as a testing ground <cit.>, where both phenomena can be controlled and isolated. For times t<0, we consider a stationary one-dimensional homogeneous quasicondensate <cit.> flowing from left to right, described by a Gross-Pitaevskii (GP) wave function Ψ_0(x)=√(n_0)e^iqx. The corresponding sound and flow speeds are c_0=√(gn_0/m) and v=ℏq/m, with g the coupling constant and m the mass of the atoms. Hereafter, we set ℏ=m=c_0=1 and rescale the GP wave function as Ψ(x,t) → √(n_0)Ψ(x,t) so that it becomes dimensionless. The interested reader may consult Refs. <cit.> and Refs. <cit.> for detailed discussions that follow the notation of this work on both Hawking radiation and black-hole lasers, respectively.
Quantum fluctuations of the condensate are described by the Bogoliubov-de Gennes (BdG) equations. For a homogeneous condensate, the BdG modes are given in terms of plane waves with wavevector k, and the corresponding dispersion relation is determined by the flow and sound speeds v,c as
ω=vk±Ω_k,  Ω_k=√(c^2k^2+k^4/4),
with Ω_k the usual Bogoliubov dispersion relation for a condensate at rest and vk the Doppler shift. The dispersion relation presents two qualitatively different regimes, subsonic (v<c) or supersonic (v>c), as shown in the upper row of Fig. <ref>. In the subsonic case, Fig. <ref>a, for a given frequency ω>0 there are two modes with real wavevector (the other two are exponentially growing/decaying solutions), labeled as b±, where the ± indicates whether they propagate along/against the condensate flow, respectively. In the supersonic case, Fig. <ref>b, for any positive frequency below the cutoff ω_max all wavevectors are purely real. The p1± modes arise from the + branch in Eq. (<ref>), while the p2± modes arise from the - branch and are the conjugates of the negative energy modes in the + branch. The presence of these negative energy modes is a characteristic feature of a supersonic flow, revealing its energetic instability. Specifically, the BCL mode has a finite wavevector k_BCL=2√(v^2-c^2) at zero frequency, as computed from the Landau criterion vk_BCL=Ω_k_BCL. As a result, any static perturbation in the flow will resonantly excite BCL radiation in the condensate <cit.>. This is because the BdG equations describe at the same time both linear collective motion, coherently imprinted on the condensate and accounted for by linear perturbations δΨ of the GP wave function, and quantum quasiparticle excitations, accounted for by the quantum fluctuations of the field operator δΨ̂ around the mean-field condensate. Nevertheless, we recall that energetic instability (the presence of modes with negative energy) is only a necessary condition for dynamical instability (the presence of modes with complex frequency and growing amplitude), but not a sufficient one.
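As an illustrative numerical companion to this discussion (not part of the original text), the sketch below evaluates the two Bogoliubov branches in the dimensionless units used here (ℏ=m=c_0=1) and recovers k_BCL from the Landau criterion; the parameter values match the supersonic region used later in the paper.

```python
import numpy as np

def bogoliubov_branches(k, v, c):
    """Doppler-shifted Bogoliubov dispersion in units hbar = m = 1."""
    omega_k = np.sqrt(c**2 * k**2 + k**4 / 4.0)
    return v * k + omega_k, v * k - omega_k

v, c2 = 0.6, 0.2                      # flow and sound speed of the supersonic region
k = np.linspace(-3.0, 3.0, 7)
plus_branch, minus_branch = bogoliubov_branches(k, v, c2)

# Landau criterion v*k = Omega_k gives the zero-frequency BCL wavevector.
k_bcl = 2.0 * np.sqrt(v**2 - c2**2)
print("k_BCL =", k_bcl)               # ~1.131 for v = 0.6, c2 = 0.2
print("residual of the Landau criterion:",
      v * k_bcl - np.sqrt(c2**2 * k_bcl**2 + k_bcl**4 / 4.0))   # ~0
```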
In our configuration, we choose the initial condensate to be subsonic, v<1. Now, at t=0, we quench inhomogeneously both the external potential W(x) and the coupling constant g(x) so that
g(x)+W(x)=E_b,
with E_b some constant energy that can be subtracted from the Hamiltonian. In this way, ฮจ_0(x)=e^ivx remains a stationary solution of the GP equation. However, the BdG modes do experience nontrivial dynamics as the sound speed is now c(x)=โ(g(x)). Specifically, we choose g(x) as a piecewise homogeneous function so that the condensate remains unchanged for x<0 and becomes supersonic for x>0, with a sound speed c_2<v. Hence, we reach a black-hole configuration with the event horizon placed at x=0. This process mimics the formation of a black hole and the subsequent production of Hawking radiation in actual experiments <cit.>, which is characterized by the spontaneous emission of b-,p2+ modes from the event horizon into the subsonic/supersonic regions, respectively.
At t=t_BCL>0, an additional localized delta potential V(x)=Zฮด(x-L) is switched on at x=L>0. Since the barrier is placed in the supersonic region, it will stimulate the emission of BCL radiation, here of wavevector k_BCL=2โ(v^2-c_2^2), with a certain amplitude A_BCL. This is a simple model of the BCL stimulation that occurs in the experiment, which can be previous to the birth of a second horizon. Finally, at t=t_BHLโฅ t_BCL, we quench back the coupling constant according to Eq. (<ref>) so that the condensate recovers its subsonic character for x>L. Hence, a white-hole horizon forms at x=L, giving rise to a BHL configuration. This process mimics the formation of the inner horizon in the experiment <cit.>.
Quantitatively, a BHL is characterized by a discrete BdG spectrum of dynamical instabilities, computed for the flat-profile case as a function of the cavity length L in Fig. <ref>c (see Refs. <cit.> for the technical details). The critical lengths at which a new dynamical instability emerges (vertical solid lines) can be derived analytically <cit.>,
L_n=L_0 + nπ/√(v^2-c_2^2),  L_0=arctan[√((1-v^2)/(v^2-c_2^2))]/√(v^2-c_2^2),  n=0,1,…
All these modes are born as degenerate unstable modes, i.e., modes with purely imaginary frequency, ω_n=-ω_n^*. For half-integer values n+1/2, the above equation yields the lengths L_n+1/2 at which the n-th unstable mode becomes oscillatory, developing a nonvanishing real part of the frequency (vertical dashed lines). The dominant mode is that with the largest growth rate, Γ_n=Im ω_n, and it determines the total growth rate Γ of the lasing instability, Γ=max_n Γ_n (thick solid black envelope in Fig. <ref>c). For short cavities, this is typically the mode with the largest value of n. However, as the cavity becomes longer, the competition between the different unstable modes becomes stronger.
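For concreteness (an illustrative check, not from the original text), evaluating this formula with the reference parameters used later (v=0.6, c_2=0.2) shows that a cavity of length L=20 hosts four unstable modes:

```python
import numpy as np

def critical_lengths(v, c2, n_max=10):
    """Cavity lengths L_n at which the n-th unstable lasing mode appears."""
    w = np.sqrt(v**2 - c2**2)
    L0 = np.arctan(np.sqrt((1.0 - v**2) / (v**2 - c2**2))) / w
    return np.array([L0 + n * np.pi / w for n in range(n_max)])

v, c2, L = 0.6, 0.2, 20.0
Ln = critical_lengths(v, c2)
print("first critical lengths:", np.round(Ln[:5], 2))   # ~[1.69, 7.24, 12.8, 18.35, 23.9]
print("unstable modes for L=20:", np.sum(Ln <= L))       # -> 4
```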
Qualitatively, we can understand these dynamical instabilities as Hawking p2+ modes reflected at the white-hole horizon and scattered into p2- modes that will bounce back towards the black hole, further stimulating the production of Hawking radiation and thus leading to a process of self-amplification <cit.>. This simple argument yields a good estimate of the order of magnitude of the growth rate of the instability, Γ ∼ 1/τ_RT (green dashed line in Fig. <ref>c), with τ_RT the roundtrip time for a zero-frequency p2+ mode to travel back and forth between the horizons. This frequency choice is motivated by observing that the dominant mode has either zero or small real part of the frequency, which also implies that the p2- wavevector involved in the lasing instability will be close to the BCL wavevector. Thus, BCL stimulation and BHL self-amplification share similar short wavelengths and low frequencies, which strongly complicates their clear distinction in real setups <cit.>. Here emerges the importance of the flat-profile configuration: because of the fine-tuning condition (<ref>), the white hole does not further stimulate BCL radiation. Indeed, if the delta potential was not switched on, Z=0, there would not be any Cherenkov stimulation from the white hole, and we would have the flat-profile BHL of Ref. <cit.>. On the other hand, if the white hole was never switched on, t_BHL=∞, we would only have BCL stimulation. Hence, by comparing scenarios with and without a white hole, and with and without Cherenkov stimulation, we can isolate the genuine BHL features from those of BCL.
Compactly, the model for the condensate dynamics developed in this section is encapsulated in the following GP equation:
i∂_tΨ(x,t) = H_GP(x,t)Ψ(x,t),  Ψ(x,0)=Ψ_0(x)=e^ivx,
H_GP(x,t) = -1/2 ∂_x^2+g(x,t)[|Ψ(x,t)|^2-1]+V(x,t),
V(x,t) = Zδ(x-L)θ(t-t_BCL),
g(x,t) = 1+(c_2^2-1)[θ(x)θ(t) - θ(t-t_BHL)θ(x-L)],
with θ the Heaviside function. A schematic summary of the model is presented in Fig. <ref>d.
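A minimal numerical sketch of how one might integrate this GP equation (split-step Fourier method with periodic boundary conditions) is given below. It is illustrative only: the grid parameters are arbitrary, the delta barrier is regularized as a single-grid-point potential of weight Z/dx, the coupling constant follows c(x)=√(g(x)) as in the text, and the flow momentum is rounded to a value commensurate with the box.

```python
import numpy as np

# Grid and model parameters (illustrative values; units hbar = m = c0 = 1)
Nx, Lt = 2048, 400.0
x = np.linspace(-Lt / 2, Lt / 2, Nx, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(Nx, d=dx)

c2, L, Z = 0.2, 20.0, 0.01
t_bcl, t_bhl, dt, t_max = 50.0, 100.0, 0.01, 150.0
v = 2.0 * np.pi * round(0.6 * Lt / (2.0 * np.pi)) / Lt   # commensurate flow speed

def g_of_t(t):
    """Piecewise coupling constant g(x,t): supersonic region after the quenches."""
    g = np.ones_like(x)
    if t > 0.0:
        supersonic = (x > 0.0) if t < t_bhl else ((x > 0.0) & (x < L))
        g[supersonic] = c2**2
    return g

def V_of_t(t):
    """Delta barrier at x = L, regularized on the grid as weight Z/dx."""
    V = np.zeros_like(x)
    if t >= t_bcl:
        V[np.argmin(np.abs(x - L))] = Z / dx
    return V

psi = np.exp(1j * v * x)                   # homogeneous flowing condensate
kinetic = np.exp(-0.5j * dt * k**2)        # full kinetic step in k-space

t = 0.0
while t < t_max:
    U = g_of_t(t) * (np.abs(psi)**2 - 1.0) + V_of_t(t)
    psi *= np.exp(-0.5j * dt * U)          # half potential step
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    U = g_of_t(t) * (np.abs(psi)**2 - 1.0) + V_of_t(t)
    psi *= np.exp(-0.5j * dt * U)          # half potential step
    t += dt

print("mean density after evolution:", np.mean(np.abs(psi)**2))
```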
§ THE TOOLS
We simulate the time evolution of the condensate by numerically integrating the time-dependent GP equation (<ref>). Quantum fluctuations are added via the Truncated Wigner approximation <cit.>, which computes symmetric-ordered expectation values from ensemble averages of integrations of the GP equation using a stochastic initial condition
Ψ_W(x,0)=[1+δΨ_W(x)]e^ivx.
In turn, δΨ_W(x) is given in terms of the BdG modes:
δΨ_W(x)=1/√(N) ∑_k (α_k u_k e^ikx+α^*_k v^*_k e^-ikx),
where N is the total number of particles (we recall that in our units the density n_0 is factorized out from the GP wave function) and u_k,v_k are the usual Bogoliubov components
u_k=(k^2/2+Ω_k)/√(2k^2Ω_k),
v_k=(k^2/2-Ω_k)/√(2k^2Ω_k).
Thus, the quantum fluctuations δΨ̂(x,t) of the field operator Ψ̂(x,t) around the condensate are accounted for here by the initial condition δΨ_W(x), where the amplitudes α_k, α^*_k are stochastic variables that mimic the usual phonon annihilation and creation operators α̂_k, α̂^†_k, [α̂_k,α̂^†_k']=δ_kk'. These classical amplitudes are sampled from the Wigner distribution of the initial equilibrium state, assumed to be the T=0 ground state in the comoving frame of the condensate, which is a Gaussian distribution characterized by the first- and second-order momenta
⟨α_k⟩ = ⟨α̂_k⟩ = 0,  ⟨α_k'α_k⟩ = ⟨α̂_k'α̂_k⟩ = 0,
⟨α^*_k'α_k⟩ = ⟨α̂^†_k'α̂_k+α̂_kα̂^†_k'⟩/2 = δ_kk'/2.
By subtracting the constant factor δ_kk'/2 in the last equation, arising from the commutator between the annihilation and creation operators, one retrieves the more usual normal-ordered expectation value ⟨α̂^†_k'α̂_k⟩. Our initial condition for the quantum state eliminates the vacuum ambiguity arising in the presence of a black hole, and of dynamically unstable modes after the BHL onset <cit.>.
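The sampling step can be made concrete with a short sketch (illustrative only, with arbitrary grid and particle numbers): each α_k is drawn as a complex Gaussian with ⟨|α_k|²⟩ = 1/2, and the Bogoliubov components are those given above.

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Lt, Npart, c = 1024, 400.0, 1.0e7, 1.0
x = np.linspace(-Lt / 2, Lt / 2, Nx, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(Nx, d=x[1] - x[0])
k = k[k != 0.0]                                   # drop the condensate mode k = 0

Omega = np.sqrt(c**2 * k**2 + k**4 / 4.0)
u = (k**2 / 2 + Omega) / np.sqrt(2 * k**2 * Omega)
v_bog = (k**2 / 2 - Omega) / np.sqrt(2 * k**2 * Omega)

# Complex Gaussian amplitudes with <|alpha_k|^2> = 1/2 (vacuum Wigner statistics)
alpha = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) / 2.0

dpsi = np.zeros(Nx, dtype=complex)
for kk, a, uu, vv in zip(k, alpha, u, v_bog):
    dpsi += a * uu * np.exp(1j * kk * x) + np.conj(a) * vv * np.exp(-1j * kk * x)
dpsi /= np.sqrt(Npart)

psi0 = (1.0 + dpsi) * np.exp(1j * 0.6 * x)        # stochastic initial condition
print("typical fluctuation amplitude:", np.std(np.abs(psi0)))
```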
Regarding the observables of interest, we focus on computing the density and its correlations as they are measured in the laboratory through in-situ imaging after averaging over ensembles of repetitions of the experiment <cit.>. Specifically, we will compute the ensemble-averaged density from the diagonal of the first-order correlation function
⟨n̂(x,t)⟩ = G^(1)(x,t) ≡ ⟨Ψ̂^†(x,t)Ψ̂(x,t)⟩.
In order to characterize the quantum fluctuations, we use the relative second-order correlation function
g^(2)(x,x',t)=[⟨Ψ̂^†(x,t)Ψ̂^†(x',t)Ψ̂(x',t)Ψ̂(x,t)⟩ - ⟨Ψ̂^†(x,t)Ψ̂(x,t)⟩⟨Ψ̂^†(x',t)Ψ̂(x',t)⟩]/n^2_0,
where we have momentarily restored units to keep track of the different scalings. Apart from a trivial delta term, the second-order correlation function is directly given by the relative density-density correlations, measurable in experiments,
g^(2)(x,x',t)=[⟨δn̂(x,t)δn̂(x',t)⟩ - ⟨n̂(x,t)⟩δ(x-x')]/n^2_0,
with δn̂(x,t)=n̂(x,t)-⟨n̂(x,t)⟩ the density fluctuations. Thus, for simplicity we will also refer to the second-order correlation function as the density-density correlation function.
By expanding the density operator in terms of δΨ̂(x,t) around the initial homogeneous flowing condensate,
n̂(x,t) = n_0+δn̂^(1)(x,t)+δn̂^(2)(x,t),
δn̂^(1)(x,t) = √(n_0)φ̂(x,t),  φ̂(x,t)=δΨ̂(x,t)+δΨ̂^†(x,t),
δn̂^(2)(x,t) = δΨ̂^†(x,t)δΨ̂(x,t),
we find that to lowest order in the quantum fluctuations
g^(2)(x,x',t) ≈ [⟨φ̂(x,t)φ̂(x',t)⟩ - δ(x-x')]/n_0.
Since ⟨φ̂(x,t)φ̂(x',t)⟩ ∼ ξ^-1_0, with ξ_0=ℏ/mc_0 the healing length, the normalized density-density correlation function
G^(2)(x,x',t) ≡ n_0ξ_0 g^(2)(x,x',t)
is a dimensionless function that does not depend explicitly on the density n_0 in the BdG approximation (only implicitly through ξ_0). When switching back to our system of units, where δΨ̂ is relative to the condensate amplitude √(n_0), the above dimensional argument dictates that the typical amplitude of the quantum fluctuations A_QF is
δΨ̂ ∼ A_QF ∼ 1/√(n_0ξ_0) ≪ 1,
where the condition n_0ξ_0 ≫ 1 is required so that the relative density fluctuations are small, g^(2) ∼ (n_0ξ_0)^-1 ≪ 1, and our one-dimensional quasicondensate description is valid <cit.>.
From the previous considerations, one can anticipate three regimes for the dynamics after the BHL formation depending on the amplitudes of the background Cherenkov wave, A_BCL, and of the quantum fluctuations, A_QF:
* Quantum BHL: A_BCL ≪ A_QF ≪ 1. The BHL instability is triggered by quantum fluctuations (e.g., no barrier is placed, A_BCL=0), and the dynamics is driven by the zero-point motion of the quasiparticle vacuum.
* Classical BHL: A_QF ≪ A_BCL ≪ 1. The BHL amplification still dominates the dynamics but the seed of the instability is now the classical amplitude of the BCL wave in the condensate.
* BCL: A_QF ≪ A_BCL ∼ 1. The amplitude of the BCL stimulation is highly nonlinear and dominates the dynamics towards the saturation regime.
Since a BHL behaves as an unstable harmonic oscillator <cit.>, we can qualitatively understand these regimes using an analogy with an unstable pendulum, Fig. <ref>. The unstable equilibrium position is equivalent to the GP wave function of the flat-profile BHL. However, due to zero-point motion, this configuration is unstable at the quantum level, and then the pendulum falls (Fig. <ref>a). This is akin to a quantum BHL. We can give a deterministic amplitude to the pendulum with a little hit (Fig. <ref>b). The pendulum then departs some small angle θ from its equilibrium position and consequently falls following a well-defined classical trajectory. This is akin to a classical BHL, where the angle θ plays here the role of the Cherenkov amplitude A_BCL which seeds the BHL instability. If a strong enough force is applied (Fig. <ref>c), it will drive the pendulum motion outside its unstable equilibrium position instead of gravity, as in the case where the BCL wave dominates the dynamics over the BHL mechanism.
§ NUMERICAL RESULTS
In this section, we present the main results of the work, where we compute the time evolution of the condensate and its quantum fluctuations (see Refs. <cit.> for the technical details about the numerical techniques employed in this work). As reference mean-field BHL parameters, we choose for the moment those of Fig. <ref>d: v=0.6, c_2=0.2, and L=20. This cavity contains 4 unstable lasing modes, with the n=2 mode expected to be the dominant one, leading to a growth rate Γ=Γ_2 ≈ 0.02. Regarding the specific parameters involved in the Truncated Wigner method, we take L_t ≈ 1885 as the total length of the numerical grid (periodic boundary conditions are imposed) and N=10^7 as the number of particles, with n_0=N/L_t the condensate density. This yields n_0ξ_0 ≈ 5·10^3 ≫ 1. The number of modes is N_m=3000 ≪ N, corresponding to a cut-off in k space of |k|<5, much larger than the typical wavevectors involved in the dynamics. Ensemble averages are evaluated after 1000 Monte Carlo simulations, although good convergence is already found for the main BHL and BCL features after a few hundred simulations. Finally, appropriate factors are subtracted to retrieve the required normal-ordered expectation values; their explicit form is discussed in Ref. <cit.>.
In the simulations, the Cherenkov amplitude A_BCL is directly controlled by the barrier strength Z. The initial amplitude of the quantum fluctuations A_QF can be adjusted by a dimensionless control parameter λ in the following way: we increase the initial density n_0 as n_0 → λn_0 while changing the coupling constant so that gn_0 is kept constant. By doing so, the mean-field dynamics remains the same but now n_0ξ_0 → λn_0ξ_0. Thinking in more physical terms, this is like
modifying the condensate density by changing the number of atoms while tuning the coupling constant using Feshbach resonances <cit.>. As a result, the amplitude of the quantum fluctuations in our units will scale as
A_QF ∼ 1/√(λ),
so λ can be regarded as a control parameter of the strength of the quantum fluctuations. Thus, the interplay between the quantum and Cherenkov seeding will be explored via the parameters Z,λ, manipulating also the onset times t_BCL,t_BHL to further isolate the contribution of each effect.
Since we are mainly concerned about the conceptual differences between the BHL and BCL mechanisms, we will restrict the simulations to intermediate times when the system reaches the saturation regime. This saturation can be associated to a certain stationary GP solution of the nonlinear spectrum <cit.>, which in turn is also dynamically unstable and eventually will collapse <cit.>. For sufficiently long times, the system will either reach the true ground state or the CES (Continuous Emission of Solitons) state <cit.>. However, the study of such long-time dynamics is beyond the scope of this work.
§.§ Correlation patterns
We begin by studying the time evolution of the density-density correlation function G^(2)(x,x',t) and the corresponding ensemble-averaged density G^(1)(x,t), displayed in Figs. <ref>-<ref> for different values of Z,λ. We will focus on the qualitative correlation patterns exhibited, leaving the quantitative analysis for the following subsections.
In Fig. <ref>, we present the case of a purely quantum BHL as there is no BCL stimulation (Z=0), so the dynamics is fully driven by quantum fluctuations. At t=0, Fig. <ref>a, the homogeneous condensate is at equilibrium and G^(2)(x,x') displays the usual antibunching line along the main diagonal x=x' resulting from the repulsive nature of the interactions. At that moment, the black hole is switched on and the emission of Hawking radiation begins, as revealed by the emergence of the celebrated Hawking moustache, stemming from the correlations between the Hawking b- and the partner p2+ mode <cit.> (marked by a blue line in Fig. <ref>c). Along them, we observe the correlations between the Hawking and the p1+ mode <cit.> (red line), and the correlations between the p1+ and the p2+ modes (green line), which represent the bosonic analogue of the Andreev reflection <cit.>. All these lines are predicted following the usual hydrodynamic approximation <cit.>. Since there is no BCL stimulation, the background condensate remains homogeneous, as shown by the second row of Fig. <ref>.
At t=t_BHL=100, the white hole is switched on and a BHL configuration is reached. At early times, Fig. <ref>d, the presence of an inner horizon is revealed by fringe patterns in the supersonic-upstream (blue circle) and supersonic-downstream (green circle) regions. They result from the correlation between the Hawking (blue) or Andreev (green) modes and the partner p2+ modes, now reflected as p1-,p2- modes at the inner horizon with a wavevector close to k_BCL. In turn, the correlation between the p1-,p2- modes gives rise to a checkerboard pattern in the lasing cavity (magenta circle), which is the white-hole analogue of the Andreev correlations since here the outgoing supersonic modes have large wavevector for low frequencies. Physically, we can understand all these features as white-hole radiation <cit.> stimulated by the scattering of the Hawking radiation emitted from the black hole. We denote this phenomenon as Hawking-stimulated white-hole radiation or, more compactly, HSWH radiation. At this stage, the continuous spectrum of spontaneous Hawking radiation contributes to the white-hole stimulation.
As time goes by, the discrete nature of the unstable lasing spectrum enters in place. In particular, the dominant mode begins to overshadow the remaining fluctuations and drives the dynamics, Figs. <ref>i,m. This can be seen by the emergence of a strong checkerboard pattern in the supersonic-supersonic correlations (magenta square of Fig. <ref>i), and of an incipient ripple in the supersonic density (magenta line of Fig. <ref>m; notice the change of scale).
Eventually, the dominant mode reaches a large amplitude where nonlinear effects are crucial, Figs. <ref>j-l, n-p. In the density-density correlation function, we observe the emergence of a highly enhanced moustache (blue circle) along with series of parallel stripes both upstream and downstream (green circles), resulting from the monochromatic character of the instability. The checkerboard pattern also reaches a large amplitude while the density profile shows a peaked structure whose amplitude is saturated, corresponding to the n=3 nonlinear GP solution <cit.>.
Therefore, the BHL dynamics can be mainly understood in terms of the dynamics of a single mode. In order to further validate this interpretation, we compute the contribution to the density-density correlation function from the dominant mode by using the corresponding BdG solution. The result is depicted in Fig. <ref>b, where we compare it with the correlation function at an intermediate time t=250, Fig. <ref>a, when the dominant mode stands well above the remaining lasing modes but the system is not fully yet in the saturation regime. A good qualitative agreement is found between both correlation patterns; most of the blurring in Fig. <ref>a can be attributed to nonlinear effects (see discussion in the following paragraphs).
In Fig. <ref>, we now switch on a weak delta barrier (Z=0.01) at t=t_BCL=50, which stimulates a small BCL amplitude. The dynamics for times t<t_BCL remains the same. For early times after the onset of BCL and BHL, Figs. <ref>a-h, we see that the only noticeable difference is the emergence of a small ripple in the density profile arising from the background BCL wave. On the opposite side, precisely because of its deterministic nature, the BCL wave does not show up in the correlation function.
The presence of an initial classical amplitude modifies the dynamics for longer times. The first effect is that the nonlinear regime is reached sooner, Figs. <ref>i,m. The second effect is that several features of the correlation function become distorted as compared to the purely quantum case, Figs. <ref>j-l, n-p.
More information can be inferred if we reduce the strength of quantum fluctuations by setting ฮป=1000, Fig. <ref>. The early dynamics, Figs. <ref>a-h, is almost identical. However, the nonlinear regime is dramatically altered, Figs. <ref>i-p. Both the density profile and the correlation function are less smoothed and more peaked, losing most of the quantum BHL features of Fig. <ref>. In addition, the density profile now clearly displays soliton emission (solid red and green lines), which was unobserved in the previous cases. These solitons also show up in the correlation function as sharped features since they carry a large density depletion. We can understand the disappearance of solitons for strong quantum fluctuations as follows. In each individual trajectory of the Truncated Wigner ensemble, solitons are always emitted <cit.>. However, for a quantum BHL, the dynamics is solely driven by quantum fluctuations, mainly by the amplitude of the dominant mode. The quantum nature of this amplitude leads to strong variations between different trajectories, and in particular between the positions of the emitted solitons. Since both G^(1),G^(2) are obtained from ensemble averages, solitons are washed out by the averaging process. In contrast, in the classical BHL of Fig. <ref>, the dominant mode has an initial well-defined amplitude, from which the instability develops. Thus, all the trajectories of the ensemble amount to small quantum fluctuations around the deterministic classical trajectory given by Eq. (<ref>). When averaging over the ensemble, we recover the mean-field trajectory, which neatly displays solitons. Figure <ref> represents a limiting case where both quantum and classical BHL are competing. The mean-field dynamics is driven deterministically by the BHL amplification of the classical BCL seed. Nonetheless, the BHL also amplifies the quantum fluctuations, which become sufficiently strong in the saturation regime to blur the sharp soliton features when computing ensemble averages. These arguments also explain the peaked checkerboard pattern within the lasing cavity in the saturation regime of Fig. <ref>, which now can be understood as fluctuations around the highly nonlinear ripple in the background mean-field density, in contrast to the blurred checkerboard for stronger quantum fluctuations in Figs. <ref>,<ref>.
In Fig. <ref>, instead of diluting quantum fluctuations, we introduce a strong potential barrier which generates a large BCL amplitude. This is already observed soon after the onset of the barrier, Figs. <ref>e,f. Moreover, the Cherenkov wave now does show up in the density-density correlation function as a checkerboard pattern, Figs. <ref>a,b, contradicting the naive intuition that it should not appear there. The answer to this apparent paradox is that here the large BCL amplitude cannot be regarded as a linear BdG mode on top of a uniform condensate, but instead it strongly backreacts on the mean-field background around which quantum fluctuations evolve. Similarly to the saturation regime of Fig. <ref>, the nonlinear density ripple is translated into a checkerboard pattern in the density-density correlation function. The white-hole onset does not alter the dynamics, except for the emission of trains of solitons into the downstream region.
We further confirm that the checkerboard structure arises due to the strong BCL modulation of the mean-field background in lower row of Fig. <ref>, where we compare our results with a simulation that only includes particle-number fluctuations <cit.>. A good agreement is observed between the correlation patterns in the cavity region, revealing its underlying mean-field BCL nature.
Finally, in Fig. <ref>, we analyze the same case of Fig. <ref> but without white hole, t_BHL=∞, so the dynamics is purely driven by BCL stimulation. We observe that the evolution is essentially the same, especially in the cavity and the upstream region. The only significant difference arises in the downstream region, which now is supersonic and does not support trains of solitons. Thus, we conclude that the main correlation patterns of Fig. <ref> can be unambiguously attributed to BCL stimulation.
Interestingly, due to the supersonic character of the downstream region, at late times we can observe monochromatic features arising from quantum BCL-stimulated Hawking radiation <cit.> (blue circles in Fig. <ref>l), where quantum fluctuations around the nonlinear mean-field BCL background stimulate Hawking radiation. Actually, since the BCL modulation acts as a resonant cavity, we can also understand this phenomenon as spontaneous resonant Hawking radiation <cit.>.
§.§ Quantum versus classical BHL
Once we have identified the main correlation patterns, we proceed to perform a quantitative analysis. Specifically, we evaluate the Fourier transform of the correlation functions inside the lasing cavity (magenta lines in Figs. <ref>i,m). As figures of merit, we choose
G^(2)_peak(t) ≡ max_k,k' |G^(2)(k,k',t)|,
G^(1)_peak(t) ≡ max_k |G^(1)(k,t)-G^(1)(0,t)|,
where in the last line we are subtracting the background homogeneous contribution from the condensate. These observables are expected to capture the main dynamics of the BHL and BCL regimes because both involve modes with well-defined wavevector within the lasing cavity. In particular, G^(1)_peak represents the amplitude of the density ripple and G^(2)_peak that of the checkerboard pattern, which are characteristic features of both regimes from their very beginning.
We display the time evolution of G^(2)_peak (upper row) and G^(1)_peak (lower row) in Fig. <ref>, where each column corresponds to Z=0,0.001,0.01,0.1,0.75, respectively. Inside each panel, blue, red, orange, purple curves represent λ=1,10,100,1000. Vertical solid (dashed) lines signal the onset of the BCL (BHL) mechanisms.
We first focus on the case of a purely quantum BHL, Figs. <ref>a,f, where we note that the blue curve corresponds to the simulation of Fig. <ref>. We observe that, before the BHL onset, the density-density correlations remain essentially constant (note the logarithmic scale) and are independent of λ. This last property arises precisely because of the use of the normalized correlation function of Eq. (<ref>). On the other hand, the density perturbations around the homogeneous background do depend on λ since
⟨n̂(x,t)⟩ - n_0 = ⟨δn̂^(2)(x,t)⟩ = ⟨δΨ̂^†(x,t)δΨ̂(x,t)⟩ ∼ λ^-1,
so G^(1)_peak ∼ λ^-1.
After the BHL onset, both the average density and the density-density correlations exponentially increase. Specifically, since the dominant mode drives the dynamics, in this regime we expect the quantum fluctuations to be essentially described by
δΨ̂(x,t) ≈ (e^Γt/√(λn_0ξ_0))[u_I(x)e^-iγt α̂_I+v^*_I(x)e^iγt α̂^†_I],
with the time t measured here starting from the BHL onset, u_I,v_I the BdG components of the dominant mode, γ the real part of its frequency, and α̂_I its amplitude. We recall that, since dynamically unstable modes have zero norm according to the BdG inner product, their amplitudes do not behave as annihilation operators but they instead commute with their conjugate, [α̂_I,α̂^†_I]=0 <cit.>. In a quantum BHL, the phase of the amplitude of the dominant mode is expected to be random, and hence we can neglect ⟨α̂_I α̂_I⟩ ≈ 0. This assumption yields that G^(1)_peak,G^(2)_peak behave as
G^(1)_peak(t) ∼ ⟨δn̂^(2)⟩ ∼ e^2Γt/λ,
G^(2)_peak(t) ∼ λ⟨δn̂^(1)δn̂^(1)⟩ ∼ e^2Γt.
Hence, for a quantum BHL,
ln G^(n)_peak(t) ∼ 2Γt,  n=1,2,
since G^(1)_peak(t) also scales quadratically in the field fluctuations because the ℤ_2 symmetry of a purely quantum BHL sets ⟨δΨ̂(x,t)⟩=0 <cit.>. Using the pendulum picture, we can understand the ℤ_2 symmetry as that of the quantum fluctuations around the initial unstable position.
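Given the time series of these observables from a simulation, the predicted slope 2Γ can be checked with a simple log-linear fit. The sketch below is illustrative, using synthetic data with Γ = 0.02 as in the reference configuration.

```python
import numpy as np

def growth_rate_from_series(t, G_peak):
    """Fit ln G_peak(t) ~ 2*Gamma*t + const and return the extracted Gamma."""
    slope, _ = np.polyfit(t, np.log(G_peak), 1)
    return slope / 2.0

# Synthetic exponential growth with a small amount of multiplicative noise
rng = np.random.default_rng(3)
Gamma_true = 0.02
t = np.linspace(150.0, 350.0, 100)              # window after the BHL onset
G2 = 1e-4 * np.exp(2 * Gamma_true * t) * np.exp(0.05 * rng.normal(size=t.size))

print("extracted Gamma:", growth_rate_from_series(t, G2))   # ~0.02
```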
The prediction of Eq. (<ref>) is depicted in dashed green, dashed-dotted black lines in Figs. <ref>a,f, respectively, where ฮ is obtained from Fig. <ref>c. A good agreement with the numerical results is observed, extending the qualitative agreement for the spatial correlation pattern present in the upper row of Fig. <ref>. Nevertheless, this approximation necessarily fails at early times after the BHL onset, when the whole spectrum of Hawking radiation contributes to the checkerboard amplitude through WHSH radiation, and at late times, when nonlinear effects become crucial. Both limits give rise to deviations from the dashed green slope. In particular, the exponential growth ceases when the system reaches the saturation regime. There, both G^(1)_peak,G^(2)_peak saturate to the values G^(1)_sat,G^(2)_sat (latest times in Fig. <ref>). Using these values, we can define the saturation time t_sat as the time when G^(2)_peak reaches G^(2)_sat. Technically, we extract G^(1)_sat,G^(2)_sat from the simulations by averaging in time once in the saturation regime, and take t_sat as the time needed to reach the value G^(2)_peak(t=0)+0.9[G^(2)_sat-G^(2)_peak(t=0)]. We have checked that the main conclusions are quite insensitive to the specific details of these definitions.
To capture the underlying physics governing these processes, we use a crude model in which Eq. (<ref>) governs the whole lasing period until the saturation regime, estimating in this way t_sat.
For a quantum BHL, Fig. <ref> shows that G^(1)_sat does not depend on λ but G^(2)_sat does, which is precisely the opposite behavior to that of G^(1)_peak,G^(2)_peak before saturation. This is because in the saturation regime
δΨ̂ ∼ 1.
Our crude model then predicts
G^(1)_sat ∼ 1 ∼ e^2Γt_sat/λ,
G^(2)_sat ∼ λ ∼ e^2Γt_sat,
so
ln G^(2)_sat ∼ ln λ,  t_sat ∼ ln λ/(2Γ).
We represent both G^(2)_sat,t_sat as a function of ln λ in Figs. <ref>a,b, finding good agreement with the expected scaling.
In the regime of classical BHL, the ℤ_2 symmetry is broken by the BCL stimulation since the Cherenkov wave is a coherent undulation above the homogeneous background, accounted for by linear perturbations of the GP wave function, ⟨δΨ̂(x,t)⟩ = δΨ(x,t) ≠ 0. This is translated into ⟨α̂_I⟩ ∼ A_BCL ≠ 0 and thus
G^(1)_peak(t) ∼ ⟨δn̂^(1)⟩ ∼ A_BCL e^Γt cos(γt+δ),
with δ some phase. In the pendulum analogy, the amplitude A_BCL is the equivalent of the initial angle θ which breaks the ℤ_2 symmetry of the unstable equilibrium position. As a result, a classical BHL can be understood as a well-defined classical trajectory plus small quantum fluctuations around it. Consequently, the average density does not depend here on λ.
Precisely because of its classical deterministic character, at the linear level the BCL amplitude does not show up in the density-density correlation function and G^(2)_peak(t) still follows Eq. (<ref>). Thus, the ℤ_2 symmetry breaking now implies
ln G^(1)_peak(t) ∼ Γt,  ln G^(2)_peak(t) ∼ 2Γt.
This prediction is depicted in dashed green, dashed-dotted black lines in Figs. <ref>b-c, g-h, respectively, where good agreement with the numerical results is again observed. The small oscillations observed in G^(1)_peak(t) result from the sinusoidal term in Eq. (<ref>).
The saturation regime is reached when the mean-field amplitude, given here by G^(1)_peak, grows up to the nonlinear stationary GP solution. In our crude model, the saturation regime is then determined by the condition
G^(1)_sat ∼ 1 ∼ A_BCL e^Γt_sat.
Therefore, the saturation time is predicted to behave as
t_sat ∼ -ln A_BCL/Γ ∼ -ln Z/Γ,
since we are in the regime of small A_BCL where it is linear in Z, A_BCL ∼ Z. Accordingly, the saturation amplitude for the quantum fluctuations will be G^(2)_sat ∼ e^2Γt_sat, so
ln G^(2)_sat ∼ 2Γt_sat ∼ -2ln Z.
We represent both magnitudes as a function of the delta strength Z in Figs. <ref>c,d, finding good agreement with our predicted scalings in the regime of validity of small Z. The decrease of both magnitudes with Z is quite intuitive: for increasing initial BCL amplitude, the system starts closer to the saturation regime and less time is needed to reach it. Since the amplification of quantum fluctuations mainly occurs during the lasing time, a shorter saturation time is translated into a smaller amplification e^2Γt_sat.
§.§ Classical BHL versus BCL
We now address the distinction between classical BHL and BCL, Figs. <ref>d-e, i-j. Due to the strong BCL stimulation, the results here are essentially independent of ฮป. We first notice strong qualitative differences between the fourth and fifth column of Fig. <ref>. In Fig. <ref>d, the density-density correlation function is quite insensitive to the rather large BCL amplitude, Fig. <ref>i. However, once the white hole is switched on, a large amplification of quantum fluctuations is observed again, although with a different growth rate as compared to the expected BHL one, dashed green lines in Figs. <ref>a-c. Two main factors contribute to this behavior: (i) The starting point for the BHL amplification in Fig. <ref>d is not anymore some linear BCL wave on top of a homogeneous stationary condensate, where the BdG spectrum of Fig. <ref>c is still expected to be valid; (ii) even if the linear BdG approximation was valid, the saturation time is so short that there is no room for the dominant mode to stand above. On the other hand, Fig. <ref>e shows some amplification of the quantum fluctuations even when the BCL mechanism is operating alone, and the BHL onset barely affects the dynamics.
In order to further understand these features, we play with the BCL and BHL onsets in Figs. <ref>a-d where, for the corresponding values of Z=0.1,0.3,0.5,0.75, we display the time evolution of G^(2)_peak. Solid blue, dashed red, and dashed-dotted green lines in each panel correspond to BHL onset times t_BHL=50,100,150, respectively, with fixed BCL onset time t_BCL=50. The dotted magenta line represents the case where there is no white hole, t_BHL=∞. The control parameter λ is fixed to λ=1000.
In Fig. <ref>a, the amplification of G^(2)_peak can be unambiguously attributed to the presence of a white hole. However, this effect is attenuated as we further increase Z. Quantitatively, for increasing Z, the BCL contribution to the checkerboard pattern increases while the saturation amplitude G^(2)_sat decreases. These trends are confirmed by Fig. <ref>c, where we depict the saturation amplitudes G^(2)_sat for t_BHL=100 (t_BHL=∞) as a function of Z using black dots (blue squares).
Both features are in fact in close relationship. We have seen in the previous section that, in the classical regime, saturation is determined by the moment in which the mean-field density reaches the saturation amplitude G^(1)_sat. Moreover, the second row of Fig. <ref> shows that the dependence of G^(1)_sat on Z is very mild, even in the highly nonlinear regime of Z ∼ 1. Therefore, the saturation time t_sat is essentially fixed by the starting point of the mean-field dynamics, in turn given by the initial BCL amplitude. Hence, by extending the arguments leading to Eqs. (<ref>)-(<ref>) to the nonlinear regime, we explain the decrease of G^(2)_sat for increasing Z.
At the same time (see Figs. <ref>-<ref> and ensuing discussion), if the BCL undulation is highly nonlinear, we can no longer understand it as a perturbation but rather as a new mean-field background over which fluctuations evolve. This results in the emergence of a checkerboard pattern in the correlation function, whose origin is completely different from that of the early times of a BHL, Figs. <ref>-<ref>. Indeed, the checkerboard amplitude depends very weakly on Z for pure BCL simulation (blue squares in Fig. <ref>c), and it is exponentially smaller than that of a classical BHL (black dots). The last points represented, where both saturation amplitudes converge, correspond to the red and magenta curves in Fig. <ref>d, which are precisely the simulations of Figs. <ref>,<ref>.
§.§ Mean-field parameters
We analyze in this section the role of the mean-field parameters determining the BHL configuration. For the sake of clarity, we focus on variations of the cavity length L and the flow speed v, which can be both controlled in the experiment: the cavity length is related to the depth of the waterfall potential and the flow speed to its velocity <cit.>.
The spectrum of dynamical instabilities as a function of L is shown in Fig. <ref>c, where blue dots mark the numerical values considered in this section. Specifically, in Fig. <ref> we jointly depict the time evolution of G_peak^(1) (dashed red) and G_peak^(2) (solid blue), both normalized to their corresponding saturation values, where each column corresponds to L=5,11.4,15,20,30 and each row to Z=0,0.001,0.75, respectively. In this way, each row matches one of the three regimes discussed here: quantum BHL, classical BHL, and BCL. Dashed green and dashed-dotted black lines in the first two rows represent theoretical fits from the predictions of Sec. <ref>, in good agreement with the numerical data.
The first column of Fig. <ref> presents a short cavity where only one nondegenerate unstable mode is present. Short cavities translate into short roundtrip times, as τ^-1_RT ∼ 1/L, and hence large growth rates can be expected. This is what is seen in Fig. <ref>a, where the dominant mode governs the dynamics almost from the very beginning of the lasing period (note also the shorter span in the time axis with respect to the other columns). In the latest times of the simulation, the system eventually reaches the ground state as no other nonlinear solutions are present. Since the ground state is dynamically stable, the enhanced quantum fluctuations flow away from the cavity, reducing the magnitude of G_peak^(2). The nondegenerate character of the unstable mode is clearly shown in Fig. <ref>f, where neat oscillations in the density growth are observed, as predicted by Eq. (<ref>). In contrast, no density oscillation is present in Fig. <ref>a due to the ℤ_2 symmetry of a purely quantum BHL. Finally, the BCL regime presents large periodic oscillations in its saturation regime because the system reaches a CES state, induced by the strong potential barrier.
It must be noted that the roundtrip time only provides an estimate of the actual growth rate. From Fig. <ref>c, we actually expect a nonmonotonic behavior of Γ. This is indeed observed in the second column of Fig. <ref>, where, although the cavity is more than twice as long, the growth rate is even larger than in the first column. In contrast, the oscillation period of the density for the classical BHL is smaller, as also predicted by Fig. <ref>c.
For sufficiently long cavities, the qualitative trends converge, as shown by the last three columns of Fig. <ref>. In the linear regime, this is because there is an increasing number of lasing modes and we are closer to the ideal long cavity limit where the usual WKB prescriptions become exact <cit.>. In particular, the nonmonotonic behavior of the growth rate is attenuated, and the real part of the frequency of the dominant mode tends to zero, which translates into an increasing period of the oscillations in the density growth, Figs. <ref>h-j. In the saturation regime, the trends converge because, once there, the system typically oscillates around the nonlinear GP solution labeled with the largest n available, which is highly metastable with a long lifetime <cit.>. The more regular behavior of the nonlinear regime suggests that the predicted scalings for the saturation regime, namely Fig. <ref>, are more accurate in the long cavity limit.
With respect to the flow speed, the spectrum of dynamical instabilities as a function of v is shown in Fig. <ref>, where blue dots mark again the numerical values considered. We see that now the roundtrip time is reduced for increasing v, and in the limit of large supersonic Mach number it scales as τ^-1_RT ≈ 2v/L, resulting in an increasing estimated growth rate. However, once more, nonmonotonic oscillations of the actual growth rate are observed, with increasing amplitude. In particular, the critical velocities at which a new unstable mode emerges are given by the adapted version of Eq. (<ref>):
L=arctan[√((1-v_n^2)/(v_n^2-c_2^2))]/√(v_n^2-c_2^2)+nπ/√(v_n^2-c_2^2),  n=0,1,…
For half-integer values n+1/2, the above equation yields the velocities v_n+1/2 at which the n-th unstable mode becomes nondegenerate.
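Since this relation is implicit in v_n, it is convenient to solve it numerically; a short illustrative root-finding sketch (with the cavity parameters used above, and SciPy's brentq as an arbitrary choice of solver) is:

```python
import numpy as np
from scipy.optimize import brentq

def length_of_v(v, c2, n):
    """Right-hand side of the critical-velocity relation for mode n."""
    w = np.sqrt(v**2 - c2**2)
    return np.arctan(np.sqrt((1.0 - v**2) / (v**2 - c2**2))) / w + n * np.pi / w

L, c2 = 20.0, 0.2
critical_v = []
for n in range(6):
    f = lambda v: length_of_v(v, c2, n) - L
    # length_of_v decreases monotonically with v on (c2, 1); a root exists
    # only if the target length L is reachable within that interval.
    if f(c2 + 1e-6) > 0.0 > f(1.0 - 1e-9):
        critical_v.append(brentq(f, c2 + 1e-6, 1.0 - 1e-9))
print("critical flow speeds v_n for L=20:", np.round(critical_v, 3))
```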
In Fig. <ref>, we perform the same analysis of Fig. <ref> but now each column corresponds to v=0.4,0.5,0.6,0.7,0.8, respectively. Once more, a good agreement with the theoretical predictions for the growth rate is found. The oscillation period of the density in the classical BHL regime also follows the expected behavior. Perhaps the most interesting result is the high degree of nonmonotonicity revealed by the fourth column, v=0.7, where the growth rate is halved during the lasing regime as compared to the adjacent columns v=0.6,0.8, as correctly predicted by Fig. <ref>.
§ DISCUSSION
We proceed here to critically discuss the results of the previous section from a global perspective. We first analyzed in Sec. <ref> the characteristic spatial patterns of each regime (quantum BHL, classical BHL, and BCL) in the ensemble average density G^(1) and the normalized density-density correlation function G^(2). For a quantum BHL, at early times we observe HSWH radiation that results from the scattering of the partner modes of the Hawking effect at the inner horizon. Due to the existence of low frequency and large wavevector modes in the lasing cavity, a checkerboard pattern emerges in the density-density correlations. At late times a monochromatic pattern is observed in both G^(1),ย G^(2), associated to the exponential growth and subsequent saturation of the dominant unstable mode. This is translated into an enhanced checkerboard pattern in the lasing cavity and parallel stripes in the subsonic-supersonic correlations, as well as a large density ripple in the ensemble average density.
The classical BHL regime differs from the quantum one (i) at early times, by the presence of a density ripple due to the background BCL stimulation; (ii) at late times, by the presence of sharp features, including soliton emission. The BCL regime is characterized by a strong ripple whose amplitude is quite insensitive to the BHL onset, displaying a checkerboard pattern from the very start of the BCL stimulation.
A note of caution is in order here: the use of any of the above signatures as a conclusive smoking gun in a real experiment can be very problematic. For instance, the neat presence of sharp features (e.g., soliton emission) is indeed evidence of classical behavior, either BHL or BCL. However, their absence does not exclude these phenomena because run-to-run experimental variations can also wash out these features in the ensemble average. The presence of HSWH radiation is not an unambiguous signature either, since it can perfectly well arise from an inner horizon that will eventually stimulate enough BCL radiation to overshadow the BHL amplification, as in actual experiments <cit.>.
Even more confusing can be the presence of a monochromatic pattern in the correlation function, which naively could be attributed to the underlying operation of a quantum BHL, since that can also be generated by experimental variations of BCL-stimulated Hawking radiation, as observed in the 2021 experiment <cit.>. Actually, even the presence of a checkerboard pattern, common to all phenomena, is of a very different nature depending on the dominant mechanism. In a BHL, it arises from the amplification of quantum fluctuations in the cavity. Once in the saturation regime, the blurring of the checkerboard is characteristic of quantum BHL. In contrast, in the BCL regime, the checkerboard is originated by the large Cherenkov amplitude, which becomes nonlinear and acts as a background modulation over which quantum fluctuations evolve. Nevertheless, experimental variations can also blur a neat checkerboard arising from classical BHL or BCL stimulation.
Sections <ref>, <ref> introduced a more robust characterization of each phenomenon, based on a quantitative analysis of the time evolution of the amplitude of the density ripple in G^(1) and of the checkerboard pattern in G^(2). In the context of the quantum-classical transition of a BHL, we analyzed and confirmed the role of the โค_2 symmetry, first predicted in Ref. <cit.>. However, this symmetry is very fragile and a minor BCL stimulation can break it, as seen in Fig. <ref>b (see also central row of Figs. <ref>, <ref>). In turn, this symmetry breaking gives an oscillatory character to the growth of the density ripple, in contrast with the BCL undulation, by definition a zero-frequency mode. Again, this difference is neither a conclusive distinguishability criterion because in experiments the BCL wave can acquire a finite frequency, in the black-hole rest frame, due to the Doppler shift induced by the recoil of the inner horizon <cit.>.
Nevertheless, even when the ℤ_2 symmetry is broken, signatures of quantum BHL can still be present, revealed as a dependence of the saturation amplitude G^(2)_sat on the initial strength of the quantum fluctuations; see blue and red curves in Fig. <ref>b and blue curve in Fig. <ref>c. In fact, the blue and purple curves in Fig. <ref>c correspond to the simulations of Figs. <ref> and <ref>, which displayed strong qualitative differences in the correlation patterns.
Based on the above discussion, we identify the amplification of the initial quantum fluctuations as the most robust trait of the BHL effect. Specifically, we characterize the BHL-BCL crossover using very general scaling arguments that allow us to quantify the efficiency of the system as a quantum amplifier. Restoring units in the following for the sake of generality, we measure the amplitude of the quantum fluctuations through the relative density-density correlation function g^(2) of Eq. (<ref>). In the initial state,
g^(2) ∼ 1/n_0ξ_0 ≪ 1.
The saturation of a quantum BHL is characterized by the condition
g_sat^(2) ∼ 1.
This means that a quantum BHL behaves as a nonlinear quantum amplifier that amplifies the initial quantum fluctuations up to saturation, with an output g_sat^(2) which is not proportional to the input.
In the case of a classical BHL, the saturation regime is reached in a time t_sat determined by the mean-field dynamics, in turn governed by the exponential amplification of the initial BCL seed. Thus, remarkably, for a classical BHL, the saturation properties are independent of the initial amplitude of the quantum fluctuations. This is translated into a saturation value
g_sat^(2) ∼ e^2Γ t_sat/n_0ξ_0.
Hence, a classical BHL is a linear quantum amplifier, with a gain factor
𝒢 ≡ n_0ξ_0 g_sat^(2) = G_sat^(2) ∼ e^2Γ t_sat,
which is exponentially large in the saturation time t_sat. Thus, we can also regard t_sat as the amplification time during which the exponential amplification of the quantum fluctuations takes place. As a result, this time typically decreases with the BCL amplitude as the system then starts closer to saturation. Any significant deviation from linear amplification represents a signature of quantum BHL.
In the BCL regime, linear quantum amplification occurs since the saturation properties are also independent of the initial amplitude of the quantum fluctuations. However, the gain 𝒢 = G_sat^(2)
is exponentially smaller as compared to that of a classical BHL, Fig. <ref>c. This results from the fact that there is no microscopic mechanism of exponential amplification and the enhancement of quantum fluctuations stems just from the BCL modulation of the mean-field background.
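To make this classification concrete, the following minimal Python sketch (our own illustration, not part of the analysis above) maps a measured saturation value g_sat^(2) and the interaction parameter n_0ξ_0 onto the three amplifier regimes; the numerical thresholds are assumed for illustration only.

def classify_amplifier(n0_xi0, g2_sat, gain_threshold=1e2):
    """Illustrative classification of the saturated regime from the scaling
    arguments above; `gain_threshold` separating an exponentially large gain
    from a modest one is an assumed, setup-dependent number."""
    # initial quantum fluctuation level is g^(2) ~ 1/(n_0 xi_0)
    gain = n0_xi0 * g2_sat      # G = n_0 xi_0 g^(2)_sat = G^(2)_sat
    if g2_sat >= 0.5:           # output pinned near saturation, independent of input
        return "quantum BHL (nonlinear quantum amplifier)"
    if gain > gain_threshold:   # linear amplifier with exponentially large gain
        return "classical BHL (linear quantum amplifier)"
    return "BCL (linear quantum amplifier with modest gain)"

print(classify_amplifier(n0_xi0=1e3, g2_sat=0.9))    # quantum BHL
print(classify_amplifier(n0_xi0=1e4, g2_sat=0.05))   # classical BHL (gain 500)
print(classify_amplifier(n0_xi0=1e3, g2_sat=0.005))  # BCL (gain 5)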
Further understanding on how each amplifier works can be extracted from Figs. <ref>, <ref>. Before saturation, we see that the dynamics of G_peak^(1),G_peak^(2) are strongly coupled for quantum BHL and BCL, while they become uncorrelated for classical BHL. The explanation behind this observation relies on the different mechanisms governing the dynamics of each magnitude. In a quantum BHL, the evolution of both G_peak^(1),G_peak^(2) is mainly determined by the same mechanism: the amplification of the quantum amplitude of the unstable mode. In a classical BHL, G_peak^(1) is determined by the growth of the initial classical BCL amplitude while G_peak^(2) still represents the amplification of the quantum fluctuations of the unstable mode. The coupling of both magnitudes is again retrieved in the BCL regime, where the nonlinear amplitude of the Cherenkov wave that saturates G_peak^(1) is also responsible for the quantum amplification of G_peak^(2).
This analysis yields an important conclusion: the BCL mechanism requires a large nonlinear amplitude to show up in the correlation function, in contrast to a classical BHL, where mean-field and quantum dynamics (characterized by G_peak^(1),G_peak^(2), respectively) are decoupled. This implies that, for a classical BHL, one can have a large checkerboard amplification which is not translated into a large ripple in the density profile, as shown in Figs. <ref>, <ref>. Actually, this statement also applies to a purely quantum BHL due to the ℤ_2-symmetry suppression of the ripple growth (see Fig. <ref>i).
Hence, a joint analysis of the behavior of G_peak^(1),G_peak^(2) and a quantitative characterization of the gain in the quantum amplification can help to distinguish BHL from BCL in experimental setups in a robust way, regardless of the specific experimental details of the configuration. Moreover, the analysis can be complemented by varying the background mean-field parameters: any nonmonotonic behavior of the growth rate of both G_peak^(1),G_peak^(2) further hints at the presence of the BHL effect because the BCL dependence should be smooth due to its zero-frequency nature. This general qualitative analysis can be always complemented with a more specific quantitative study based on the estimation of the roundtrip time and the expected BCL signal <cit.>.
ยง CONCLUSIONS AND OUTLOOK
In this work, we have studied the BHL-BCL crossover using the idealized flat-profile model, which allows us to unambiguously distinguish the contribution of each mechanism to the dynamics. By drawing an analogy with an unstable pendulum, we have identified three main regimes depending on the interplay between quantum fluctuations and classical stimulation: quantum BHL, where the dynamics is purely driven by vacuum fluctuations; classical BHL, where the lasing instability has a well-defined coherent amplitude induced by the background Cherenkov wave; and BCL, where the Cherenkov wave governs the evolution until saturation.
General scaling arguments allow us to characterize each phenomenon according to its behavior as an amplifier of the initial quantum fluctuations. In this way, a quantum BHL is revealed as a nonlinear quantum amplifier, which takes the initial quantum fluctuations up to saturation. A classical BHL behaves instead as a linear quantum amplifier, where the gain is exponentially large in the lasing time. The BCL regime also acts as a linear quantum amplifier, but its nature is very different since the amplification arises from the strong background modulation and not from a microscopic mechanism. This is translated into an exponentially smaller gain as compared to a classical BHL.
In order to clearly distinguish between the BHL and the BCL regimes in experiments, the measurement of the quantum amplification gain can be complemented with a detailed study of the density ripple and the checkerboard pattern, including their joint behavior and the monotonicity of their growth rate with respect to the background parameters.
From an analogue gravity perspective, our work neatly isolates the characteristic traits of both BHL and BCL, which can help in the unambiguous identification of the BHL effect in future experiments. Furthermore, our analysis suggests that the most readily achievable target is a classical BHL, where the background Cherenkov wave is sufficiently attenuated to become the seed of the BHL amplification instead of overshadowing it. At the same time, one can aim at optimizing the background mean-field configuration in order to maximize the lasing growth rate. Once a classical BHL is achieved, the next step is the observation of the quantum BHL effect by further reducing the BCL background while increasing the amplitude of the quantum fluctuations by approaching the interacting regime n_0ξ_0 ≳ 1. Nevertheless, a dedicated analysis of realistic experimental setups, including effects such as temperature, time dependence, or experimental variations, is left for future work.
The results of this study can also be useful for other analogue setups in which low-frequency undulations similar to the Cherenkov wave compete with the BHL effect. Moreover, our model provides an ideal testing ground for the study of quantum <cit.> and classical <cit.> backreaction within the quantum and classical BHL regimes here described, respectively. In addition, the identification of phenomena such as HSWH radiation or quantum BCL-stimulated Hawking radiation is also relevant for the analogue community.
From a more global perspective, the identification of a BHL as a quantum amplifier can also be of interest for atomtronics and quantum transport. In general, the achievement and control of a stationary regime of spontaneous emission of Hawking radiation <cit.>, the emergence of a spontaneous many-body Floquet state with the subsequent formation of a continuous time crystal in the long-time regime of a black-hole laser <cit.>, and the quantum amplification here identified, open the prospect of using gravitational analogues to also investigate condensed matter phenomena and potential applications in quantum technologies.
We thank D. Bermúdez, S. Butera, I. Carusotto, Uwe R. Fischer, M. Jacquet, C. Lobo, S. Weinfurtner, and I. Zapata for valuable discussions. This project has received funding from European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847635, from Spain's Agencia Estatal de Investigación through Grant No. FIS2017-84368-P, and from Universidad Complutense de Madrid through Grant No. FEI-EU-19-12.
|
http://arxiv.org/abs/2306.03963v1
|
20230606184548
|
Beyond Superoscillation: General Theory of Approximation with Bandlimited Functions
|
[
"Tathagata Karmakar",
"Andrew N. Jordan"
] |
math-ph
|
[
"math-ph",
"math.MP"
] | |
http://arxiv.org/abs/2306.08977v1
|
20230615091654
|
Path Generation for Wheeled Robots Autonomous Navigation on Vegetated Terrain
|
[
"Zhuozhu Jian",
"Zejia Liu",
"Haoyu Shao",
"Xueqian Wang",
"Xinlei Chen",
"Bin Liang"
] |
cs.RO
|
[
"cs.RO"
] |
Path Generation for Wheeled Robots Autonomous Navigation on Vegetated Terrain
Zhuozhu Jian, Zejia Liu, Haoyu Shao, Xueqian Wang, Xinlei Chen, Bin Liang
July 31, 2023
==============================================================================
Wheeled robot navigation has been widely used in urban environments, but little research has been conducted on its navigation in wild vegetation. External sensors (LiDAR, camera etc.) are often used to construct a point cloud map of the surrounding environment; however, the supporting rigid ground used for travelling cannot be detected due to the occlusion of vegetation. This often causes unsafe or non-smooth paths during the planning process. To address this drawback, we propose the PE-RRT* algorithm, which effectively combines a novel support plane estimation method with a sampling algorithm to generate real-time feasible and safe paths in vegetated environments. In order to accurately estimate the support plane, we combine external perception and proprioception, and use Multivariate Gaussian Process Regression (MV-GPR) to estimate the terrain at the sampling nodes. We build a physical experimental platform and conduct experiments in different outdoor environments. Experimental results show that our method has high safety, robustness and generalization.
The source code is released for the reference of the community[Code: <https://github.com/jianzhuozhuTHU/PE-RRTstar>.].
§ INTRODUCTION
Autonomous navigation technology for unmanned ground vehicle (UGV) has developed rapidly in recent years, and its application scenarios are gradually expanding from indoors<cit.><cit.> to outdoors<cit.><cit.>. But autonomous navigation on uneven vegetated terrain remains a very challenging task. Due to the presence of vegetation, the robot's perception of the environment becomes inaccurate and more time-consuming, and the generated path also deviates from the real situation of the surrounding environment.
Existing autonomous navigation methods usually treat the vegetation as an obstacle, but this approach is too conservative, because wheeled robots have the ability to pass through shorter, penetrable vegetation. Also, traditional methods usually need to build a prior traversability map <cit.><cit.> for navigation, which takes a lot of time, especially when accurate support ground estimation is needed in a vegetated environment. LiDAR is often used to generate a point cloud map of the surrounding environment<cit.><cit.>. Thus, there are two challenges: 1) the point cloud of vegetation does not correspond to the rigid geometry of the supporting ground; 2) generating an explicit traversability map that estimates the supporting ground of the surrounding environment is time-consuming.
This work presents a real-time and safe path generation method to support autonomous navigation on vegetated terrain for wheeled robots. To solve challenge 1), we design a hybrid vegetated terrain estimation method, which fuses proprioception and external perception to generate a support plane. The support plane is used to describe the locally rigid geometric terrain. To solve 2), the support plane estimation is integrated into the sampling algorithm for path planning, which reduces processing time since the traversability map construction is skipped. In addition, an inflation radius is added to the sampling algorithm to enhance safety.
This work offers the following contributions:
* A novel approach to accurately estimate the support plane is proposed, in which Multivariate Gaussian Process Regression (MV-GPR) based proprioception and external perception are fused with uncertainty-based weighting.
* PE-RRT* (Plane Estimation RRT*), a sampling-based global path generation method, is proposed, which achieves feasible, collision-free, and asymptotically optimal path generation in vegetated environments without explicit map construction.
* We build the experimental platform and conduct real-world experiments. The effectiveness of our method is confirmed by comparison with existing methods.
§ RELATED WORK
In vegetated terrain, the support surface is often invisible to external sensors.
Therefore, accurate perception of the supporting terrain is a prerequisite for path planning. Some devices are designed to sense the ground directly. In <cit.>, the authors use an array of miniature capacitive tactile sensors to measure ground reaction forces (GRF) to distinguish among hard, slippery, grassy and granular terrain types. <cit.> proposes a self-supervised mechanism to train a trafficability prediction model based on resistance coefficients determined from current and force measurements, which is used to estimate the trafficability of regions of dirt, light vegetation, and heavy brush. However, the coupling of the above methods with motion planners has not been accomplished. Moreover, as the length of the trajectory increases and the terrain becomes more varied, the algorithm quality begins to degrade.
Some methods attempt to traverse vegetation based on external sensors. <cit.> assumes that ground heights smoothly vary and terrain classes tend to cluster, and uses Markov random fields to infer the supporting ground surface for navigation based on LiDAR points. <cit.> defines a regression problem which estimates predicted error between the realized odometry readings and the predicted trajectory. And they utilize machine learning techniques to predict model error associated with an RGB image. However, this method lacks robustness to environmental changes and cannot ensure safety.
Combining proprioception and external perception to improve robustness is considered to be a common and effective approach. <cit.> provides robustness of hexapod locomotion in high grass by switching between two locomotion modes based on proprioceptive and exteroceptive variance estimates. In <cit.>, the authors propose an attention-based recurrent encoder integrating proprioceptive and exteroceptive input. This approach is applied to quadrupeds and validated experimentally. And in <cit.>, the authors apply Gaussian process regression (GPR) to estimate support surface including the height of the penetrable layer.
However, the above works have to build a prior map first and then analyze the traversability of each foothold, which incurs a large computational expense and costs a lot of time. Moreover, for the more commonly used wheeled robots, autonomous navigation is more concerned with the overall properties of the ground beneath the robot.
In our work, we propose the PE-RRT* algorithm, which avoids explicit maps by sampling, significantly reducing computational expense. We describe the ground as a set of circular planes, and fuse the height and slope of the planes generated by MV-GPR<cit.> by taking the variance as the weight.
§ PROBLEM FORMULATION
Our objective is to generate a global path on the rigid geometric surface based on a point cloud representing the vegetated environment. In our work, we simplify the local geometric support terrain of a single point into a support plane (S-Plane) _S:={ x,y,z,r,p }, which contains the roll angle r ∈ ℝ, pitch angle p ∈ ℝ and the 3D coordinates [x,y,z]^T ∈ ℝ^3 of the plane center. We address the problem defined as follows: In unknown vegetated terrain, given the initial and target state projections x_start, x_goal ∈ ℝ^2, search for a feasible and optimal global path consisting of W nodes ={( _S,i) _i=1:W}. Along the path, the wheeled robot can move from x_start to x_goal. The path should satisfy: 1) the robot can pass safely along the path; 2) avoiding collision with obstacles along the path; 3) reducing the time spent on the move; 4) minimizing the risk of the robot being unable to maintain a stable posture.
The workflow of our entire system is shown in Fig.<ref>. Our navigation algorithm has a two-layer structure comprising a global and a local planner. The global planner generates a safe and feasible global path in real time, which is the main content of our research. The global planner contains two parts: PE-RRT*, which is described in detail in Sec.<ref>, and S-Plane Estimation, which is described in detail in Sec.<ref>.
§ IMPLEMENTATION
Commonly used path planning frameworks require building an a priori or real-time explicit map. Traversability analysis of the map is performed before path planning, which costs too much time. To solve this problem, we propose PE-RRT*, a sampling-based path planning algorithm. In PE-RRT*, we sample and analyze directly on the point cloud, avoiding building an explicit traversability map. The PE-RRT* algorithm is described in detail in <ref>. For each node, proprioception and external perception are performed as in subsections <ref> and <ref>, and parameters are estimated in real time as in subsection <ref>. The fusion of proprioception and external perception to generate the S-Plane is described in subsection <ref>. For ease of understanding, we first introduce the relevant mathematical background in <ref>.
§.§ MV-GPR
Gaussian process (GP) regression has been proven to be effective in robot navigation<cit.><cit.>. However, the classical GP cannot deal with multi-response problems because of its definition on ℝ. As a result, the correlation between multiple tasks cannot be taken into consideration. To overcome this drawback, <cit.> proposes multivariate Gaussian process regression (MV-GPR) to perform multi-output prediction. Its precise definition based on Gaussian measures and the existence proof are introduced in <cit.>.
f represents a multivariate Gaussian process with its mean function u: 𝒳 ↦ ℝ^d, kernel k: 𝒳×𝒳 ↦ ℝ and positive semi-definite parameter matrix Ω ∈ ℝ^d× d.
A multivariate Gaussian process can be denoted as f ∼ ℳ𝒢𝒫( u,k,Ω ). For n pairs of observations {( x_i,y_i ) } _i=1^n, x_i ∈ ℝ^p, y_i ∈ ℝ^1× d, we assume the following model:
f ∼ ℳ𝒢𝒫( u,k',Ω )
Different from the conventional GPR method, MV-GPR adopts a noise-free regression model, thus y_i=f( x_i ) for i = 1,⋯,n. The noise variance term σ_n^2 is instead added into the kernel, k'( x_i,x_j ) = k( x_i,x_j ) +δ_ij σ_n^2, in which δ_ij=1 if i=j, otherwise δ_ij=0.
With the matrix form [ f( x_1 ),⋯,f( x_n ) ] ^T ∈ ℝ^n× d,
the joint matrix-variate Gaussian distribution <cit.> can be represented as:
[ f( x_1 )^T,⋯,f( x_n )^T ] ^T ∼ ℳ𝒩( M, K', Ω )
where the mean matrix M ∈ ℝ^n× d, the covariance matrices K' ∈ ℝ^n× n and Ω ∈ ℝ^d× d, and X=[ x_1,⋯,x_n ] ^T represents the locations of the training set.
To predict the variable f_*=[ f_*,1,⋯,f_*,m ] ^T at the locations X_*=[ x_n+1,⋯,x_n+m ] ^T, where m is the size of the test set, the joint distribution of
the training observations Y=[ y_1^T,⋯,y_n^T ] ^T and f_* is
[ Y; f_* ] ∼ ℳ𝒩( 0, [ K'( X,X ), K'( X_*,X ) ^T; K'( X_*,X ), K'( X_*,X_* ) ], Ω )
where K' is the covariance matrix whose (i, j)-th element is [ K' ] _ij=k'( x_i,x_j ). Based on the marginalization and conditional distribution theorems<cit.><cit.>, the predictive distribution is derived as
p( f_*|X,Y,X_* ) = ℳ𝒩( M̂, Σ̂, Ω̂ )
where
M̂ = K'( X_*,X ) ^T K'( X,X ) ^-1 Y
Σ̂ = K'( X_*,X_* ) - K'( X_*,X ) ^T K'( X,X ) ^-1 K'( X_*,X )
Ω̂ = Ω
According to the above formulas, the expectation and covariance are respectively 𝔼[ f_* ] = M̂ and cov( vec( f_*^T ) ) = Σ̂ ⊗ Ω̂. When the dimension of the output variable is d=1 and the covariance matrix Ω = I, the process reduces from the multivariate to the univariate case.
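As a concrete illustration of the predictive equations above, the following minimal NumPy sketch computes the matrix-variate predictive mean and covariances for an SE kernel. The function names, the toy data and the choice of Ω are our own illustrative assumptions, not the implementation used in this work.

import numpy as np

def se_kernel(A, B, s_f=1.0, l=1.0, sigma_n=0.0):
    # squared-exponential kernel k'(x, x'); the noise term enters the diagonal only
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    K = s_f**2 * np.exp(-d2 / (2 * l**2))
    if sigma_n > 0 and A.shape == B.shape and np.allclose(A, B):
        K = K + sigma_n**2 * np.eye(len(A))
    return K

def mvgpr_predict(X, Y, X_star, Omega, s_f=1.0, l=1.0, sigma_n=1e-2):
    """Predictive mean M_hat, row covariance S_hat and (unchanged) column
    covariance Omega of the noise-free MV-GPR model."""
    K = se_kernel(X, X, s_f, l, sigma_n)        # K'(X, X)
    Ks = se_kernel(X, X_star, s_f, l)           # K'(X, X_*), shape (n, m)
    Kss = se_kernel(X_star, X_star, s_f, l)     # K'(X_*, X_*)
    alpha = np.linalg.solve(K, Ks)              # K'(X,X)^{-1} K'(X,X_*)
    M_hat = alpha.T @ Y                         # (m, d) predictive mean
    S_hat = Kss - Ks.T @ alpha                  # (m, m) predictive row covariance
    return M_hat, S_hat, Omega

# toy usage: 2D inputs (x, y) -> 3 outputs (z, roll, pitch)
X = np.random.rand(20, 2)
Y = np.column_stack([np.sin(X[:, 0]), 0.1 * X[:, 1], 0.05 * X.sum(axis=1)])
M_hat, S_hat, Omega_hat = mvgpr_predict(X, Y, np.array([[0.5, 0.5]]), Omega=np.eye(3))
var_zrp = np.outer(np.diag(S_hat), np.diag(Omega_hat))  # per-output variances (Kronecker diagonal)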
§.§ PE-RRT*
Fig.<ref> shows the structure of PE-RRT*. When a 2D node containing the x and y coordinates is obtained by the "Sample" and "Steer" operations, the plane estimation module (shown in the blue box) tries to find its corresponding S-Plane.
In this module, the Surface Plane (Surf-Plane), Proprioception Support Plane (Pro-Plane) and External Perception Support Plane (EP-Plane) are introduced to calculate the S-Plane. In the above planes, "Height" means the z-coordinate of the plane center, and "Roll" and "Pitch" mean the orientation. The Surf-Plane is first fitted from the point cloud based on the RANSAC method<cit.>.
Using the Prev-Trajectory traversed by the robot as the training set, MV-GPR is applied to generate the height and orientation of the Pro-Plane. The vegetation depth of a new node is generated by single-response MV-GPR, where the output of the training set is obtained by subtracting the Prev-Trajectory height from the Surf-Plane height.
The EP-Plane height can be obtained by subtracting the vegetation depth from the Surf-Plane height, and its orientation is obtained directly from the Surf-Plane.
Combining the height, roll and pitch from the Pro-Plane and EP-Plane based on variable weights calculated from the uncertainty, we get the S-Plane, on which traversability (including uncertainty, vegetation height and slope) can be evaluated. After the 'Obstacle' and 'Inflation' checks and the 'PurnBranch' operation, the 'Connect' and 'Optimize' operations are performed. Thus a new 3D node is obtained and the RRT tree can be expanded.
The PE-RRT* algorithm is based on the informed-RRT* algorithm <cit.>, which is widely used in the field of path planning, and efficiently integrates the process of S-Plane estimation into the RRT tree expansion. In particular, we introduce an inflation radius during the sampling process to enhance safety. The flow of PE-RRT* is shown in Alg.<ref>. Some new subfunctions presented in Alg.<ref> are described as follows, while subfunctions common to the informed-RRT* algorithm can be found in <cit.><cit.>.
Surf-Plane _Surf, Pro-Plane _Pro, EP-Plane _EP and S-Plane _S all consist of a 3D plane center point, roll angle and pitch angle.
* Proprioception(ζ,x_new): Given the robot's Prev-Trajectory ζ and the node's 2D coordinate x_new, the Pro-Plane is returned. The implementation will be discussed in detail in part <ref>.
* ExPerception(_Surf,ζ,x_new): Given the robot's Prev-Trajectory ζ, the Surf-Plane _Surf and the node's 2D coordinate x_new, the EP-Plane is returned. The implementation will be discussed in detail in part <ref>.
* SupportFuse(_EP,_Pro): Given the Pro-Plane _Pro and EP-Plane _EP, the fused S-Plane is returned. The implementation will be discussed in detail in part <ref>.
* ObsCheck(_Surf, _S): Given the S-Plane _S and Surf-Plane _Surf, we use the height h of the vegetation as the criterion for judging obstacles; it can be obtained as h=z_Surf-z_S, where z_S represents the height of _S and z_Surf represents the height of _Surf. When the vegetation in an area is too high, there are usually rigid trees, which can cause collisions. So we define a threshold value h_crit: when h>h_crit, the node is considered to be an obstacle and the function returns "True", otherwise "False".
* InflationCheck( _S, _Obs): Given the S-Plane _S and the obstacle set _Obs, we define an inflation radius r as shown in Fig.<ref>. If, for every element in _Obs, its Euclidean distance in the 2D x-y plane from _S is greater than r, then the function returns "True", otherwise "False". A minimal sketch of these two checks is given after this list.
* PurnBranch(T, _S): Given the RRT tree T and S-Plane _S, for each node in T, if its Euclidean distance in 2D x-y space from _S is smaller than r, the node and its branch will be deleted.
* TraEvaluation(_S): Given the S-Plane _S, the traversability is obtained from the slope and the uncertainty of _S. The implementation will be discussed in detail in part <ref>.
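The following short Python sketch illustrates the ObsCheck and InflationCheck logic described above; the plane and obstacle containers, as well as the default thresholds, are simplified stand-ins for the data structures used in the implementation.

import numpy as np

def obs_check(z_surf, z_s, h_crit=0.5):
    # vegetation height h = z_Surf - z_S; the node is an obstacle if h > h_crit
    return (z_surf - z_s) > h_crit

def inflation_check(node_xy, obstacles_xy, r=0.25):
    # True if every known obstacle lies farther than the inflation radius r
    # from the node in the 2D x-y plane
    if len(obstacles_xy) == 0:
        return True
    d = np.linalg.norm(np.asarray(obstacles_xy) - np.asarray(node_xy), axis=1)
    return bool(np.all(d > r))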
§.§ S-Plane Estimation
In order to generate the S-Plane, we perform proprioception and external perception on the node to generate the Pro-Plane and EP-Plane respectively.
§.§.§ Proprioception
The proprioception of the robot usually depends on the sensors of the robot itself (wheel speedometer, IMU, etc.), which usually causes cumulative errors. In our experiments, FAST-LIO2.0<cit.> is adopted as the odometer, in which the information of the IMU and LiDAR is fused to improve the positioning accuracy.
In this module, MV-GPR is used to estimate the Pro-Plane _Pro of a new node. To reduce the computational expense, the training size has to be limited<cit.>. We record the position { x_i,y_i,z_Pro,i} _i=1:N and pose { r_Pro,i} _i=1:N, { p_Pro,i} _i=1:N of the previous N steps of the robot. The training input data comprises the horizontal positions of the prev-trajectory X=[ [ x_1,y_1 ] ^T,⋯,[ x_N,y_N ] ^T] ^T ∈ ℝ^N× 2, while the output data is defined as Y_Pro=[ [ z_Pro,1,r_Pro,1,p_Pro,1] ^T,⋯,[ z_Pro,N,r_Pro,N,p_Pro,N] ^T] ^T ∈ ℝ^N× 3. Note that in order to ensure that the yaw angle makes no difference to the slope, we extract the roll r and pitch p from the rotation matrix R^i, which can be obtained from the odometer. And Y_Pro,i=[ z_Pro,i,r_Pro,i,p_Pro,i] ^T for i=1,⋯,N, where
p_Pro,i = atan2( R_31,i, √( ( R_32,i) ^2+( R_33,i) ^2))
r_Pro,i = atan2( -R_32,i/cos( p_Pro,i), R_33,i/cos( p_Pro,i)) .
Quantifying uncertainty is crucial for assessing the accuracy of plane estimations, which will be described in detail in <ref>.
For proprioception, the uncertainty σ_n,Pro^2 of the training set comes from the TF, which is set to a constant value in our experiment. In MV-GPR, the covariance matrix depends on the inputs and the kernel function k. Compared to other kernel functions (such as linear, rational quadratic and Matérn<cit.>), the squared exponential (SE) kernel is more commonly used due to its simple form and many properties such as smoothness and integrability with other functions. The kernel is defined as:
k_SE( x,x') = s_f^2 exp( - ‖ x-x'‖ _2^2/2l^2)
where s_f^2 is the overall variance and l is the kernel length scale. Due to the properties of the SE kernel, the larger the Euclidean distance between inputs, the larger the variances of z, r and p become, which means that the Pro-Plane estimated by proprioception becomes more uncertain.
We take the 2D coordinates x_new=[x_*,y_*] of a single node as the input of the test set, and {X,Y_Pro} as the training set. According to formulas <ref> <ref> <ref>, we obtain the predicted height ẑ_Pro,* and pose r̂_Pro,*, p̂_Pro,* of the node. Thus, the estimate of the Pro-Plane is _Pro={x_*,y_*,ẑ_Pro,*,r̂_Pro,*,p̂_Pro,*}. The height variance σ_Pro,z,*^2, roll variance σ_Pro,r,*^2 and pitch variance σ_Pro,p,*^2 can be obtained from the Kronecker product of Σ̂ and Ω̂.
§.§.§ External Perception
Compared with proprioception, external perception relies on the point cloud map generated by the LiDAR. To get a new EP-Plane _EP, we first fit the Surf-Plane _Surf corresponding to the 2D node. Compared to the SVD method used in PF-RRT*<cit.>, we adopt the RANSAC method to fit the plane, which can avoid the influence of tall rigid obstacles (such as tall trees and large stones) on the slope of the fitted Surf-Plane.
For the slope estimation of the EP-Plane, we consider the roll and pitch of the EP-Plane and Surf-Plane to be the same: r_EP,*=r_Surf,*, p_EP,*=p_Surf,*, due to the assumption of uniformity and continuity of the penetrable vegetation. And so are the corresponding variances: σ_EP,r,*^2 = σ_Surf,r,*^2, σ_EP,p,*^2 = σ_Surf,p,*^2. The variances σ_Surf,r,*^2 and σ_Surf,p,*^2 are obtained from the empirical formulas:
σ_Surf,r,*^2 = κ_r ∑_k=1^K [ n·( x_ ,*^k-x_Surf,*) ] ^2/(K-1)
σ_Surf,p,*^2 = κ_p ∑_k=1^K [ n·( x_ ,*^k-x_Surf,*) ] ^2/(K-1)
where the Surf-Plane envelops K points of the point cloud map, the k-th point's 3D coordinate is x_ ,*^k ∈ ℝ^3, and the plane center is x_Surf,* ∈ ℝ^3. n represents the normal vector of the Surf-Plane. κ_r and κ_p are constant coefficients.
The estimation of z_EP is more complicated. The vegetation depth H is introduced as an intermediate variable for estimating z_EP. Take the Prev-Trajectory X=[ [ x_1,y_1 ] ^T,⋯,[ x_N,y_N ] ^T] ^T as the inputs and the corresponding vegetation depths Y=[ H_1,⋯,H_N ] ^T as the outputs. The i-th vegetation depth H_i can be obtained as H_i=z_Surf,i-z_Pro,i, where z_Surf,i is the Surf-Plane height. Its uncertainty σ_H,i^2=σ_Pro,z,i^2+σ_Surf,z,i^2 contains the uncertainty σ_Pro,z,i^2 from the TF and the uncertainty σ_Surf,z,i^2 from the Surf-Plane due to the independence assumption. And σ_Surf,z,i^2 is defined as:
σ_Surf,z,i^2=∑_k=1^K( z_ ,i^k-z_Surf,i) ^2/(K-1)
where the height of the k-th point is z_ ,i^k, and the height of the plane center is z_Surf,i for the i-th Surf-Plane.
Thus the vegetation depth Ĥ_* of a new node and its variance σ_H,*^2 can be obtained based on equations <ref> <ref> <ref>. For the new node, the height of the EP-Plane is ẑ_EP,*=z_Surf,*-Ĥ_*, and its variance σ_EP,*^2=σ_H,*^2+σ_Surf,z,*^2 consists of two parts: the covariance σ_H,*^2 generated by the GPR and the covariance σ_Surf,z,*^2 of the Surf-Plane. Note that the calculation of σ_Surf,z,*^2 for the Surf-Plane is consistent with formula <ref>.
§.§.§ Parameter Estimation
As the robot moves forward, in order to ensure the accuracy of MV-GPR, it is necessary to estimate its parameters in real time. For proprioception, which involves a 3-variate Gaussian process, the estimated parameters include the kernel parameters s_f^2, l^2 and the covariance matrix Ω = ΦΦ^T, where, for φ_11,φ_22,φ_33,φ_31,φ_21,φ_32 ∈ ℝ:
Φ = [
e^φ_11 0 0
φ_21 e^φ_22 0
φ_31 φ_32 e^φ_33
]
to ensure the positive definiteness of the matrix.
We use the maximum likelihood method to estimate the parameters. For negative log marginal likelihood
ℒ = nd/2 ln( 2π) + d/2 ln| K+σ_n^2 I | + n/2 ln| Ω |
+1/2 tr( ( K+σ_n^2 I) ^-1 Y Ω^-1 Y^T)
The derivatives of the negative log marginal likelihood with respect to the parameters s_f^2, l^2, φ_ii and φ_ij can be obtained; see <cit.> for the derivation.
For external perception, which is a univariate Gaussian process, we only need to estimate the kernel parameters s_f^2, l^2.
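A minimal sketch of this likelihood, reusing the se_kernel helper from the MV-GPR sketch above, is given below; the parameter packing and the use of a generic optimizer such as scipy.optimize.minimize are our own illustrative choices.

import numpy as np

def nll_mvgpr(params, X, Y, kernel):
    """Negative log marginal likelihood of the noise-free MV-GPR model with the
    column covariance parametrized as Omega = Phi Phi^T (exponentiated diagonal
    keeps Omega positive definite).  params = [log s_f^2, log l^2, phi...],
    where phi holds d diagonal entries followed by the d(d-1)/2 lower entries."""
    n, d = Y.shape
    log_sf2, log_l2, phi = params[0], params[1], params[2:]
    Phi = np.zeros((d, d))
    Phi[np.diag_indices(d)] = np.exp(phi[:d])
    Phi[np.tril_indices(d, -1)] = phi[d:]
    Omega = Phi @ Phi.T
    K = kernel(X, X, s_f=np.exp(0.5 * log_sf2), l=np.exp(0.5 * log_l2), sigma_n=1e-2)
    _, logdet_K = np.linalg.slogdet(K)
    _, logdet_O = np.linalg.slogdet(Omega)
    quad = np.trace(np.linalg.solve(K, Y) @ np.linalg.solve(Omega, Y.T))
    return (n * d / 2) * np.log(2 * np.pi) + (d / 2) * logdet_K \
           + (n / 2) * logdet_O + 0.5 * quad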
§.§.§ Plane Fusion
The vegetation height varies in different environments. On grassland, the vegetation is usually short, and the point cloud returned by the LiDAR is relatively smooth; in bushes, the vegetation is usually high and uneven, and the point cloud is rougher; and tall trees are considered to be impassable. As shown in Fig.<ref>, for the Pro-Plane, the source of variance is mainly the Euclidean distance, so it can more accurately estimate the terrain conditions of nearby areas, but gives a poor estimate for far terrain; for the EP-Plane, the source of variance includes both the distance and the surface condition of the point cloud. In order to accurately estimate the supporting ground in different environments, the variance is used as a weight to fuse the Pro-Plane and EP-Plane. We define the weight as follows:
w_[·] = σ_EP,[·],*^2/( σ_EP,[·],*^2+σ_Pro,[·],*^2)
where the symbol [·] refers to r, p and z to simplify the formula. Thus the estimate of the S-Plane _S,*={ x_*,y_*,ẑ_S,*,r̂_S,*,p̂_S,*} can be obtained as:
[·]_S,* = w_[·] [·]_Pro,*+( 1-w_[·]) [·]_EP,*
When the point cloud in the area where the robot is driving is relatively cluttered, w_z, w_r and w_p will become larger, and the robot will trust proprioception more; otherwise, the robot will trust external perception more, as shown in Fig.<ref>.
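A minimal sketch of this variance-weighted fusion is shown below; the dictionary representation of the planes is our own simplification.

def fuse_planes(pro, ep, var_pro, var_ep):
    """Fuse Pro-Plane and EP-Plane estimates of height (z), roll (r) and pitch (p)
    using the EP/(EP+Pro) variance weights defined above."""
    s_plane, weights = {}, {}
    for key in ('z', 'r', 'p'):
        w = var_ep[key] / (var_ep[key] + var_pro[key])   # weight of proprioception
        weights[key] = w
        s_plane[key] = w * pro[key] + (1.0 - w) * ep[key]
    return s_plane, weights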
Note that when the vegetation height exceeds the threshold h_crit, it is considered as an obstacle, and the RRT tree will delete the node and nearby nodes to ensure the safety of the robot during driving.
In the process of RRT tree generation, a proper introduction of traversability can improve the safety and stability of the path. When the vehicle is driving, we often pay less attention to the road conditions in small areas (pebbles, clods, etc.). Instead, we are more concerned with the slope s, the uncertainty ε and the vegetation height h. On terrain with shallow vegetation and small slope, the robot is less likely to slip. The slope s can be obtained from the roll and pitch of the S-Plane:
s = arccos( cos( r̂_S,*) cos( p̂_S,*) )
And ε can be obtained from σ_S,z,*^2, σ_S,r,*^2, σ_S,p,*^2:
ε = σ_S,z,*^2+μ ( σ_S,r,*^2+σ_S,p,*^2)
where μ is a constant coefficient. Thus, the traversability τ can be described as:
τ = α_1 s/s_crit+α_2 ε/ε_crit+α_3 h/h_crit
where α_1, α_2, and α_3 are weights which sum to 1. s_crit, ε_crit, and h_crit, which represent the maximum allowable slope, uncertainty, and vegetation height respectively, are critical values beyond which collision or rollover may occur. In PE-RRT*, the cost includes the Euclidean distance d from the parent node and the traversability: Cost=d/( 1-τ). When the RRT tree is expanded, the nodes with lower cost are selected first. As the number of sampled points increases, the generated path gradually tends towards the optimum.
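The node cost used during tree expansion can be sketched as follows; the weights and critical values below are illustrative placeholders, not the values used in the experiments.

import numpy as np

def node_cost(d, roll, pitch, var_z, var_r, var_p, veg_h,
              s_crit=0.5, eps_crit=0.1, h_crit=0.5, mu=0.1,
              alpha=(0.4, 0.3, 0.3)):
    """Cost = d / (1 - tau) with tau built from slope, uncertainty and vegetation height."""
    s = np.arccos(np.cos(roll) * np.cos(pitch))          # slope of the S-Plane
    eps = var_z + mu * (var_r + var_p)                   # uncertainty measure
    tau = alpha[0] * s / s_crit + alpha[1] * eps / eps_crit + alpha[2] * veg_h / h_crit
    tau = min(tau, 0.999)                                # guard against division by zero
    return d / (1.0 - tau)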
§ EXPERIMENTS
In real scenarios, we conduct experiments to verify the effectiveness of our work utilizing the physical platform illustrated in Fig.<ref>. Our algorithm runs under the ROS Melodic operating system, generating the global path at 2 Hz and the local path at 10 Hz with an NMPC method using CasADi. The resolution of the global map is set to 2 cm, the radius for plane estimation is 15 cm, and the inflation radius is set to 25 cm. Note that the starting point is the origin of the map, i.e. x_start=[ 0,0 ] ^T. Experiments are conducted in three different scenarios: an untended hillside, a lushly planted garden and a regularly maintained park.
In the first scenario, the target point is set to be x_goal=[ 11.5,2.7 ] ^T. The robot traverses an incline section populated with weeds, featuring grass of varying heights ranging from 0.1m to 0.2m, before proceeding to a relatively flat crest area. Finally, the robot navigates an uphill and reaches the designated target point. In this given scenario, the S-Plane exhibits significant variation in both height and slope. Closer to the robot, the uncertainty of proprioception is relatively small, resulting in a higher weight. In contrast, in regions further away from the robot, the uncertainty of proprioception increases substantially due to MV-GPR, resulting in a decrease in weight and higher reliance on external perception.
The screenshots of our algorithm in the main scenario are shown in Fig <ref>. The robot chooses to generate path where the Vegetation Height is smaller, as shown in Fig.<ref>(a). If the robot detects an obstacle (long red bar), it navigates avoiding the obstacle and continues moving forward, as depicted in Fig.<ref>(b). Once the robot enters a safe area with little grass cover, it engages in longer-range global path planning preferring gentle slopes of the supporting plane, as illustrated in Fig<ref>(c). In the end, the robot reaches the target point and stops, as depicted in Fig<ref>(d).
In order to evaluate the proposed method, we compare it with 3 baseline approaches:
* PF-RRT*<cit.>: RRT* in which each node fits the plane directly on the point cloud map.
* Pro-RRT*: RRT* in which each node estimates the S-Plane directly based on the Prev-trajectory.
* RRT*+PrevMap: Estimate the S-Plane based on Gaussian Process Regression to generate the previous traversability map. Based on the map, RRT* is used to obtain the global path.
The trajectories generated by each algorithm are shown in different colors in Fig <ref>. PF-RRT* cannot generate globally optimal paths in terms of traversability, owing to its inaccurate prediction of the height and slope of the sampled points caused by the difference between the S-Plane and the Surf-Plane. RRT*+PrevMap fails and collides with the tree since it cannot use the point cloud information to avoid obstacles. RRT*+PrevMap also suffers from several lags because it has to build an explicit traversability map, which is time consuming. Ours efficiently and accurately estimates the height and slope of the nodes, ensuring the asymptotic optimality of global path generation and smooth obstacle avoidance. Thanks to the precise estimation, our algorithm avoids densely distributed contour lines, which indicate steep slopes, and chooses gentler ones, which is not possible with the other algorithms.
To intuitively compare the performance of different algorithms, we adopt the following indicators to compare the four algorithms:
• Path len: length of the path from the start to the end.
• Safety deg: minimum distance to the obstacle.
• Cons time: time consumed from start to goal.
• Comp time: computation time to generate a global path.
• Speed dev: speed deviation of the robot, reflecting the stability of the robot during navigation.
And the results of the evaluation are presented in Table <ref>.
It shows that our algorithm can efficiently provide a feasible and safe path for global planning, avoiding collisions and avoiding regions with high vegetation height. Due to the combined sampling algorithm, the PE-RRT* algorithm saves a lot of time compared to the RRT*+PrevMap method.
In order to demonstrate the generalizability of our algorithm, we conduct experiments in other scenarios, as shown in Fig <ref>.
In the garden scenario, the robot sets off from bare ground, avoids the bushes outside the inflation radius, and finally reaches the target point. In the park scenario, the robot prefers paths on which the grass is shorter while avoiding trees. Here, the height and slope of the support plane are almost constant with occasional small changes, resulting in low uncertainty and a high weight of the proprioception. More details are available at [Video: <https://youtu.be/EeZ-JXaiXuw>.].
§ CONCLUSION
This paper proposes a novel path planning method (PE-RRT*) on vegetated terrain for wheeled robots based on a sampling tree and support plane estimation. An inflation radius is also added to the RRT tree to avoid collisions. Proprioception and external perception are fused to generate the support plane, in which MV-GPR is used to predict the roll, pitch and height of the plane. We integrate the PE-RRT*, NMPC and SLAM modules into a complete system for safe autonomous navigation. In addition, we compare our method with three baselines (PF-RRT*, Pro-RRT* and RRT*+PrevMap) in real scenarios and conduct experiments in different scenarios. The experimental results show that our method is safer and more efficient than other methods in global path planning.
|
http://arxiv.org/abs/2306.02760v2
|
20230605102853
|
A2B: Anchor to Barycentric Coordinate for Robust Correspondence
|
[
"Weiyue Zhao",
"Hao Lu",
"Zhiguo Cao",
"Xin Li"
] |
cs.CV
|
[
"cs.CV"
] | |
http://arxiv.org/abs/2306.03890v1
|
20230606174820
|
Bosonic excitation spectra of superconducting $\mathrm{Bi_2Sr_2CaCu_2O_{8+ฮด}}$ and $\mathrm{YBa_2Cu_3O_{6+x}}$ extracted from scanning tunneling spectra
|
[
"Thomas Gozlinski",
"Mirjam Henn",
"Thomas Wolf",
"Matthieu Le Tacon",
"Jรถrg Schmalian",
"Wulf Wulfhekel"
] |
cond-mat.supr-con
|
[
"cond-mat.supr-con"
] |
Physikalisches Institut, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
Physikalisches Institut, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen, Germany
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen, Germany
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen, Germany
Institute for Theory of Condensed Matter, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
Physikalisches Institut, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen, Germany
A detailed interpretation of scanning tunneling spectra obtained on unconventional superconductors enables one to gain information on the pairing boson. Decisive for this approach are inelastic tunneling events. Due to the lack of momentum conservation in tunneling from or to the sharp tip, those are enhanced in the geometry of a scanning tunneling microscope compared to planar tunnel junctions. This work extends the method of obtaining the bosonic excitation spectrum by deconvolution from tunneling spectra to nodal d-wave superconductors. In particular, scanning tunneling spectra of slightly underdoped Bi_2Sr_2CaCu_2O_8+δ with a T_c of 82 K and optimally doped YBa_2Cu_3O_6+x with a T_c of 92 K reveal a resonance mode in their bosonic excitation spectrum at Ω_res≈63 meV and Ω_res≈61 meV respectively. In both cases, the overall shape of the bosonic excitation spectrum is indicative of predominant spin scattering with a resonant mode at Ω_res<2Δ and overdamped spin fluctuations for energies larger than 2Δ. To perform the deconvolution of the experimental data, we implemented an efficient iterative algorithm that significantly enhances the reliability of our analysis.
Bosonic excitation spectra of superconducting Bi_2Sr_2CaCu_2O_8+δ and YBa_2Cu_3O_6+x extracted from scanning tunneling spectra
Wulf Wulfhekel
July 31, 2023
==============================================================================================================================
§ INTRODUCTION
With the intention to unravel the unconventional pairing mechanism in high-temperature superconductors, extensive effort has been put into extracting the spectral density of the pairing boson from experimental data <cit.>. An ever-present contender for this "bosonic glue" are antiferromagnetic spin fluctuations which have been extensively studied in the family of cuprate superconductors <cit.>. Such an electronic pairing mechanism leads to a heavy renormalization of the boson spectrum when entering the superconducting state. In the normal state, overdamped spin excitations form a broad and gapless continuum. In the superconducting state, they develop a spin gap of 2Δ, the minimum energy needed to create a particle-hole excitation, plus a rather long-lived resonance mode at Ω_res<2Δ inside the spin gap <cit.>. This resonance is made possible by the sign-changing (d-wave) symmetry of the superconducting gap and identified as a spin exciton <cit.>. The above mentioned behavior of the spin excitation spectrum has been directly observed in inelastic neutron scattering (INS) experiments <cit.> yielding strong evidence for spin-fluctuation mediated pairing. Since signatures of this resonance mode are also expected to be visible in optical, photoemission and tunneling spectra, a considerable number of studies tried to complete the picture using these techniques <cit.>, all probing a slightly different boson spectrum and facing complicated inversion techniques. Recently, machine learning algorithms entered the scene and their application to angle-resolved photoemission (ARPES) data proved to be a powerful concept to reverse-model the spin-spectrum, but this happens at the cost of a number of free parameters which cannot be easily mapped onto physical quantities <cit.>. In this work, we extracted the bosonic spectrum from the inelastic part of scanning tunneling spectra which we obtained on the cuprate superconductors Bi_2Sr_2CaCu_2O_8+δ (Bi2212) and YBa_2Cu_3O_6+x (Y123). In contrast to previous scanning tunneling spectroscopy (STS) <cit.> and break junction experiments <cit.> that focused on Bi2212, we obtain the boson spectrum without a functional prescription and over a wide energy range for both materials. It naturally exhibits the sharp resonance mode and overdamped continuum that are characteristic for the spin spectrum measured in INS.
Inelastic electron tunneling spectroscopy experiments using the tip of a scanning tunneling microscope (IETS-STM) have proven to be a powerful tool in the study of bosonic excitations of vibrational <cit.>, magnetic <cit.> or plasmonic <cit.> character in metals, single molecules and also superconductors. Due to the spatial confinement of the electrons in the apex of the STM tip, the wave vector of the tunneling electrons is widely spread and the local density of states (LDOS) of the tip becomes rather flat and featureless. Consequently, the generally momentum dependent tunnel matrix element can be considered momentum independent in the STM geometry <cit.>. As a result, the elastic contribution to the tunneling conductance σ^el becomes directly proportional to the LDOS of the sample, as has been shown by Tersoff and Hamann <cit.>. Similarly, the inelastic contribution σ^inel to the tunneling conductance is given by a momentum integrated scattering probability of tunneling electrons sharing their initial state energy with a final state electron and a bosonic excitation. The absence of strict momentum conservation opens the phase space for the excited boson and as a consequence, the inelastic contributions to the tunneling current can be a magnitude larger than in planar junctions <cit.>, in which the lateral momentum is conserved.
Previous IETS-STM experiments used this effect to determine the Eliashberg function α^2F(Ω) of the strong-coupling conventional superconductor Pb <cit.>, which contains the momentum integrated spectral density of the pairing phonon F(Ω), as well as the electron-phonon coupling (EPC) constant α(Ω) <cit.>. While for conventional superconductors, the Migdal theorem <cit.> allows to treat the electronic and phononic degrees of freedom to lowest order separately, largely simplifying the analysis of IETS spectra, the situation is less clear for unconventional superconductors with electronic pairing mechanism. Nevertheless, the theoretical description of IETS-STM spectra could be extended to the fully gapped Fe-based unconventional superconductors of s^± character <cit.>.
Strong coupling of electrons and spin fluctuations manifests in IETS-STM spectra as a characteristically lower differential conductance in the superconducting state compared to the normal state for energies slightly below 3Δ <cit.>. Also for these systems, the boson spectrum could be reconstructed from IETS-STM spectra by deconvolution <cit.>. In this work, we investigate to what extent this method can be extended to nodal d-wave superconductors. To do this, we follow the path of a deconvolution of scanning tunneling data, using a priori band structure for the normal state model and the inelastic scanning tunneling theory of unconventional superconductors derived by Hlobil et al. <cit.>.
§ OUTLINE OF THE EXTRACTION PROCEDURE
The total tunneling conductance σ^tot between a normal conducting tip and a superconductor comprises the elastic tunneling contribution σ^el, but also significant inelastic contributions σ^inel <cit.>. While the second derivative of the tunneling current d^2I/dU^2 obtained on conventional superconductors in the normal state is directly proportional to the Eliashberg function α^2F(Ω) <cit.>, the bosonic glue in unconventional superconductors is drastically renormalized upon entering the superconducting phase.
In the presence of strong inelastic contributions to the tunneling current, an inversion procedure à la McMillan and Rowell <cit.> cannot be used to extract the Eliashberg function from the superconducting spectrum. As was shown in Ref. <cit.>, the function g^2χ''(Ω) acts as the "generalized glue function" and is the analog of the Eliashberg function in superconductors driven by electronic interactions. As both phonons and spin fluctuations may couple to the tunneling quasiparticles, we define the bosonic spectrum
B(Ω) ≡ α^2F(Ω) + g^2χ''(Ω)
where g is the spin-fermion coupling constant and χ'' is the dimensionless, momentum integrated spin spectrum. The inelastic differential conductance for positive voltage at zero temperature is given by
σ^inel(eU) ∝ ∫_0^eU dΩ ν_s(eU-Ω) B(Ω)
=([ν_s·Θ]*[B·Θ]) (eU) .
While the explicit momentum dependence of the bosonic spectrum is lost in this form, it still contains the spin resonance at the antiferromagnetic ordering vector if antinodal points on the Fermi surface contribute significantly to the tunneling spectrum. As can be seen from Eq. (<ref>), B is the source function, the DOS in the superconducting state ν_s is the kernel and the inelastic tunneling conductance σ^inel is the signal. Θ denotes the Heaviside step function. The general aim in this work is to extract the function B(Ω) as accurately as we can from scanning tunneling spectra, which we do by deconvolution of Eq. (<ref>). We follow the step-by-step procedure below:
* Determination of the superconducting density of states ν_s
* Determination of the inelastic tunneling conductance σ^inel
* Extraction of B(Ω) by deconvolution of Eq. (<ref>)
Assumptions and limitations of our extraction procedure are discussed in Section <ref>.
§.§ Step 1: Determining ν_s
From a scanning tunneling spectrum below T_c we obtain the differential conductance dI/dU(eU) ≡ σ^tot(eU). This function consists of the purely elastic part σ^el and the inelastic part σ^inel. The elastic part is directly proportional to the superconducting density of states in the sample ν_s. In this step, we determine the functional form of the elastic contribution by fitting a model function to the low bias region of the dI/dU spectrum that keeps the complexity as low as possible while still capturing the relevant features of the band structure and pairing strength. We opt for a generalized Dynes function <cit.> with a momentum dependent gap:
σ^el_s(ω) = σ_0/𝒩 ∑_λ C_λ ∫^π/2_0 dφ ν^F_n,λ(φ) ×
× | ( ω+iΓ_λ(φ)/√(( ω + iΓ_λ(φ))^2-Δ_λ(φ)^2)) |
Here, 𝒩=∑_λ∫dφ ν^F_n,λ(k_F,φ) is a normalization factor, Δ_λ(φ)=Δ_0,λcos(2φ) is the d-wave pairing potential, Γ_λ(φ) = γ_λ· |Δ_λ(φ)| the quasiparticle scattering rate, ν^F_n(φ) is the angle-dependent DOS at the Fermi energy in the normal conducting phase, λ the band index and C_λ the relative tunneling sensitivity for the band. The function ν^F_n(φ) weighs gap distributions for different (k_x,k_y) by their abundance along the Fermi surface (FS). It is derived from a microscopic tight-binding approach that models the dispersion relation. In the case of Bi2212 we used a single-band model whereas for Y123 we considered two CuO_2 plane bands and one CuO chain band (see Appendix <ref>). It should be noted that, unlike in fully gapped superconductors, the inelastic spectrum can be non-zero down to vanishing bias voltage because Δ(k) has a nodal structure. This prevents us from directly assigning the differential conductance for e|U| < Δ to the purely elastic tunneling contribution as was possible for the s_± superconductor monolayer FeSe <cit.>. We will, however, start from here and refine σ^el in the next step.
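A minimal numerical sketch of this elastic conductance model is given below; the band-model callables, the toy parameters and the choice of taking the modulus of the complex Dynes expression are our own assumptions.

import numpy as np

def sigma_el(omega, nu_F, Delta0, gamma, C=None, n_phi=400):
    """Angle-integrated generalized Dynes conductance with d-wave gaps, in units
    of the normal-state conductance sigma_0.  nu_F is a list of callables
    nu_F[lam](phi) returning the normal-state angular DOS of band lam."""
    phi = np.linspace(0.0, np.pi / 2, n_phi)
    C = np.ones(len(Delta0)) if C is None else np.asarray(C)
    total, norm = 0.0, 0.0
    for lam in range(len(Delta0)):
        Delta = Delta0[lam] * np.cos(2 * phi)            # d-wave pairing potential
        Gamma = gamma[lam] * np.abs(Delta)               # quasiparticle scattering rate
        w = omega + 1j * Gamma
        dynes = np.abs(w / np.sqrt(w**2 - Delta**2))     # or |Re(...)| depending on convention
        nu = nu_F[lam](phi)
        total += C[lam] * np.trapz(nu * dynes, phi)
        norm += np.trapz(nu, phi)
    return total / norm

# toy check with one isotropic band: gap 60 meV, gamma = 0.19, bias 70 meV
print(sigma_el(0.070, [lambda p: np.ones_like(p)], Delta0=[0.060], gamma=[0.19]))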
§.§ Step 2: Determining σ^inel
We use the fact that σ^tot(ω)=σ^el(ω)+σ^inel(ω) and the physical constraints σ^inel(0) = 0 and σ^inel(e|U|>0)>0. For a slowly varying bosonic function, which we expect in the range 0<e|U|<Δ due to the quick reopening of the gap near the nodal parts of the Fermi surface, σ^inel is essentially given by the elastic contribution ν_s(ω) times some scalar, real factor. We thus assume that, in the range 0<e|U|<Δ, σ^el is well approximated by our Dynes fit times a factor η < 1. We approximated η using a boundary condition for the total number of states (see Appendix <ref>), and in order to keep the condition σ^el(∞) = σ_0, we scale up our experimental curve by 1/η instead of scaling down our fitted curve. We took care that the choice of this numerical factor, which simply helps to perform our deconvolution algorithm and paint a more realistic picture of σ^el, does not influence the extracted boson spectrum in a qualitative manner (see Appendix <ref>).
§.§ Step 3: Extracting B(Ω)
We compare two methods by which the boson spectrum is determined: direct deconvolution in Fourier space and the iterative Gold algorithm <cit.>. The advantage of the Gold algorithm is that for a positive kernel and signal, the result of this deconvolution method is always positive. This is in agreement with our physical constraint that the bosonic excitation spectrum is strictly positive. We used the implementation of the one-fold Gold algorithm in the TSpectrum class of the ROOT system <cit.> in C, wrapped in a small Python module.
The bosonic function from direct deconvolution in Fourier space is obtained from
B(Ω > 0) = ℱ^-1(ℱ(σ^inel)(t)/√(2π)ℱ(ν_s·Θ)(t))
where ℱ denotes a Fourier transform and ℱ^-1 the inverse transform.
The abrupt change in elastic conductance at zero energy (multiplication with the Heaviside distribution in Eq. (<ref>)) leads to heavy oscillations in the Fourier components. Therefore the result of this deconvolution procedure can contain non-zero contributions for E<0 and negative contributions for 0<E<Δ. They are exact solutions to Eq. (<ref>) but from a subset of non-physical solutions that we are not interested in. Because the solutions obtained in this way are highly oscillatory we show the result after Gaussian smoothing. In order to obtain a positive solution to Eq. (<ref>), the result of the direct deconvolution method is used as a first guess for the Gold algorithm. The results shown in this work are obtained after 2,000,000 iterations, at which point convergence has been reached.
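The direct Fourier-space step can be sketched as follows; this is a regularized variant (our choice of a small damping parameter and smoothing width) rather than the exact division of Eq. (<ref>), and the Gold algorithm itself is taken from ROOT's TSpectrum rather than reproduced here.

import numpy as np

def deconvolve_boson(sigma_inel, nu_s, dE, eps=1e-3, smooth=2.0):
    """Regularized Fourier deconvolution of sigma_inel = (nu_s*Theta)*(B*Theta),
    sampled on a uniform positive-energy grid with spacing dE, followed by
    Gaussian smoothing of width `smooth` (same units as dE)."""
    n = len(sigma_inel)
    S = np.fft.rfft(sigma_inel, 2 * n)           # zero-padding reduces wrap-around
    K = np.fft.rfft(nu_s, 2 * n) * dE            # kernel nu_s(E) Theta(E)
    B = np.fft.irfft(S * np.conj(K) / (np.abs(K) ** 2 + eps), 2 * n)[:n]
    half = max(1, int(4 * smooth / dE))          # short Gaussian smoothing kernel
    x = np.arange(-half, half + 1) * dE
    g = np.exp(-0.5 * (x / smooth) ** 2)
    return np.convolve(B, g / g.sum(), mode="same")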
§.§ Assumptions and limitations
In order for our extraction method to be applicable, several simplifying assumptions were made:
* Quasiparticles with energy ω couple to bosonic excitations of energy Ω and effective integrated density of states B(Ω). The k-dependence of the interaction is thus disregarded.
* σ^inel has a simple relation to σ^el for 0<e|U|<Δ (see Section <ref>) which is generally oversimplified for d-wave superconductors, especially near Δ.
In general, retrieving the source function from a convolution integral is an ill-posed problem which means that we only obtain one solution from a large set of valid solutions to the convolution equation.
Additional aspects that complicate the problem are the following:
* The kernel function ν_s(ω) is a guess which is very dependent on the modeling of the superconducting density of states and is further questioned by the lack of energy regions with purely elastic processes in the scanning tunneling spectrum. We are in fact on the verge of a necessity for blind deconvolution algorithms.
* Strong electron-boson coupling leads to spectral features of σ^el outside the gap that are neglected here as the contribution of σ^inel is expected to be much larger. They can in principle be reconstructed within an Eliashberg theory using the extracted boson spectrum. Using this refined σ^el and repeating the procedure until B(Ω) leads to the correct σ^el and σ^inel could further improve our result.
* Electronic noise in the recorded spectra is ignored and consequently ends up in either ν_s or B(Ω)
* With B(Ω) we obtain only an "effective tunnel Eliashberg function" which includes all bosonic excitations that are accessible to the tunneling quasiparticles. Hence, no disentanglement into lattice and spin degrees of freedom is possible.
* The k-dependence of B(Ω) is inaccessible
§ EXPERIMENTAL METHODS
We performed inelastic tunneling spectroscopy on a slightly underdoped Bi2212 sample with a T_c of 82 K (UD82) and an optimally doped Y123 sample with a T_c of 92 K (OP92) using a home-built STM with Joule-Thomson cooling <cit.>. The samples were cleaved at a temperature of 78 K in ultra-high vacuum (UHV) and immediately transferred into the STM. All spectra were recorded with a tungsten tip. The set-up also allows varying the temperature of the STM in order to record spectra above T_c. Due to the large inhomogeneity of scanning tunneling spectra on Bi2212 <cit.>, we show averaged spectra recorded at positions where the dip-hump feature can be clearly seen. In the case of Y123, the overall spectral inhomogeneity was lower (see Appendix <ref>) and we show spectra which are averaged over a 50×50 nm^2 area where the dip-hump feature was ubiquitous.
ยง THE CASE FOR BI_2SR_2CACU_2O_8+ฮ
ยง.ยง Experimental results
Fig. <ref> shows experimental dI/dU and d^2I/dU^2 spectra recorded at 0.7 K and 84 K. In order to remove a tilt in the spectra that stems from a slope in the density of states (DOS) due to hole doping <cit.>, we followed the standard procedure and symmetrized the spectra in Fig. <ref>.
The dI/dU spectrum for superconducting Bi2212 in Fig. <ref>(a) shows a single but smeared gap with residual zero-bias conductance due to the nodal d_x^2-y^2 gap symmetry. Similarly, the coherence peaks are smeared due to the gap symmetry and possibly also due to short quasiparticle lifetimes. This is typical for the underdoped regime and may be caused by its proximity to the insulating phase <cit.>. Outside the gap, a clear dip of the superconducting spectrum below the normal conducting spectrum, followed by a hump reapproaching it, is visible. The V-shaped conductance in the normal state hints towards strong inelastic contributions to the tunneling current from overdamped electronic excitations that become partly gapped in the superconducting state as discussed in Ref. <cit.>. The hump shows up as a peak in the second derivative of the tunneling current that exceeds the curve of the normal state at ≈120 meV in Fig. <ref>(b). The relatively round shape of the superconducting gap in Bi2212 is atypical for a classic d-wave superconductor, in which the naive expectation is a V-shaped conductance minimum. As will be shown later on, the round shape of the gap can be generated without admixture of an s-wave pairing term by respecting the anisotropy of the Fermi surface in the normal state. The Fermi surface and gap anisotropy are summarized schematically in Fig. <ref>(a).
ยง.ยง Extraction of the bosonic spectrum
We followed the step-by-step extraction procedure outlined in Section <ref> starting from the determination of the superconducting density of states ν_s. The optimal Dynes fit to our experimental spectrum in the superconducting state is shown in Fig. <ref>(b) with Δ_0 = 63.31 meV (Δ_max = 59.14 meV and Δ̄ = 49.60 meV) and γ = 0.19. The resulting (in)elastic contribution is shown in red (grey) in Fig. <ref>(c). Here, a numerical scaling factor of η = 0.6 was used. The value for Δ lies within the range of previously reported gap values on the Bi2212 surface <cit.>, especially in the slightly underdoped regime, where variations of the local gap from the average gap tend to be larger <cit.>.
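As an illustration of the kind of elastic model entering such a fit, the sketch below evaluates a Fermi-surface-averaged Dynes DOS. The d_x^2-y^2 gap form Δ(φ) = Δ_0 cos(2φ), the weighting by the normal-state DOS ν_n^F(φ) and the choice Γ = γΔ_0 for the broadening are assumptions made here for illustration and may differ in detail from the fit used in this work.

```python
import numpy as np

def dynes_dos_dwave(E, Delta0, gamma, phi, nuF):
    """Angle-averaged Dynes DOS for a d-wave gap, weighted by the normal-state
    DOS nuF(phi) along the Fermi surface. E: energy grid, gamma: dimensionless
    broadening (here scaled by Delta0, an assumption of this sketch)."""
    Delta = Delta0 * np.cos(2.0 * phi)            # d_{x^2-y^2} gap (assumed form)
    z = E[:, None] + 1j * gamma * Delta0          # Dynes-broadened energy
    nu = np.abs(np.real(z / np.sqrt(z ** 2 - Delta[None, :] ** 2)))
    w = nuF / np.trapz(nuF, phi)                  # DOS-weighted angular average
    return np.trapz(nu * w[None, :], phi, axis=1)
```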
The regularized bosonic function from direct deconvolution in Fourier space is shown in Fig. <ref>(d) in orange. The contributions at low energies are an artefact from the scaling with factor η. Despite our uncompromising simplifications, the bosonic spectrum recovers well the tendency of the total conductance in the forward convolution (Fig. <ref>(c) orange) and shows the expected behaviour for coupling to spin degrees of freedom at medium and high energies, i.e. a resonance mode at Δ < E < 2Δ and an approach to the normal state bosonic function B for E ≳ 3Δ <cit.>, which, in contrast to the Eliashberg function in the case of phonon-mediated pairing, remains finite for energies well above 2Δ. While the long-lived resonance mode is associated with a spin resonance due to the sign-changing gap function, the broad high-energy tail of the bosonic spectrum is due to the coupling to overdamped spin fluctuations, or paramagnons <cit.>. Resonant inelastic x-ray scattering (RIXS) studies have shown that these paramagnons dominate the bosonic spectrum for energies larger than ≈100 meV in several families of cuprates as almost all other contributors, e.g. phonons, lie lower in energy <cit.>.
By application of the Gold algorithm we obtained the bosonic spectrum shown in green in Fig. <ref>(d). Again, the high value at E=0 is a consequence of the scaling with factor η. The sharp peak at 10 meV is due to inadequacies of our elastic fit in the region of the coherence peak. It is e.g. not present in our analysis of Y123 (see Section <ref>) and vanishes once one takes an elastic DOS with a larger gap (here 65 meV, not the least-squares minimum), as we show with the thin dark green line in Fig. <ref>(d). We could get rid of negative contributions and find a bosonic function that recovers well the total conductance (Fig. <ref>(c) green), especially the dip-hump structure, and shows a very clear resonance at Ω_res ≈ 63 meV ≈ 1.0 Δ_0 ≈ 1.1 Δ_max ≈ 1.3 Δ̄.
The resonance mode extracted in this work is higher in energy than reported in inelastic neutron scattering experiments (Ω_res ≈ 43 meV at the antiferromagnetic ordering vector) <cit.> and closer to the resonance determined by optical scattering (Ω_res ≈ 60 meV) <cit.>. Due to the loss of momentum information in tunneling, the centre of the resonance is expected to be shifted to higher energies compared to the INS results <cit.>.
Due to the large inhomogeneity of Δ on the surface of Bi2212 <cit.>, it is more instructive to compare the ratio Ω_res/Δ to other works rather than the absolute value of Ω_res. The ratio Ω_res/Δ̄ = 1.3 lies within the current range of error of Ω_res/Δ = 1.28 ± 8 by Yu et al. <cit.>. In most other extraction methods of the bosonic mode energy, the normal state DOS is not respected, which is why, depending on the method, the Δ_0 used there is most similar to what is here called Δ_max or Δ̄. Δ_max is the largest gap value that contributes to the elastic conductance spectrum and Δ̄ the momentum averaged and DOS weighted gap.
ยง THE CASE FOR YBA_2CU_3O_6+X
ยง.ยง Experimental results
The dI/dU spectrum for superconducting Y123 in Fig. <ref>(a) is qualitatively in excellent agreement with previous STM measurements <cit.> and shows three low-energy features: (i) a superconducting coherence peak at ≈25 meV that is sharper than in Bi2212, (ii) a high-energy shoulder of the coherence peak and (iii) a low-energy peak at ≈10 meV. The high-energy shoulder as well as the sub-gap peak are believed to arise from the proximity-induced superconductivity in BaO planes and CuO chains <cit.>. This would certainly account for the fact that these states are missing in the Bi-based compounds and that the sub-gap peak shows a direction-dependent dispersion in ARPES data <cit.>.
At energies larger than Δ, we again find a clear dip-hump feature, similar to that in Bi2212. The hump lies at ≈60 meV. The V-shaped background conductance in the high-energy regime of the superconducting spectrum is in agreement with the predicted inelastic contribution by magnetic scattering in the spin-fermion model <cit.>.
ยง.ยง Extraction of the bosonic spectrum
We proceeded as in the case for Bi2212, but incorporated the one-dimensional band from the CuO chains as well as the bonding and anti-bonding band from the CuO_2 planes into the calculation of the normal state DOS to remodel the sub-gap peak and coherence peak shoulder in the estimated σ^el of the superconducting state. The optimal Dynes fit with gaps Δ^AB_0 = 20.62 meV for the anti-bonding (AB), Δ^BB_0 = 25.77 meV for the bonding (BB) and Δ^CHSS_0 = 5.66 meV for the chain band is shown in red in Fig. <ref>(a). Because vacuum-cleaved surfaces favour tunneling into states of the CuO chain plane <cit.>, the sub-gap peak is pronounced and the contribution to the total DOS of the CH_SS band is, in our analysis, roughly five times higher than for the AB and BB band.
The size of Δ^BB is in good agreement with other scanning tunneling spectroscopy results spanning around 20 experiments, in which the extracted gap value lies between Δ = 18-30 meV for optimally doped samples <cit.>. For comparison: From Raman spectra, Δ_0, i.e. the gap in the antinodal direction, is frequently found to be 34 meV for optimally doped Y123 samples <cit.>. It should be noted that vacuum cleaved surfaces of Y123 tend to be overdoped <cit.>, which goes hand in hand with a steep decline of Δ. The reason for the discrepancy between the gap measured in STS and ARPES <cit.> (also yielding Δ_0 ∼ 34 meV) is expected to be due to two factors: (i) Although less influenced by a local gap variation than Bi2212, the gap of Y123 is expected to be inhomogeneous on a wider scale of > 100 nm <cit.>. While ARPES yields an average gap over several of these domains, STS yields a more local gap. (ii) The measurement of a k-averaged gap value in STS naturally tends to give smaller values for a d-wave superconductor than the maximum gap size measured in ARPES. We try to eliminate this last effect by respecting the k-dependence of Δ and ν_n^F in our fit. Nevertheless, despite the large T_c, the spectroscopic results on Y123 in this work do not support an effective gap value of > 30 meV because the total conductivity is already on the decrease at this energy.
Analogous to the case of Bi2212, the elastic part was, as a first guess, approximated by the Dynes fit to the total conductance times a scalar factor ฮท. Here, ฮท = 0.9 was chosen in order to secure the constraint ฯ^inel(e|U|>0)>0. The (in)elastic parts to the total conductance are shown in red(grey) in Fig.ย <ref>(b).
We compare the extracted bosonic DOS obtained from direct deconvolution and Gold algorithm for Y123 in Fig. <ref>(c). The resonance mode at Ω_res ≈ 61 meV is significantly higher in energy than experimentally found by INS in (nearly) optimally doped samples with Ω_res ≈ 41 meV <cit.> and even lies at the onset of the spin scattering continuum at Ω_c ≈ 60 meV <cit.>. Apart from the k-space integration, which shifts the peak centre to higher energies, several other factors can play a key role: (i) The well-studied 41 meV odd-parity mode is paired with an even-parity mode at Ω^e_res ≈ 53-55 meV <cit.> which may be of the same origin as it vanishes at T_c. This mode appears with a ≈ 3-20 times lower intensity in INS than the odd-parity mode, but this does not necessarily have to hold for a tunneling experiment. (ii) The bosonic spectrum extracted here is essentially poisoned by phononic contributions from every k-space angle. A disentanglement of phononic and electronic contributions to the total bosonic function by non-equilibrium optical spectroscopy showed that for Ω > 100 meV the bosonic function is purely electronic, yet in the energy range of the spin resonance, the contribution of strong-coupling phonons is almost equal to that of electronic origin <cit.>. (iii) Apart from physical arguments, sceptical remarks can also be made about the deconvolution procedure: Evidently, it heavily depends on the guess of the elastic tunneling conductance, which in this case does not contain strong-coupling features from an Eliashberg theory. (iv) The pronounced contribution of the CuO chains to the total conductance essentially causes the resonance mode to appear at roughly ω_hump-Δ^CHSS instead of ω_hump-Δ^BB. Correcting for the 20 meV difference between the two gaps, it is likely that without sensitivity to the CuO chain gap, our extraction procedure will yield Ω_res ≈ 41 meV ≈ 1.6 Δ^BB_0 ≈ 1.8 Δ^BB_max ≈ 2.4 Δ̄^BB.
ยง CONCLUSION
We recorded scanning tunneling spectra on superconducting Bi2212 (UD82) and Y123 (OP92) at 0.7 K and revealed a clear dip-hump structure outside the superconducting gap in both cases. The origin of this spectral feature can be traced back to a sharp resonance in the effective tunnel Eliashberg function. A careful separation of elastic and inelastic tunneling contributions enabled us to extract the bosonic excitation spectrum including this resonance.
Comparing the obtained bosonic spectrum with inelastic neutron scattering data yields good agreement with the observed resonance mode and supports that magnetic fluctuations play an important role in the pairing mechanism of the cuprate superconductors.
Our extraction method of the bosonic spectrum from scanning tunneling spectra paves a way to complement glue functions determined from optical spectroscopy or ARPES and has several advantageous features: The usage of scanning tunneling spectra yields the option for atomic resolution of the bosonic modes on the superconductor surface <cit.> as well as easy access to both occupied and unoccupied quasiparticle states with the high energy resolution of cryogenic STM setups.
ยง ACKNOWLEDGEMENTS
The authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG) through CRCย TRRย 288ย -ย 422213477 โElastoQMatโ, project A07, B03 and B06.
ยง APPENDIX
ยง.ยง Calculated normal density of states
For Bi2212 and Y123, the in-plane dispersion of the CuO_2 planes is described by a tight binding model of the form
ϵ(k_x,k_y) = t_1/2 (cos(k_x)+cos(k_y)) + t_2 cos(k_x)cos(k_y) + t_3/2 (cos(2k_x)+cos(2k_y)) + t_4/2 (cos(2k_x)cos(k_y)+cos(k_x)cos(2k_y)) + t_5 cos(2k_x)cos(2k_y) - μ
with chemical potential μ and hopping parameters t_i as proposed in Ref. <cit.>. For Bi2212, we used the set of parameters from Ref. <cit.> for a near optimally doped crystal and for Y123 we started from the parameters proposed in Ref. <cit.> for the optimally doped case and adjusted the chemical potential, as well as t_2, to fit recently obtained Fermi surface contours measured by ARPES <cit.>. While we only consider the bonding band (BB) for Bi2212, for Y123, we take the bonding (BB), anti-bonding (AB) and the chain band (CH_SS) into consideration. The latter is modeled by a dispersion of the form
ϵ(k_x,k_y) = 2t_a cos(k_x) + 2t_b cos(k_y) - μ.
The used tight binding parameters are summarized in Tab. <ref>. The calculated Fermi surface in the first Brillouin zone (BZ) is shown in Fig. <ref>(a,c) for Bi2212 and Y123. An analytic expression for the Fermi wave vector k_F(φ) is retrieved from the solution of ϵ(k,φ)=0, where ϵ(k,φ) is the polar representation of Eq. (<ref>). The normal DOS is then given by
ν^F_n = ∮ d^2k/|∇_k ϵ| ≈ ∫_l dφ ‖l'(φ)‖/‖∇_k ϵ(l(φ))‖ , φ ∈ M_φ
where l(φ) = (k_F(φ)cos(φ), k_F(φ)sin(φ))^T is a parametrization of the path along the Fermi surface, ‖·‖ is the Euclidean norm and M_φ = {φ | k_F(φ) ∈ 1. BZ}. It is shown as a function of the polar angle φ in Fig. <ref>(b,d) for Bi2212 and Y123 respectively.
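A possible numerical evaluation of this construction is sketched below; the radial scan for k_F(φ), the finite-difference gradient, taking only the first Fermi crossing along each direction and neglecting the dk_F/dφ contribution to ‖l'(φ)‖ are simplifications of this illustration.

```python
import numpy as np

def epsilon(kx, ky, t, mu):
    """Tight-binding dispersion of the CuO2 planes (equation above); t = (t1,...,t5)."""
    t1, t2, t3, t4, t5 = t
    return (t1 / 2 * (np.cos(kx) + np.cos(ky)) + t2 * np.cos(kx) * np.cos(ky)
            + t3 / 2 * (np.cos(2 * kx) + np.cos(2 * ky))
            + t4 / 2 * (np.cos(2 * kx) * np.cos(ky) + np.cos(kx) * np.cos(2 * ky))
            + t5 * np.cos(2 * kx) * np.cos(2 * ky) - mu)

def angle_resolved_dos(phi, t, mu, kmax=np.pi, nk=4000, dk=1e-4):
    """Approximate nu_n^F(phi): find k_F(phi) from the sign change of epsilon along a
    radial grid and weight by k_F / |grad_k epsilon|. Normalization is arbitrary."""
    ks = np.linspace(1e-4, kmax, nk)
    nu = np.zeros_like(phi)
    for i, p in enumerate(phi):
        e = epsilon(ks * np.cos(p), ks * np.sin(p), t, mu)
        crossings = np.where(np.sign(e[:-1]) != np.sign(e[1:]))[0]
        if len(crossings) == 0:
            continue                       # no Fermi crossing: angle outside M_phi
        kF = ks[crossings[0]]
        kx, ky = kF * np.cos(p), kF * np.sin(p)
        gx = (epsilon(kx + dk, ky, t, mu) - epsilon(kx - dk, ky, t, mu)) / (2 * dk)
        gy = (epsilon(kx, ky + dk, t, mu) - epsilon(kx, ky - dk, t, mu)) / (2 * dk)
        nu[i] = kF / np.hypot(gx, gy)
    return nu
```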
ยง.ยง The numerical scaling factor ฮท
We used the boundary condition
โซ_-โ^โdฯ ฯ_n^el=โซ_-โ^โdฯ ฯ_s^el
to approximate ฮท, i.e. the total number of electronic states is conserved in the phase transition from the normal to the superconducting phase. This procedure is depicted in Fig.ย <ref>(a).
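Reading the first guess of the elastic part as σ_s^el ≈ η × (Dynes fit to the total conductance), as done in the main text, this boundary condition fixes η as the ratio of two integrals over the measured energy window; a minimal sketch:

```python
import numpy as np

def scaling_factor_eta(omega, sigma_n_el, dynes_fit):
    """Conservation of states: eta * dynes_fit must integrate to the same total as the
    normal-state elastic conductance. The finite integration window is an assumption."""
    return np.trapz(sigma_n_el, omega) / np.trapz(dynes_fit, omega)
```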
In order to make sure that the introduction of the numerical scaling factor ฮท has no poisoning effect on our extracted bosonic spectrum, the deconvolution of the Bi2212 spectrum by Gold's algorithm was performed for four different values of ฮท. The results shown in Fig.ย <ref>(b) are comforting in the sense that the overall shape of the bosonic spectrum is unchanged. The only major difference lies in the magnitude of the zero-energy peak which is to be expected from a scalar multiplication, but since this peak is anyhow out of the bounds of physical contributions it does not harm the analysis.
ยง.ยง LDOS inhomogeneity
As reported by Fischer et al., Bi2212 tends to show a large inhomogeneity of its LDOS in the superconducting state <cit.>. This can be confirmed in our experiment by direct comparison of the conductance inhomogeneity measured on Bi2212 and Y123, shown in Fig. <ref>. The heat maps of the conductance variation
δσ/σ̄ (x,y) = 1/N ∑_{U=-U_t}^{U_t} [σ(eU,x,y) - σ̄(eU)]/σ̄(eU)
show that it is about three times higher on the Bi2212 surface than on the Y123 surface. As a consequence, a position averaged spectrum over a 50×50 nm^2 area can preserve detailed gap features better for Y123 than for Bi2212. Especially the dip-hump feature (dip marked by blue, hump marked by red arrows in Fig. <ref>) is still clearly visible in the position averaged spectrum of Y123 at ϵ ≈ 60 meV but is invisible in Bi2212. The preservation of this feature in the spectrum is crucial for our ITS analysis. Therefore, in the case of Bi2212, an average spectrum at one specific location, at which the dip-hump spectral feature was clearly visible, was chosen for this study. For Y123, the position averaged spectrum was chosen.
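A direct implementation of this variation map is, for example, the following sketch; the array layout and the absence of an absolute value (mirroring the expression as written) are assumptions.

```python
import numpy as np

def conductance_variation_map(sigma):
    """sigma[iU, ix, iy]: grid of dI/dU spectra at N bias values between -U_t and U_t.
    Returns the spatial map of the relative deviation from the position-averaged
    spectrum, averaged over the bias values."""
    sigma_bar = sigma.mean(axis=(1, 2), keepdims=True)    # position-averaged spectrum
    return ((sigma - sigma_bar) / sigma_bar).mean(axis=0)  # average over bias values
```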
Signatures of primordial black holes in gravitational wave clustering
Sarah Libanore, Michele Liguori, Alvise Raccanelli
ยง INTRODUCTION
We are on the verge of the fourth gravitational wave (GW) observational run by the LIGO-Virgo-Kagra Collaboration (LVK), and the increasing number of GW detections (e.g., <cit.>) foreshadows the capability of using this observable for statistical studies in astrophysics and cosmology.
Many works have recently shown that GWs provide a valuable tool to study the large scale structures (LSS) of the Universe and their clustering properties, being complementary to galaxy surveys in mapping them <cit.>.
Next generation GW detectors from the ground, such as the Einstein Telescope (ET)ย <cit.>, Cosmic Explorer (CE)ย <cit.>, or from space e.g.,ย LISAย <cit.> and DECIGOย <cit.>, will fully exploit this possibility.
Previous works (e.g., <cit.>) investigated the predicted constraining power of GW surveys alone or through cross-correlations with other LSS surveys, to measure cosmological parameters and the bias of the hosts of binaries that source GWs. Cosmological studies that use GWs, however, have to deal with large uncertainties in the sky localization of merger events, as well as the lack of redshift information, as only the luminosity distance can be directly estimated from the detected waveform. Different solutions exist: either electromagnetic counterparts or host galaxies are observed (see e.g., <cit.>), or cross-correlations can be used to statistically associate an estimated redshift and position to the GW event <cit.>. A powerful and promising alternative is to directly use the luminosity distance from the GW signal as the radial coordinate: by mapping sources in luminosity distance space, one can make use of GW surveys alone, without the need of external datasets or assumptions. The use of luminosity distance space, however, requires computing how the LSS affects the estimates of the observed positions of the sources, i.e., distortions due to peculiar velocities and relativistic effects <cit.>. Alternatively, if the goal is not to constrain cosmological parameters, the luminosity distance of the observed GW events can be transformed into redshift by assuming the fiducial ΛCDM cosmology, provided that large enough bins are used to account for uncertainties in the conversion, in the same fashion as photometric galaxy surveys.
Inย <cit.> it was shown that the use of luminosity distance space in combination with the large volumes probed and the high luminosity distance resolution of future GW detectors will put tight constraints on the bias parameters of the hosts of GW sources. This will be reached through the tomographic analysis of their angular power spectra even in the case of poor sky localization.
The analysis of the clustering of GW events and the estimation of the bias of their hosts present a very interesting application, related to the study of the merger progenitors' formation channels. In fact, the different processes through which binaries can form lead to different dependencies of their clustering with respect to the underlying DM field.
Focusing on black hole binary (BBH) mergers, the work of <cit.> first investigated the use of the bias as a tool to distinguish between astrophysical black hole (ABH) binaries and
primordial black hole (PBH) binaries. While the black hole components of the former descend from stellar evolution (see sectionย <ref>), the ones of the latter are part of the DM content of the Universe, being formed from perturbations in the very early Universe and bound together across cosmic time (see sectionย <ref> andย <ref>). Different models have been proposed for PBH formation, which predict different abundances and spread a wide mass range. Constraints on them, therefore, can be derived through several techniques and observables, such as e.g.,ย lensing, CMB distortions, and others (seeย <cit.> and references therein for a review).
GW observations can also be used as a probe for PBHs: being formed in a completely different way with respect to ABHs, the merger of their binaries would trace the LSS in a different way. Even if, for ABHs and PBHs of comparable masses (i.e., ∼5-100 M_⊙), the GW signal produced by their mergers would be completely indistinguishable, surveys dominated either by ABHs or by PBHs should have different merger rates <cit.> and show different tomographic angular power spectra. The work of <cit.> and other follow-up analyses <cit.> showed how the cross-correlation between GW and galaxy surveys changes depending on the ABH/PBH relative abundance.
In this work we refine previous analyses and propose a complementary way to understand whether future GW surveys will be able to test the existence of PBHs or not. We perform our analysis in redshift space by making use of 5 redshift bins large enough to take into account the uncertainties in the conversion from observed luminosity distance to redshift under the fiducial Planck 2018 cosmology <cit.>. We consider how the different bias behaviors of ABHs and PBHs will combine into an overall effective bias in future GW datasets. Its functional form depends on the relative abundance of the two, as well as on the presence of early and late binary subpopulations in the PBH component and on their modelling. We adopt a phenomenological prescription to understand how this propagates into the angular power spectra that will be observed by future GW surveys alone or in cross-correlation with galaxy surveys. Depending on the PBH abundance, the estimate of the bias will be suitable for model selection analyses aimed at distinguishing the scenario in which only ABHs exist from the ones where PBH mergers also contribute to the observed GW events. We finally convert our phenomenological results to constraints on the dark matter (DM) fraction possibly made of PBHs. Being based on a statistical analysis, differently from other recent results <cit.>, our approach does not require precise measurements or assumptions on the very high redshift, neither for the merger rate nor for the luminosity distance of single merger events. Moreover, our method is sensitive to PBH abundances that leave the overall local merger rate compatible with LVK constraints.
The paper is organized as follows. In section <ref> we revise the most important features regarding PBH binary formation and merger detection, while in section <ref> we describe the ABH and PBH distributions and biases we adopted. The analysis setup and the statistical tools we adopted are collected in section <ref>, while section <ref> presents our results. In particular, in sections <ref> and <ref> we perform SNR and Fisher matrix forecasts for the effective bias detection based on our phenomenological assumption, while in sections <ref> and <ref> we move to model-dependent Bayesian model selection forecasts for the PBH abundance as a DM component. All the analyses are performed with respect to third generation GW surveys, i.e., the Einstein Telescope, alone or combined with two Cosmic Explorer instruments, and then in cross-correlation with galaxy surveys, for which we assume two examples of a wide and a deep survey, modeled roughly on the specifications of SPHEREx-like <cit.> and ATLAS Explorer-like <cit.> instruments.
We finally draw our conclusions in sectionย <ref>.
ยง PRIMORDIAL BLACK HOLE BINARIES
The hypothesis of the existence of PBH dates back to the '70sย <cit.>: before the matter-radiation equivalence, they can form from high density perturbations which collapse under the effect of gravity, overcoming pressure forces and the cosmic expansion acting at the timeย <cit.>. Nowadays, different theoretical models exist to explain their possible formation in the early Universe and several probes constrain their abundance (a complete description is beyond the scope of this work; see e.g.,ย <cit.> for some reviews).
Formation mechanisms are mostly related to the existence of large curvature perturbations, due e.g.,ย to blue tilt in the primordial power spectrum
(e.g.,ย <cit.>), inflationary potentials that create peaks on small scales (e.g.,ย <cit.>), curvature perturbations (e.g.,ย <cit.>), non-Gaussianity (e.g.,ย <cit.>) or primordial running spectral index (e.g.,ย <cit.>). Other explanations for PBH existence involve e.g.,ย bubble collisions in phase transitionsย <cit.> or cosmic stringsย <cit.>.
It is customary to define:
f_ PBH = ฮฉ_ PBH/ฮฉ_ DM,
as the DM fraction composed by PBH (ฮฉ_ PBH,DM are the dimensionless energy densities of PBH and DM, respectively).
The value of f_ PBH is currently constrained using different techniques and observations for different PBH mass functions (see e.g.,ย <cit.> for a summary): assuming a monochromatic mass distribution, they seem to exclude the possibility of f_ PBH = 1, with the exception of the range M_ PBHโ [10^-16,10^-10] M_โ (asteroid-sublunar masses).
Lower values of f_PBH could still be allowed in other mass ranges, among which the windows M_PBH ∈ [10^-7,10^-5] M_⊙ and M_PBH ∈ [5,100] M_⊙ are of particular interest. By considering a broad mass function, f_PBH = 1 could be recovered as well.
The M_ PBHโ[5,100] M_โ case is particularly interesting: PBH of such masses could be part of the progenitors of the merger events observed by the LIGO-Virgo-Kagra Collaboration (see e.g.,ย <cit.>).
In fact, if such objects exist, they can form binaries and merge, contributing to the signals observed by current and future interferometers. Provided the uncertainty on the mass function, the GW emitted could fall in different frequency ranges.
To form PBH binaries that merge within a Hubble time, there are two possible formation channels to account for, which we call Early PBH binaries (EPBH) and Late PBH binaries (LPBH), and the two can coexist. Both of them, after formation, evolve via GW emission: the energy released by their quadrupole moment shrinks the orbit until the merger.
Early PBH binaries were first theorized by <cit.> as systems that become bound and decouple from the Universe's expansion in the radiation dominated era.
After formation, binaries are affected by tidal forces due to the rest of the DM field, whether this is made of PBHs or of other components <cit.>. This enhances the angular momentum of the PBHs in the binary, increasing the time they require to inspiral and retarding the merger, making it observable today. Three-body interactions can lead to binary disruption as well.
Since early binaries randomly form wherever PBHs are located, they follow closely the DM distribution both at their formation and throughout cosmic time. For this reason, at first approximation, their bias can be modeled as unity, as PBHs in this case would directly trace the DM field:
b^E = 1 .
Late PBH binaries instead form via dynamical capture, when two PBHs approach each other and, after losing energy through GW emission, become bound <cit.>. Even if single PBHs closely follow the DM distribution, dynamical captures take place when DM halos have already formed: therefore, the cross-section of this process depends on the velocity dispersion inside the halos, which in turn depends on the halo mass M_h.[Dynamical capture can also be a formation channel for ABH-PBH binaries. Their contribution to the overall merger rate depends on the environment where the binary is formed (mainly the relative velocity and number densities of the two populations) and is in general subdominant <cit.>.] As <cit.> first highlighted, this implies that late PBH binaries are mainly hosted by halos of mass M_h < 10^6 M_⊙ <cit.>. The bias of late binaries can be approximated through the bias of their hosts, which in this M_h range is <cit.>
b^L โ 0.5 .
In this work we focus on future GW experiments, as the number of mergers detected with current and near future detectors (first and second generation) does not allow a powerful enough statistical analysis.
The main proposed future observatories from the ground are the Einstein Telescope (ET, <cit.>) and Cosmic Explorer (CE, <cit.>): despite their different shape and technology, these two experiments will have comparable sensitivity on the same frequency range. The number of binary mergers observed and the distance reached will be considerably larger than what will be obtained with second generation detectors. Moreover, combining detections from the two instruments or from a ET2CE configuration will improve the sky localization of GW events (see e.g.,ย <cit.> for a recent analysis).
For this reason, the merger event catalogs they will provide will be ideal to pursue statistical analyses.
ยง SOURCE DISTRIBUTION AND BIAS
The progenitors of the BBH mergers observed with GW interferometers are either deriving from stellar evolution and astrophysical processes (which we will call ABH, see e.g.,ย <cit.> and references therein) or they date back to primordial origin (PBH, see previous sections and e.g.,ย <cit.>).
Assuming that both ABH and PBH masses lie in the range โผ 5-100 M_โ, the GW signals produced during the merger of their binaries are in principle indistinguishable.
While specific parameters, e.g.,ย redshift range or mass and spin distributions can indicate a preferential astrophysical or primordial origin, in principle any GW survey detects both ABH and PBH mergers as events in luminosity distance space, and is therefore blind to their origin.
A very interesting fact, which is at the basis of our work, is that the distribution and clustering properties of ABH binaries are different from those of PBH binaries, as they trace the LSS differently. As first hinted in <cit.>, on one hand, ABHs are the endpoint of stellar evolution and for this reason they are found in galaxies, which in the standard hierarchical scenario mainly form within massive DM halos.
On the other hand, PBH binaries, as we saw, can form through more than one channel, each tracing the LSS differently: Poissonian distributed early binaries, in which PBH bound together right after their formation, and late binaries, formed through dynamical capture mainly in small DM halos.
This leads to different redshift distributions and bias properties of mergers in different scenarios.
ยง.ยง Astrophysical black holes
To model the ABH number distribution, we assume that the merger rate evolution in redshift is consistent with the star formation rate described by the Madau-Dickinson model <cit.>. Following <cit.>, we model the ABH normalized merger rate as
ℛ̄^A(z) = (1+z)^α_1^A / {1+[(1+z)/(1+α^A_3)]^(α_1^A+α_2^A)} ,
where ฮฑ_1^A = 2.7, ฮฑ_2^A = 3, ฮฑ_3^A = 2.
As for the bias, we use the fit thatย <cit.> estimates from clustering measurements on hydrodynamical cosmological simulations combined with population synthesis modelsย <cit.>. We therefore parameterize the bias of the ABH population as
b^A(z) = A(z+1)^D ,
where A = 1.2, D = 0.59. According to results inย <cit.>, the errorbars on the measurements from which the fit is obtained are โฒ 50% for zโ [1,6].
ยง.ยง Primordial black holes
We parameterize the normalized merger rates of E/L PBH binaries through the power law
โฬ^E,L(z) = (1+z)^ฮฑ_E,L .
For early binaries, we interpolate the results ofย <cit.> and get ฮฑ_E = 1.25, while for late binaries we consider ฮฑ_L = 0, since at first approximation the gravitational capture process in small DM halos (which formed earlier than the z_ max we adopt) can be considered as redshift independent.
ยง.ยง Observed redshift distribution and bias
In order to account for the different types of progenitors, in this work we consider the total binary black hole merger rate in a GW survey to be
โ^ tot(z) = โ^A(z) + โ^P(z) = โ^A(z) + โ^E(z) + โ^L(z) ,
where A = ABH and P = PBH of E,L= early/late type. We re-express the merger rates of the different binary types by the means of the phenomenological parameters {f^E,f^L}, defined as the ratio between the early or late PBH and the total local merger rates, namely
f^E = โ_0^E/โ_0^ tot , f^L = โ_0^L/โ_0^ tot .
By doing so, the redshift evolution of the total merger rate is written as
โ^ tot(z) = โ^ tot_0{[1-(f^E+f^L)]โฬ^A(z) + f^Eโฬ^E(z) + f^Lโฬ^L(z)} ,
where the normalized merger rates โฬ^A,E,L(z) are defined in eqs.ย (<ref>) andย (<ref>); note that the relative ABH, EPBH, LPBH abundances evolve in z depending on the values of {f^E, f^L}. In our setup, the merger rate is model dependent i.e.,ย it varies because of the relative abundance not only of ABH and PBH binaries, but also of early and late PBH. Using eq.ย (<ref>), the observed number distribution of events per redshift bin z and solid angle ฮฉ observed by a GW survey can be estimated as
d^2N^GW/dzdΩ = 1/𝒩 [c χ^2(z)/((1+z)H(z))] T_obs ℛ^tot(z) Θ(z_max-z) ,
where c is the speed of light, ฯ(z) is the comoving distance, H(z) is the Hubble parameter, T_ obs is the survey observation time, ฮ the Heaviside function and z_ max the detector horizon (see sectionย <ref> for further details); the factor ๐ฉ is defined as
๐ฉ = N^ GWT_ obs[ฮฉ_ full skyโซ_z_ min^z_ maxdz d^2N^ GW/dzdฮฉ]^-1 ,
where Ω_full sky ≈ 40000 deg^2 is the solid angle corresponding to the full sky and N^GW the total number of binary black hole mergers the GW survey observes each year. The factor 𝒩 is used to rescale the number distribution, so as to fix the total number of observed events N^GW to a customary value. This is equivalent to defining an effective merger rate, which at z=0 is anchored[Substituting eqs. (<ref>), (<ref>) in eq. (<ref>), at z=0 we obtain
ℛ^tot(z = 0) = ℛ_0^tot[ℛ̄^A(0) + (f^E+f^L)(1-ℛ̄^A(0))] ∼ ℛ_0^tot,
where the last equality relies on 1-ℛ̄^A(0) = [1+(1+α_3^A)^(α_1^A+α^A_2)]^-1 ∼ 𝒪(10^-3). When 𝒩 = 1, this result is consistent with LVK constraints for any {f^E,f^L}. Using 𝒩>0 allows us to select a fraction of the observed events.
] at ℛ_0^tot/𝒩, while it evolves in z depending on {f^E,f^L}.
Being the survey blind to the origin of the merger progenitors, the effective black hole merger bias estimated from it can be computed by weighing the ABH, EPBH, LPBH biases by their relative merger rates at each z, namely
b_ eff(z) = โ^A(z)/โ^ tot(z)b^A(z) + โ^E(z)/โ^ tot(z)b^E(z) + โ^L(z)/โ^ tot(z)b^L(z)
= ℛ_0^tot/ℛ^tot(z) [[1-(f^E+f^L)]ℛ̄^A(z)b^A(z) + f^E ℛ̄^E(z)b^E(z) + f^L ℛ̄^L(z)b^L(z)] .
The ABH, EPBH, LPBH merger rate functional forms ℛ̄^A,E,L(z) and biases b^A,E,L(z) are computed following the prescriptions in sections <ref>, <ref>. Note that we indicate with ℛ̄(z) the redshift evolution of the merger rate normalized by the local value, namely ℛ(z) = ℛ_0^tot ℛ̄(z).
We stress that eq.ย (<ref>) does not describe the intrinsic bias of one population, but it represents an effective quantity. The slope of its redshift evolution, thus, can also decrease in z since it averages the bias models of ABH, early PBH and late PBH binaries weighted by their relative merger rates; in the rest of the paper we will omit the _ eff notation for simplicity.
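To make the weighting explicit, a minimal numerical sketch of the effective bias defined above is given below; the parameter values follow the parameterizations of the previous sections, while function names and the overall normalization are illustrative choices.

```python
import numpy as np

a1, a2, a3 = 2.7, 3.0, 2.0        # ABH merger-rate parameters (Madau-Dickinson-like)
alpha_E, alpha_L = 1.25, 0.0      # early / late PBH merger-rate slopes
A_b, D_b = 1.2, 0.59              # ABH bias fit b^A(z) = A (1+z)^D

def rate_A(z): return (1 + z) ** a1 / (1 + ((1 + z) / (1 + a3)) ** (a1 + a2))
def rate_E(z): return (1 + z) ** alpha_E
def rate_L(z): return (1 + z) ** alpha_L
def bias_A(z): return A_b * (1 + z) ** D_b

def effective_bias(z, fE, fL):
    """Merger-rate-weighted effective bias of the observed BBH population;
    fE, fL are the local EPBH / LPBH fractions of the total merger rate."""
    wA = (1 - fE - fL) * rate_A(z)
    wE = fE * rate_E(z)                       # early binaries, b^E = 1
    wL = fL * rate_L(z)                       # late binaries,  b^L = 0.5
    return (wA * bias_A(z) + wE * 1.0 + wL * 0.5) / (wA + wE + wL)
```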
The observed number distribution and effective bias are shown in figureย <ref>, where in grey we also show their values for the three populations as if they would comprise the entirety of the observed mergers.
In our analysis, we explore different {f^E,f^L} fractions and detectors setup; for the sake of clarity, we show results for f^E = 0.2 and our fiducial ET2CE, in which the local, overall merger rate
is set to ℛ_0^tot = 27 Gpc^-3 yr^-1, the number of merger events observed each year is N^GW = 1.1×10^5 (following the prescriptions of <cit.> and in agreement with the latest LVK O3 constraints at 95% CL <cit.>; further details in section <ref>) and the observation time is T_obs = 10 yr. We choose the value f^E = 0.2 to illustrate our results since it describes an intermediate scenario for the EPBH abundance,
while at the same time it allows us to span LPBH over a wide range. A further reason why such an EPBH fraction is interesting is that <cit.> claimed that the presence of ∼30% EPBH in the observed events of the second LVK run is statistically favoured by hierarchical Bayesian analysis. Even if such a result strongly depends on the assumptions in the astrophysical modelling, it draws attention to this regime.
In Appendixย <ref> we show results for other values of f^E.
The number distribution in figure <ref> shows all the merger events observed by the detector. As we will discuss in section <ref>, only some of these events enter our analysis, depending on the strategy we adopt to treat the poor sky localization of GW events. This does not change the effective bias, since we assume ABH and PBH are similarly affected. Thus, from figure <ref> the goal of our analysis is already evident: if future GW surveys are able to constrain the mergers' effective bias well, the existence of PBHs can be detected by looking for its flattening or decrease at high z. Our goal is then to understand under which conditions this statistical analysis will have enough constraining power to reliably claim a PBH detection.
ยง ANALYSIS
Here we describe the setup of the analysis, whose results are collected in the next section. We initially rely on the phenomenological parameters {f^E, f^L} to compute signal-to-noise ratio and Fisher forecasts. To cover the full parameter space, we consider the combination of 6 uniformly spaced values, namely [0, 0.2, 0.4, 0.6, 0.8, 1]. For each pair of values, we set the condition f^E+f^L โค 1 (i.e.,ย we analyze 21 different models). We then convert our results into constraints on f_ PBH for the EPBH and LPBH models generally adopted in the literature. Finally, we perform a Bayesian model selection forecast to understand under which conditions future surveys will be able to discriminate between the only-ABH and ABH+PBH scenarios.
A small disclaimer before we start.
The most natural radial coordinate to map GW surveys is the luminosity distance D_L, since it can be directly estimated from data, while the redshift is degenerate with the chirp mass. The observed D_L-space can be transformed into z-space once the values of the cosmological parameters are set and the space distortions and general relativistic effects are properly accounted for in the two coordinate systems <cit.>. At the time ET and ET2CE will operate, cosmological parameters will be extremely well constrained with respect to the other quantities in our analysis; moreover, corrections between D_L-space and z-space contributions to the number counts angular power spectrum have been shown to be small. Given that the measurement of these quantities is not our goal, we can thus safely work in z-space by assuming large enough z-bins; under this assumption, the GW survey is somewhat analogous to a photometric survey. Throughout the analysis, we fix the cosmological parameters to Planck 2018 results <cit.>.
Moreover, depending on parameters such as progenitor masses, sky position, orbit inclination, etc., future gravitational wave detectors will provide different luminosity distance and sky localization uncertainties {ฮ D_L /D_L, ฮฮฉ} for each of the merger events they will observe. However, since here we are interested in the statistical properties of the overall distribution, we make general assumptions on the measurement uncertainties.
ยง.ยง Setup
We characterize the GW survey by assuming that it observes a sky fraction f_ sky = 1 for a time T_ obs = 10 yr, up to the detector horizon z_ max=6. To evaluate the statistical luminosity distance uncertainty ฮ D_L /D_L and average sky localization uncertainty ฮฮฉ, we consider ET alone and ET2CE.
The sky localization is related to the luminosity distance uncertainty; the analysis inย <cit.> suggests an almost linear dependence between log_10(ฮ D_L/D_L)โ [-3, 1] and log_10(ฮฮฉ [ deg^2])โ [-3, 4], with a โผlog_10[๐ช(10)] dispersion and a decreasing accuracy with increasing redshift.
We can estimate that ∼15% of the events with z ≤ 6 will have uncertainties Δ D_L /D_L ≲ 1 and ΔΩ ≲ 100 deg^2 in the single ET case, while ∼65% of the events with z ≤ 6 will have Δ D_L /D_L ≲ 1 and ΔΩ ≲ 10 deg^2 for ET2CE. The total number of events used in the analysis is then N^GW obs = 1.20×10^4/yr for single ET; N^GW obs = 7.15×10^4/yr for ET2CE.
For the purposes of our statistical analysis, we consider only these subsets of events, which we divide into z_i,j bins, having amplitude larger than ฮ D_L /D_L if converted to D_L-space.
Our estimator is the angular power spectrum C_โ(z_i,z_j), which we compute in 5 Gaussian tomographic bins, centered in z_i โผ [0.4, 0.8, 1.4, 2.4, 4.3] with ฮ z โ [0.4, 0.4, 0.6, 1.0, 1.9] and r.m.s. either equal to 1/2 or 1/4 of the width. The former case, more conservative, has a reduced constraining power due to the overlap between adjacent bins, which is accounted for in the computation of the covariance matrix (see next section and, e.g.,ย <cit.>). The latter, instead, relies on a better redshift determination of the observed sources. Depending on the capability next generation GW detectors will reach in determining z, results will be somewhere in between these two cases.
To estimate the smallest scale accessible by the GW detector due to its lack of sky localization, we further assume to observe mergers in Gaussian beams of average size ฮฮฉฬ = 50 deg^2 for the single ET configuration and ฮฮฉฬ = 1 deg^2 for ET2CE, so to compensate between the constraining power of events with good and bad sensitivities.[The sky localization we consider for, e.g.,ย ET2CE, are distributed between 10 deg^2 and very good (up to 10^-3 deg^2) values. Thus, by adopting ฮฮฉฬ, we analyze optimistically the events with 1 deg^2โฒฮฮฉโฒ 10 deg^2, while worsening the ones with 10^-3 deg^2โฒฮฮฉโฒ 1 deg^2. The smearing we introduce on the angular power spectrum further reduces the constraining power, making our final results conservative.] This allows us to treat the GW maps analogously to e.g.,ย CMB and intensity mapping studies, and it implies the angular power spectrum gets smoothed by Cฬ_โ(z_i,z_j) = B_โ^2 C_โ(z^i,z^j), where
B_ℓ = exp[- ℓ(ℓ+1)/2 · ΔΩ̃_std/(8 log(2))] ,
with ฮฮฉฬ_ std = ฮฮฉฬ_ deg^2(ฯ/180)^2.
Appendixย <ref> recalls the main equations and shows C_โ, Cฬ_โ.
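For reference, the smoothing amounts to the following operation on the spectra; treating the average localization area as an effective FWHM^2 follows the expression above, and the numerical values are only examples.

```python
import numpy as np

def beam_window(ell, delta_omega_deg2):
    """Gaussian beam window B_ell for an average sky-localization area (in deg^2)."""
    domega_std = delta_omega_deg2 * (np.pi / 180.0) ** 2
    return np.exp(-ell * (ell + 1) / 2.0 * domega_std / (8.0 * np.log(2.0)))

# smoothed spectrum, e.g. for ET2CE (1 deg^2): C_tilde = beam_window(ell, 1.0)**2 * C_ell
```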
ยง.ยง Cross-correlation with galaxy surveys
In this work, we also investigate cross-correlations between future GW and galaxy surveys (see e.g.,ย <cit.>); to do that, we take as an example two mock surveys, modeled around the forthcoming wide area SPHERExย <cit.> and the proposed deep survey ATLAS Explorerย <cit.>. We do this in order to explore different regimes and scenarios, trying to understand what is the ideal setup for this kind of analyses. A detailed investigation of the optimal galaxy survey strategy and experiment-specific forecasts are beyond the scope of this paper.
Equations describing the cross-angular power spectrum Cฬ_โ^XY(z_i,z_j) can be found in appendixย <ref>. The surveys are characterized as follows.
* Wide survey, SPHEREx-like. We consider f_ sky^w = 0.75 and number density and bias from the ฯ_z /(1+ z) = 0.2 sample,[<https://github.com/SPHEREx/Public-products>] since the resolution is close to our GW survey. We then compute the source number distribution using bins with ฮ z = 0.2, as
d^2N^w/dzdΩ = [3.26×10^8/(4πΔz)] (z/0.301)^0.829 exp[-(z/0.301)^0.95]
* Deep survey, ATLAS Explorer-like. We consider a configuration covering a sky area f^d_sky = 0.2, and we take
d^2N^d/dzdΩ = [3.26×10^8/(A^d_std Δz)] · 0.16 exp(-0.25 z^2) ,
b^d(z) = 0.84D(z)/D(z=0) ,
D(z) being the growth factor and ฮ z= 0.2 in this case as well.
The sky area prevents us from observing the largest scales; for this reason, we set the minimum multipole to ℓ_min^d ∼ 2π/θ ∼ 10, where θ = arccos[1-A^d_std/(2π)].
We show the number density and bias of the galaxies observed in the wide and deep survey
in figure <ref>, where we also compare them to the ABH cases observed by ET2CE and ET. Interestingly, the bias of the ABHs and that of the galaxies observed by the wide survey are similar at low redshift, while at high redshift (z ≳ 3) they deviate from each other, with the galaxy bias being systematically higher than that of the ABH hosts. This is due to the fact that at high redshift the wide survey will mainly observe the brightest galaxies, ultimately hosted by the more massive halos (the opposite happens for the deep survey, which is sensitive to fainter and smaller galaxies). GW mergers, instead, can at first approximation be observed irrespective of the properties of their host galaxy; therefore, they trace halos of different masses (see <cit.> for an analysis of the relevance of the galaxy and host halo properties in ABH clustering).
In the analysis we perform in the following section, we adopt the same z binning both for auto- and cross-spectra, meaning we are dividing the galaxy catalogs in the same bins as the GW events.
When computing galaxy auto-spectra, we consider โ_ max = ฯ(z)k_nl,0, where k_nl,0 = 0.12 Mpc^-1.
For cross-spectra with GW and GW auto-spectra, instead, the smoothing due to the GW sky localization uncertainty has to be accounted for, see eqs.ย (<ref>) andย (<ref>).
ยง RESULTS
Our aim is to understand what confidence level will be reached by future surveys in detecting the presence of a primordial component in the GW catalogs. For both GW auto-spectra and GW×galaxy cross-spectra, we initially perform two different and complementary analyses.
First of all, we compute the signal-to-noise ratio (SNR) for the presence of PBH mergers in the GW catalogs. Then, we perform a Fisher analysis to constrain the marginalized errors on the mergers' bias. Both analyses assume as fiducial the case where only ABHs are present. Finally, after converting our constraints to f_PBH, we use the Bayes factor formalism to forecast model selection between different scenarios.
ยง.ยง Signal-to-Noise Ratio
The first statistical test we perform in our forecast is to understand whether, if PBHs contribute to the mergers, the signal observed through the auto- (X = Y = GW) and cross- (X = GW, Y = galaxies or vice versa) spectra
Cฬ_โ,ij^XY[AP]=Cฬ_โ^XY [AP](z_i,z_j) will be distinguishable from the theoretical assumption in which only ABH exists. This is done by computing:
SNR^2 = f_ skyโ_โ(๐ฬ_โ-๐ฬ_โ^ AP)^Tโณ_โ^-1(๐ฬ_โ-๐ฬ^ AP_โ)
where we sum all the multipoles to estimate the signal-to-noise ratio of the survey.
In the previous equation, we defined the vector containing the angular power spectra as
𝒞̃_ℓ = [ C̃_ℓ,11^XX, C̃_ℓ,12^XX, ..., C̃_ℓ,N_bin N_bin^XX, C̃_ℓ,11^XY, ..., C̃_ℓ,N_bin N_bin^XY, C̃_ℓ,11^YY, ..., C̃_ℓ,N_bin N_bin^YY ] ,
and analogously for the case 𝒞̃_ℓ^AP in which PBH are included. The elements of the covariance matrix ℳ_ℓ are defined as <cit.>
ℳ_ℓ,IJ = 1/(2ℓ+1) [(C̃^XX_ℓ,ip+δ_ip/n̄_X,i)(C̃^YY_ℓ,jq+δ_jq/n̄_Y,j)+(C̃^XY_ℓ,iq+δ_iq δ_XY/n̄_X,i)(C̃^XY_ℓ,jp+δ_jp δ_XY/n̄_X,j)]
where IJ = (X,ij)(Y,pq) are the indexes associated with the entries of the 𝒞̃_ℓ vector, each of which refers to a specific source (X or Y) and to a specific pair of bins (ij or pq). Finally, n̄_X,i^-1 is the shot noise of the X source in the i-th bin.
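Schematically, the cumulative SNR of the equation above can be evaluated as in the following sketch, where the stacked spectra and the covariance are assumed to be precomputed per multipole.

```python
import numpy as np

def snr_pbh(ells, C_fid, C_ap, Cov, f_sky):
    """C_fid, C_ap: arrays of shape (n_ell, n_entries) stacking the entries of the
    C-vector for the only-ABH fiducial and the ABH+PBH case; Cov: covariance of
    shape (n_ell, n_entries, n_entries). Returns the cumulative SNR."""
    snr2 = 0.0
    for i in range(len(ells)):
        d = C_ap[i] - C_fid[i]
        snr2 += f_sky * d @ np.linalg.solve(Cov[i], d)
    return np.sqrt(snr2)
```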
We compute the SNR as in eq.ย (<ref>) to study the capability of our GW survey, both alone and in cross-correlation with galaxies, to detect the presence of PBH mergers in the observed catalog. As we show in appendixย <ref>, the bulk of the information comes from intermediate redshift zโผ 2, making our analysis complementary to the use of high redshift merger rate for PBH detectionย <cit.>.
Figureย <ref> summarizes our results for ET2CE alone or in cross-correlation with galaxy surveys; the SNR values obtained, as well as results for ET, are in appendixย <ref>.
The cumulative signal we obtain by summing over ℓ- and z-bin pairs allows us to potentially detect the presence of PBH for some models. The first thing we note is that, given a certain value for f^E+f^L, the SNR changes depending on the relative abundance of EPBH and LPBH. When f^E+f^L is low, having a larger number of EPBH leads to a higher SNR, while when f^E+f^L is high, the SNR increases when the number of LPBH is larger. When f^E+f^L ∼ 0.8, the SNR is almost constant, independently of the relative abundance of EPBH and LPBH. This may seem surprising at first glance: since b^L < b^E, we would expect the same f^E+f^L to yield a higher SNR with a larger number of LPBH.
However, the effective bias we are using weighs each contribution by its merger rate: as it can be seen in figureย <ref>, this is higher for EPBH, particularly at high redshift.
The balance between these two effects is evident in figureย <ref>: a large value of f^E produces a large enough deviation on the effective bias at high z to facilitate the detection when f^E+f^L is small, while a large value of f^L requires an overall higher number of PBH to be effective.
In the case of only ET2CE, our results show that at least a 2ฯ detection could be claimed when PBH binaries constitute more than โผ 60% - 80% of the observed number of events in the GW catalog, in the case of non-overlapping bins.
As appendixย <ref> shows, using a single ET instead leads to no constraining power in the single tracer case, because of the lack of small scales.
When including cross-correlations with galaxy catalogs, the SNR gets larger even with a smaller amount of PBH binaries in the catalog and we get to 2 - 3ฯ whenever PBH โณ 40%. Cross-correlations between ET and galaxy surveys
instead reach 1ฯ detection only when the number of PBH is very large (f^E+f^Lโณ 80%, see appendixย <ref>).
When overlapping redshift bins are used, all results degrade by a factor of almost two on average, meaning that similar PBH abundances lead to 1 - 2ฯ detection in the case of ET2CE. Cross-correlations still allow a 3ฯ detection when f^E+f^L โฅ 60%, reaching โผ 2ฯ for f^E+f^L โผ 40% instead.
As we will confirm with the Fisher matrix analysis in section <ref>, even if both galaxy surveys lead to an improvement in the capability of distinguishing the effective bias from the only-ABH bias, a wide survey allows a better measurement of the bias. On the other hand, even if both surveys probe high redshifts, the deep survey observes a larger number of sources at z > 2 (compare with figure <ref>) and its cross-power spectra have lower shot noise in the last bin (z∈[2.4,6.2]). For the wide survey, the larger f_sky allows us to access larger ℓs, which explains its larger SNR.
Even if the SNR for the cross-correlation with the wide survey is 50%-100% larger than for the deep one, the two analyses present a similar trend with respect to the relative abundances of EPBHs and LPBHs. This is because we use only 5 redshift bins and the analysis cannot capture the details of the redshift evolution of the source number distribution or bias. Even when the PBH contribution to the mergers can be detected, the relative EPBH and LPBH abundances remain unconstrained with this method.
ยง.ยง Bias forecasts
The analysis studied in the previous section does not take into account degeneracies between parameters that influence the amplitude of the angular power spectra.
To strengthen the reliability of our results, we forecast the constraining power of the different surveys in measuring the effective GW bias parameters. We perform a Fisher matrix analysis
F_αβ = f_sky ∑_ℓ (2ℓ+1)/2 Tr[(𝒟_ℓ)^-1 ∂_α𝒟_ℓ (𝒟_ℓ)^-1 ∂_β𝒟_ℓ] ,
with respect to the parameters ฮธ_ฮฑ, ฮฒ = {b_1,b_2,b_3,b_4,b_5}, each of which represents the effective bias inside one of the redshift bins. Their fiducial values equal the ABH bias from eq.ย (<ref>) in the central point of the bins.
The covariance matrix 𝒟_ℓ (and analogously the derivative matrix ∂_α𝒟_ℓ) for the GW survey alone is built as
𝒟_ℓ^GWGW = [ C̃_ℓ,11^GWGW+n̄_GW,1^-1 ... C̃_ℓ,1N_bin^GWGW; ... ... ...; C̃_ℓ,N_bin1^GWGW ... C̃_ℓ,N_binN_bin^GWGW+n̄_GW,N_bin^-1 ] ,
with N_bin=5 the overall number of bins used and n̄_GW,i^-1 the shot noise in the i-th bin.
In the multi-tracer scenario instead, the computation of the Fisher matrix requires to update eq.ย (<ref>) by using the block-matrix covariance
๐_โ^ cross = [ ๐_โ^ gg ๐_โ^ gGW; ๐_โ^ GWg ๐_โ^ GWGW; ]
and the derivative matrix defined analogously.
Moreover, in the multi-tracer case the parameter set used to compute the Fisher matrix also includes the galaxy bias parameters {b_g,1,b_g,2,b_g,3,b_g,4,b_g,5}, which we marginalize over before obtaining our final results on the GW biases.
Note that in the previous section the constraints were integrated over z to obtain the overall SNR, while here we separate the information to constrain the bias in different bins.
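A schematic implementation of this Fisher forecast is sketched below; the numerical derivatives of the covariance with respect to the bias parameters and the step size are choices made for this illustration, and marginalized errors follow from the inverse of the full matrix (including the galaxy bias entries in the multi-tracer case).

```python
import numpy as np

def fisher_matrix(ells, D_of_theta, theta0, f_sky, step=1e-3):
    """D_of_theta(theta) returns the covariance D_ell as an array of shape (n_ell, n, n)
    for a given parameter vector theta (bias values per bin). Central differences are
    used for the derivatives entering the trace formula above."""
    theta0 = np.asarray(theta0, float)
    n_par = len(theta0)
    D0 = D_of_theta(theta0)
    dD = []
    for a in range(n_par):
        tp, tm = theta0.copy(), theta0.copy()
        tp[a] += step; tm[a] -= step
        dD.append((D_of_theta(tp) - D_of_theta(tm)) / (2 * step))
    F = np.zeros((n_par, n_par))
    for i, ell in enumerate(ells):
        Dinv = np.linalg.inv(D0[i])
        for a in range(n_par):
            for b in range(n_par):
                F[a, b] += f_sky * (2 * ell + 1) / 2.0 * np.trace(
                    Dinv @ dD[a][i] @ Dinv @ dD[b][i])
    return F

# marginalized 1-sigma errors: sigma = np.sqrt(np.diag(np.linalg.inv(F)))
```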
Tableย <ref> shows the relative marginalized errors on the bias parameters ฯ_b_i, both when considering only the GW survey and when including the cross-correlation with galaxies (see appendixย <ref> for results on ET). Once again, the fiducial for the bias parameters is set to the case where only ABH contributes to the signal.
Our main results assume an uninformative prior on both the merger and galaxy bias; we also consider the possibility of a 50% Gaussian prior on the ABH bias based on the results inย <cit.>. For the cross-correlation analysis, we separately run the Fisher forecast on the galaxy-only surveys, so as to estimate their (optimistic) constraining power on the galaxy bias. We checked that using these results as priors for the multi-tracer analysis does not significantly improve the results, confirming the stability of our analysis.
Figureย <ref> shows the predicted errors on the ABH bias for ET2CE in our fiducial case f^E = 0.2 (see appendixย <ref> for the other cases and for ET alone). Consistently with the results of the previous section, for non-overlapping bins the observed PBH-induced effective bias can be distinguished from the ABH bias when f^E+f^Lโณ 0.6-0.8 with GWs only, and โณ 0.4 with galaxy cross-correlations (i.e.,ย when โผ 60% - 80% and 40% of the observed mergers, respectively, are due to primordial black holes), while not being very sensitive to the relative EPBH and LPBH abundances. Marginalized errors computed using overlapping bins are โผ 1.5 - 2 times larger than in the non-overlapping case.
Because of the redshift evolution of the ABH number density in eq.ย (<ref>), the shot noise allows us to obtain better constraints on the ABH bias at intermediate-high redshifts, namely between zโผ 1 and โผ 3 (consistent with the previous section). This is therefore the range in which our technique is most sensitive to deviations due to PBH contributions, which makes our analysis complementary to the use of high-z merger detections to test the PBH existenceย <cit.>. Moreover, since measurements at higher z are more challenging, intermediate-redshift probes can be seen as more reliable. In the case of bias measurements, a strong improvement at high z can be obtained via cross-correlation with the deep survey, thanks to the large number of sources this instrument measures at z> 3, as figureย <ref> shows. On the other hand, cross-correlations with the wide survey are more effective at improving constraints at zโฒ 3.
Since f^E and f^L are almost degenerate in our analysis, we can describe the effective bias b(z) for each value of f^E+f^L using the mean bฬ(z) and the standard deviation of the values it assumes in the different {f^E,f^L} cases. To visualize the final results of our Fisher analysis, in figureย <ref> we therefore compare bฬ(z)/b^A(z) for each value of f^E+f^L with the 1ฯ confidence level of the fiducial only-ABH scenario. The setup we adopted maximizes the sensitivity to deviations from the only-ABH scenario in the intermediate redshift bins, where the ABH shot noise is smaller. When only ET2CE is adopted, the effective bias deviates by โผ 1ฯ from the ABH bias when f^E+f^L โณ 0.8, namely when PBH mergers constitute at least 80% of the total. Cross-correlations with galaxies provide a deviation from the only-ABH case for f^E+f^L โณ 0.4, namely when at least 40% of the overall merger events observed have a primordial origin.
ยง.ยง Constraints on f_ PBH
In the previous sections we showed that GW clustering can be a tool to constrain the existence of PBHs. We parameterized EPBH, LPBH contributions to future GW catalogs with as few model assumptions as possible (maintaining robustness).
From the cosmological point of view, the quantity of interest is the parameter f_ PBH, introduced in eq.ย (<ref>). This describes how much dark matter is comprised of PBHs, either bounded in binaries or isolated.
To convert the results from sectionsย <ref>,ย <ref> into constraints on f_ PBH, we need to assume a specific model for early and late binary formation, given the uncertainties that still exist in the literature.
The goal of this section is to relate the local merger rates โ_0^E,L=f^E,Lโ_0^ tot to f_ PBH. We recall that in this work we consider black holes with masses M_ PBHโผ 5-100 M_โ, which will be observed by ET and CEย <cit.>.
For early binaries, we followย <cit.> and roughly estimate the local merger rate for a monochromatic PBH mass function as
โ_0^E = 1/2f_ PBH/0.85ฯ_m/M_ PBHdP/dt|_t_0โ 5.10ร 10^5 Gpc^-3yr^-1(f_ PBH/0.85)^2(M_ PBH/30 M_โ)^-32/37[(f_ PBH/0.85)^2+ฯ_ eq^2]^-21/74 ,
where ฯ_m is the matter density at present time t_0 and dP/dt the probability distribution of the time of merger, which can be computed as a function of M_ PBH, f_ PBH and the variance of DM density perturbations due to non-PBH components at matter-radiation equality, ฯ_ eqโผ 0.005.
In our work, โ_0^E = f^Eโ_0^ tot, therefore we can numerically solve the previous equation to get f_( PBH|E), namely the overall DM fraction in the form of EPBH.
For late binaries, instead, we adopt the formalism inย <cit.>, which provides
โ_0^L = (f_ PBH)^53/21โซ dM_hdn/dM_hโ_h(M_h)
โ โ_0^ tot (f_ PBH)^53/21(M_h/400 M_โ)^-11/21 ,
where โ_h(M_h) is the merger rate in a halo of mass M_h and dn/dM_h the halo mass function. The mass dependence is negligible and we normalize the result to the merger rate observed by the detector.
Using โ^L_0 = f^Lโ_0^ tot, we invert the equation to get f_( PBH|L) = (f^L)^21/53.
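As an illustration, a short Python sketch of these inversions is given below; the bracketing interval of the root finder and the default โ_0^ tot = 27 Gpc^-3yr^-1 (our fiducial value) are choices made for this example only.

import numpy as np
from scipy.optimize import brentq

SIGMA_EQ = 0.005  # variance of non-PBH DM perturbations at matter-radiation equality

def rate_early(f_pbh, m_pbh=30.0, U=1.0):
    # nominal early-binary local merger rate [Gpc^-3 yr^-1] from the equation above,
    # rescaled by the fudge factor U that collects the modeling uncertainties
    x = f_pbh / 0.85
    return U * 5.10e5 * x**2 * (m_pbh / 30.0)**(-32/37) * (x**2 + SIGMA_EQ**2)**(-21/74)

def f_pbh_given_early(f_E, R_tot=27.0, m_pbh=30.0, U=1.0):
    # numerically invert R_0^E = f^E R_0^tot for f_(PBH|E);
    # brentq assumes the target rate is reachable inside the bracket (true for U = 1)
    return brentq(lambda f: rate_early(f, m_pbh, U) - f_E * R_tot, 1e-8, 1.0)

def f_pbh_given_late(f_L):
    # late binaries: f_(PBH|L) = (f^L)^(21/53)
    return f_L ** (21 / 53)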
Clearly, the constraints on f_ PBH obtained through this procedure will be model dependent; in particular, for late binaries they will be affected by assumptions on the host halo properties, while uncertainties for early binaries arise from effects that could alter the binary formation process and merger time (primordial non-Gaussianityย <cit.>, non-linearitiesย <cit.>, nearest and next-to-nearest neighbor interactionsย <cit.>, etc.). Since model uncertainties are larger for early binary scenarios, we collect all the uncertainties in the parameter ๐ฐ = โ_0^E/โ_0^E,nom where โ_0^E,nom is computed via eq.ย (<ref>) in the nominal scenario, i.e.,ย relying on assumptions inย <cit.>. Constraints on f_ PBH are then computed as a function of ๐ฐ, which can be seen as the early mergers' reduction factor due to, e.g.,ย third-body interactions, or any other effect that could disrupt early binaries or delay mergers. It is important to note that different effects that could impact the EPBH merger rate may become relevant in different parts of the parameter space. For example, there are indications that the nearest and next-to nearest neighbor interactions reduce the merger rate more for larger values of f_ PBH, so that the final merger rate corrections could be a mix of different values of ๐ฐ.
From eqs.ย (<ref>) andย (<ref>), choosing a value for f_ PBH sets both {f^E, f^L} and determines the observed local merger rate. We are interested in values comparable with LVK constraints, namely โ_0^E,Lโ[0.01,100] Gpc^-3yr^-1: as figureย <ref> shows, this implies that in the nominal case we would be probing values of f_ PBH of the order of โผ๐ช(10^-4) for EPBH, since larger values of f_ PBH in the nominal scenario would lead to values of โ_0^E that are too large to be accepted. In this range, the number of LPBH is negligible.
When ๐ฐ decreases (i.e.,ย the EPBH merger rate is smaller than whatย <cit.> estimates), larger values of f_ PBH can be taken into account; when ๐ฐโผ๐ช(10^-7), EPBH contributions to the overall number of merger events can be neglected.
To cover all the parameter space, we consider the cases ๐ฐ = {1,10^-3, 10^-5, 10^-7} and we convert constraints (with non-overlapping bins) from sectionย <ref> into constraints for f_ PBH.
We showed in sectionย <ref> andย <ref> that the effective bias can be distinguished from the ABH-only case if f^E+f^L โณ 0.6 for ET2CE and โณ 0.4 when cross-correlating with galaxy surveys. When ๐ฐ = 1, this implies that LPBH are negligible and our technique can detect (or rule out) f^Eโ 0.4-0.6, which corresponds to f_ PBHโ 1ร 10^-3; when ๐ฐ = 10^-7, f^Eโผ 0 instead and constraints on f^L can be used to constrain f_ PBHโผ 0.7-0.8. Figureย <ref> summarizes our results, for ET2CE alone and in cross correlation with galaxy surveys. Our constraints are compared to current, tentative constraints coming from microlensingย <cit.>, accretion contributions to Galactic radio and X-ray emissionsย <cit.>, CMB spectral distortionsย <cit.>, 21cm signalย <cit.>; dynamical constraints from dwarf galaxiesย <cit.>, wide binariesย <cit.>; NANOGRAVย <cit.>; supernova lensingย <cit.>, Lyman-ฮฑย <cit.>, and LIGO GW dataย <cit.>. Other techniques have been used to forecast PBH constraints in a variety of mass ranges: e.g.,ย <cit.> considers the combination of CMB distortions and the pulsar timing array measurements,ย <cit.> studies how the merger rate changes depending on the early and late abundances and gravity and dark energy models, while inย <cit.> the high redshift merger rate is considered.
Note that all these constraints are model-dependent as well; the ones based on GW measurements from binary mergers, in particular, share the same uncertainties our model has on EPBH and LPBH formation processes.
In our nominal case, the clustering technique we perform is competitive with the constraints currently available. Our analysis provides a complementary constraint to the one obtained from merger detections at high redshiftย <cit.>. Both techniques suffer from some limitations: to constrain PBHs via clustering measurements we need a large number of events with good sky localization, while the use of the merger rate will require observing events at high redshift with a small distance uncertainty. A clear detection is challenging in both cases; for this reason, using both approaches will be fundamental in order to obtain solid limits or detections. Moreover, as stressed above, the conversion from observed primordial mergers (be they low-bias or high-z) to f_ PBH is model-dependent and subject to several assumptions and uncertainties, making it even more useful to use both techniques on the same catalog.
ยง.ยง Model selection
As a final remark for our work, we perform a Bayesian model selection forecast of the capability of future GW surveys to distinguish scenarios with and without PBHs.
It is well known that a Bayesian analysis of this kind is based on the prior choice for the underlying model; therefore, results of this section have to rely on one specific choice for the models described in the previous section. We rely on the nominal case ๐ฐ = 1, i.e.,ย we assume the models inย <cit.> andย <cit.> to be the correct ones for early and late binaries respectively, thus we can safely set f^Lโผ 0. The same analysis can be repeated for other early merger models.
In our Bayesian model selection forecast, we compare the following two models:
* ABH-only model (A): In the GW survey there are only ABH mergers. The parameter set that characterizes this model is ฮธ = {A}, which describes the redshift evolution of the ABH bias (see eq.ย (<ref>)).
We will also refer to this case as the โsimple modelโ.
* ABH-PBH model (AP): The GW survey contains both mergers from ABH and PBH early binaries. The parameter set to take into account is now ฮธ = {A, f^E}. We will refer to this as the โcomplex modelโ.
Since this is a nested model scenario, we can apply the model selection forecasting procedure described inย <cit.>. In this approach, we work in the Laplace approximation, i.e.,ย we assume that the expected likelihoods are multivariate Gaussians. Moreover, we model the precision matrices using the Fisher matrices F_A, F_AP.[Note that F_A,F_AP in this section differ from the Fisher analysis in the previous section, because of the different parameter choice. Results can be transformed through a change of coordinates.] This leads to the following expression for the ensemble average of the Bayes factor, under the assumption that the likelihoods are narrowly peaked around the fiducial values:
โจ Bโฉ = 1/(2ฯ)^(n_AP-n_A)/2โ( F_AP)/โ( F_A)exp[-1/2ฮดฮธ^ฮฑF_AP,ฮฑฮฒ ฮดฮธ^ฮฒ]โ_q=1^n_AP-n_Aฮฮธ_AP^n_A+q .
In the previous formula, n_AP is the number of parameters in the complex model whereas n_A is the number of parameters in the simple model. Therefore, n_AP-n_A is the number of extra-parameters in the complex scenarios, while ฮฮธ_AP^n_A+1,... n_AP are their prior ranges.
Note that, in the ABH-PBH model, the shot noise will include PBH contributions, thus depending on the parameter f^E.
If we consider the ABH-only model, the associated Fisher matrix is just the submatrix of F_AP, obtained by fixing the extra parameter to f^E = 0; in other words, we are conditioning the value of f^E in the multivariate Gaussian likelihood describing the ABH-PBH model, in order to reproduce the simple ABH-only scenario.
This conditioning procedure also leads to shifts in the best-fitting values of all other parameters, according to the following formula (seeย <cit.> for a detailed explanation)
ฮดฮธ^ฮฑ =
ฮธ_A^ฮฑ-ฮธฬ_AP^ฮฑ if ฮฑ > n_A (i.e.,ย extra parameters) ,
-(F_A^-1)^ฮฑฮณ G^ฮณฮถ ฮดฮธ_AP^ฮถ if ฮฑโค n_A (i.e.,ย common parameters) .
In the above,
G^ฮณฮถ is the n_Aร (n_AP-n_A) subset of F_AP, obtained by considering the ฮณ = 1,โฆ, n_A rows related to the common parameters and the ฮถ = n_A+1,โฆ, n_AP columns related to the extra parameters in the complex model.
To sum up, in our case:
* In the ABH-PBH model, we have n_AP = 2 parameters, namely {A, f^E}. Since f^E spans from 0 to 1 depending on the specific model adopted, we can safely assume a uniform prior on the extra parameter ฮฮธ_AP^extra = ฮ f^E = 1.
* In the ABH-only model we have n_A = 1 parameter, namely the slope of the ABH bias A. The extra-parameter f^E is set to 0 to recover this model. In this case, the Fisher matrix collapses to a single element, namely F_A = f_ skyโ_โ (2โ+1)[(Cฬ_โ)^-1โ_ACฬ_โ(Cฬ_โ)^-1โ_ACฬ_โ]/2, where the power spectra are computed in the ABH-only condition.
Under these conditions, we apply eq.ย (<ref>) to compute the shifts in the fiducial parameters and then we insert them in eq.ย (<ref>) to estimate the Bayes factor.
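A compact Python sketch of this procedure is given below; the ordering convention (common parameter A first, extra parameter f^E last) and the reading of โ(F) as the square root of the Fisher determinant are our own assumptions.

import numpy as np

def parameter_shifts(F_AP, n_A, dtheta_extra):
    # shifts induced by fixing the extra parameters (second case of the shift formula):
    # the common parameters move by -F_A^{-1} G dtheta_extra
    F_A = F_AP[:n_A, :n_A]
    G = F_AP[:n_A, n_A:]
    common = -np.linalg.solve(F_A, G @ np.atleast_1d(dtheta_extra))
    return np.concatenate([common, np.atleast_1d(dtheta_extra)])

def ln_bayes_factor(F_AP, F_A, dtheta, prior_ranges):
    # Laplace-approximation <ln B> for nested models; negative values favor
    # the complex (ABH-PBH) model according to the scale listed below
    p = F_AP.shape[0] - F_A.shape[0]
    return (-0.5 * p * np.log(2 * np.pi)
            + 0.5 * (np.linalg.slogdet(F_AP)[1] - np.linalg.slogdet(F_A)[1])
            - 0.5 * dtheta @ F_AP @ dtheta
            + np.sum(np.log(np.atleast_1d(prior_ranges))))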
In the multi-tracer case, the only difference is that we build up the Fisher matrix considering the galaxy-galaxy and cross-spectra as well; note that the galaxy power spectra are independent of both the A and f^E parameters, therefore the only extra signal we gain comes from cross-correlations. On the other hand, adding the galaxy bias parameters b_g to the analysis can introduce extra degeneracies, for which the Laplace approximation cannot be trusted: this happens, e.g.,ย when considering ET alone, even in cross-correlation with galaxy surveys.
Moreover, the cases f^E = {0, 1} cannot be analysed with this method since they present singular Fisher matrices: in the former case, f^E cannot be constrained since there are no PBHs in the computation (โ_f^ECฬ_โ = 0 in F_AP and the complex model collapses onto the simple one), while in the latter the same happens with A since there are no ABHs (F_AP^AA = 0).
Figureย <ref> shows our Bayes factor results in terms of f^E, compared with the Jeffreys values for detectionย <cit.> as described byย <cit.> (a small helper encoding this classification is sketched after the list):
* -1 โคโจln Bโฉ < 1 : inconclusive test;
* -2.5 โคโจln Bโฉ < -1 : substantial evidence for complex model AP (โผ 1ฯ detection);
* -5 โคโจln Bโฉ < -2.5 : strong evidence for complex model AP (โผ 2ฯ detection);
* โจln Bโฉ < -5 : decisive evidence for complex model AP (โผ 3ฯ detection).
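A trivial Python helper encoding these thresholds (used here only when scanning โจln Bโฉ as a function of f^E; the wording of the returned labels is ours) could be:

def evidence_category(ln_B):
    # map <ln B> onto the detection categories listed above
    if ln_B >= -1.0:
        return "inconclusive"
    if ln_B >= -2.5:
        return "substantial evidence for ABH-PBH (~1 sigma)"
    if ln_B >= -5.0:
        return "strong evidence for ABH-PBH (~2 sigma)"
    return "decisive evidence for ABH-PBH (~3 sigma)"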
We only show results for ET2CE and its cross-correlations with the wide and deep surveys, since the cases related to a single ET have large shot noise and break the Laplace approximation: this is consistent with the large uncertainties obtained from the Fisher matrix analysis.
Our results for the Bayes factor are consistent with the ones in the previous sections (for non-overlapping bins): evidence for the ABH-PBH model is reached when EPBHs account for f^E โณ 40% of the mergers observed by future GW detectors, once the cross-correlation with galaxies is taken into account. Above this value, we obtain strong or decisive evidence, namely a โณ 2ฯ detection of the early PBH binary contribution to the effective bias. Using eq.ย (<ref>), this translates into evidence for f_ PBHโณ 9ร 10^-4 for a monochromatic PBH mass distribution with M_ PBHโผ 30 M_โ.
ยง CONCLUSIONS
The physics acting during the early stages of the Universe is still largely unknown; in particular, there are no constraints on the primordial power spectrum on scales smaller than what is probed by the CMB.
Regarding the physics relevant for this work, this leaves the window open to the possible existence of large curvature perturbations, which might be responsible for the formation of primordial black holes. Independently of their formation model, if PBHs exist, they could be (or become) bound in binaries and eventually merge, producing gravitational waves. If their masses are comparable to those of black holes produced via stellar evolution, namely ๐ช(5-100) M_โ, the GW signals produced by their mergers would in principle be indistinguishable from the signal produced by astrophysical binary mergers.
Among the probes to distinguish between different BBH origins, in this work we chose to focus on the study of the clustering of BBH mergers, making predictions for future GW surveys. Previous studies already demonstrated the capability of this tool, which comes from the fact that PBH and ABH binaries trace the underlying matter distribution differently. Gravitational wave surveys are blind to the origin of the mergers' progenitors, thus the angular power spectrum estimated from their catalogs will be characterized by an effective bias due to the weighted average contribution of ABHs and PBHs. If the latter can be formed both in the radiation-dominated era and via dynamical captures in the late Universe, the relative abundances of the two sub-populations are needed to determine the deviation of the effective bias from the standard bias of astrophysical mergers.
In this paper, we investigated for the first time the possibility to use GW clustering alone to search for signatures of PBH mergers in catalogs that should be observed by future experiments such as the Einstein Telescope and Cosmic Explorer, alone or in combination. The measured effective bias of the mergers' hosts will be sensitive to the presence of a primordial component, in ways that depend on the formation channel of primordial binaries.
Our results showed that interferometers of this type could detect (or rule out) the presence of primordial mergers if those make up at least 60%-80% of the detected mergers, when sources are divided into non-overlapping redshift bins.
We then used updated limits on merger rates for the different BBH populations to produce new limits from the cross-correlation between GW and galaxy catalogs, investigating wide and deep future surveys. We found that the fraction of primordial mergers these cross-correlations could be sensitive to is lowered to โผ 40%. A poor redshift determination leads to overlapping redshift bins, with a consequent degradation of our results by about a factor of two.
For the first time in this type of analyses, we connected constraints on the fraction of primordial mergers in the observed catalogs to the fraction of dark matter in PBHs. This connection is model-dependent, as the predicted merger rate for early PBH mergers is still very uncertain and depends on a variety of parameters and assumptions.
For this reason, we computed our constraints as a function of a fudge factor that encapsulates such uncertainties, allowing the merger rate of early PBH mergers to vary by several orders of magnitude. We found that, due to the large uncertainty in the modeling of early PBH mergers, constraints on f_ PBH from this observable could range from a few 10^-4 to โผ 85% of dark matter in PBHs. Clearly, this methodology could potentially become one of the best ways to constrain the PBH abundance in the LVK mass range, provided the physics of early binary formation and mergers is clarified in future work, possibly using simulations accounting for a variety of effects.
Finally, we performed a new type of analysis for the GW clustering observable, in the form of a forecast Bayesian model selection technique aimed at understanding in which cases we could be able to claim evidence for the presence of PBH mergers in the catalogs. The analysis compared the model in which a parameter accounting for the PBH presence in the effective bias is included with the one in which only ABHs are taken into account.
GW clustering alone can reach substantial evidence, and most likely go beyond that when the vast majority of detected mergers is of primordial origin.
When looking at cross-correlations with galaxy catalogs, strong and even decisive evidence can be reached for certain fractions of PBH mergers. Results from all methodologies are consistent with each other.
In parallel to the development of this work, another article investigated the potential of the cross-correlation between gravitational waves from binary black hole mergers and galaxy maps, in order to constrain modified gravity and dark energy models and, within those theories, the PBH abundanceย <cit.>. In that work, the authors focus on slightly different situations and use specifically generated GW catalogs; in the limits where the two analyses can overlap and be compared, the results agree and present a complementary point of view.
The bulk of constraining power for these clustering analyses comes from intermediate redshifts zโผ [1,2.5], showing that our tool is complementary to other tests based on high-z merger rate. Given the relevance of the topic and the numerous uncertainties arising when trying to rule out (or detect) PBHs in the stellar mass range, we advocate for a combination of different techniques and observables, which could potentially provide robust results.
ยง ACKNOWLEDGEMENTS
We especially thank Nicolaย Bellomo, Micheleย Bosi and Andreaย Ravenni for insightful comments on the manuscript. The authors also thank Josรฉย L.ย Bernal, Elyย D.ย Kovetz, Sabinoย Matarrese, Canerย รnal, Lorenzoย Valbusa Dall'Armi, Eleonora Vanzan, Yun Wang and the Padova Cosmology group for interesting and fruitful discussions. SLย acknowledges funds from the Fondazione Ing. Aldo Gini scholarship and thanks the Azrieli Foundation for the support.
We acknowledge support by the project โCombining Cosmic Microwave Background and Large Scale Structure data: an Integrated Approach for Addressing Fundamental Questions in Cosmology", funded by the MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2017 - grant 2017YJYZAH.
AR acknowledges funding from the Italian Ministry of University and Research (MIUR) through the โDipartimenti di eccellenzaโ project โScience of the Universeโ.
We acknowledge the use of the public libraryย <https://zenodo.org/record/3538999> to realize the plots in figureย <ref>.
ยง MODEL DEPENDENCE ON SUBPOPULATION ABUNDANCES
We show how the observed number distribution d^2N^ GW/dzdฮฉ from eq.ย (<ref>) and the effective bias b(z) from eq.ย (<ref>) change according to variations in the EPBH fraction f^E. All the plots in this appendix are analogous to figureย <ref>; the condition f^E+f^Lโค 1 reduces the cases of f^L available when f^E grows. The plots also show the degenerate cases {f^E,f^L} = {0,0}, only ABH (blue line in figureย <ref>), {f^E,f^L} = {0,1}, only LPBH (black line in figureย <ref>), and {f^E,f^L} = {1,0}, only EPBH (blue line in figureย <ref>).
We here consider the detector setup ET2CE, in which the fiducial case is โ^ tot_0 = 27, N^ GW = 1.1ร 10^5 and T_ obs = 10 yr.
In the single ET case, instead, the overall observed number of events each year decreases to N^ GW = 8ร 10^4, therefore the overall normalization in the d^2N^ GW/dzdฮฉ plots is lower. The other modifications we introduce when dealing with this detector (namely, the different average beam size and the fraction of observed events described in figureย <ref>) affect neither the fiducial d^2N^ GW/dzdฮฉ nor b(z).
ยง ANGULAR POWER SPECTRUM FORMALISM
Throughout this work, our observable is the angular power spectrum
Cฬ_โ(z_i,z_j) = B_โ^2 C_โ = exp[- โ(โ+1)/2 ฮฮฉฬ_ std/(8 log 2)]C_โ(z_i,z_j) ,
C_โ(z_i,z_j) = 2/ฯโซ dk k^2 P(k) ฮ_โ(z_i,k)ฮ_โ(z_j,k) ,
where B_โ is the smoothing factor due to the Gaussian beam from eq.ย (<ref>), which exponentially decreases the power, and C_โ(z_i,z_j) is the standard angular power spectrum; P(k) is the matter power spectrum, while the observed transfer functions ฮ_โ(z_i,j,k) depend on the source number distribution, the window function and the theoretical transfer functions, which contain information on the density perturbations and the redshift space distortions (see e.g.,ย <cit.> for the full expression and derivation). As we describe in sectionย <ref>, we compute the angular power spectrum in 5 tomographic bins (including both i=j and i โ j components); to do so, we use and modify the public code Multi_CLASSย <cit.>,[<https://github.com/nbellomo/Multi_CLASS>] taking into account the ABH and PBH number distributions described in sectionsย <ref> andย <ref>.
Figureย <ref> compares the theoretical C_โ with the smoothed Cฬ_โ that we use as observable.
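A minimal Python sketch of the beam smoothing applied to a precomputed C_โ (the sky-localization solid angle is in steradians; names are illustrative) is:

import numpy as np

def beam_smoothed_cl(cl, ells, delta_omega):
    # Ctilde_ell = exp[- ell(ell+1)/2 * DeltaOmega / (8 log 2)] * C_ell
    ells = np.asarray(ells, dtype=float)
    beam2 = np.exp(-ells * (ells + 1.0) / 2.0 * delta_omega / (8.0 * np.log(2.0)))
    return beam2 * np.asarray(cl)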
Eq.ย (<ref>) is used to compute the angular power spectrum for a single tracer, be it GWs or galaxies. For the cross-angular power spectrum, we use the general formulation
Cฬ_โ^ GWg(z_i,z_j) = 2/ฯโซ dk k^2P(k)[B_โโซ_z_i-ฮ z^z_i+ฮ zb(z)dN^ GW/dzฮฬ_โ^i][โซ_z_j-ฮ z^z_j+ฮ zb_g(z)dN^ g/dzฮฬ_โ^j] ,
where b(z), b_g(z) are respectively the GW effective bias and the galaxy bias, dN^ GW/dz, dN^ g/dz the GW and galaxy number density per redshift bin and the transfer functions ฮ^i,j include the density and redshift space distortion contributions.
Before running the full analysis, we checked the SNR of our theoretical only-ABH assumption, in the case of auto-power spectra (ij) = (pq). Figureย <ref> shows the โ-cumulative SNR for ET2CE.
Note that the bulk of the information comes from intermediate redshifts zโผ 2, where the ABH number distribution peaks. Moreover, the Cฬ_โ SNR saturates only because of the smoothing factor on small scales adopted in eqs.ย (<ref>),ย (<ref>): this intuitively suggests that our results will be quite conservative and the clustering study could provide more stringent constraints on the PBH merger contribution if the sky localization were improved.
ยง FULL SET OF RESULTS
ยง.ยง SNR: detector dependence
Different values of {f^E,f^L} determine a different SNR in eq.ย (<ref>). Its measurement in each scenario depends also on the detector: tablesย <ref> andย <ref> collect our results respectively for ET2CE and ET, alone or in cross-correlation with a wide or deep galaxy survey.
ยง.ยง Fisher matrix: model dependence
In sectionย <ref> we computed forecasts for the marginalized errors on the ABH bias parameters and we showed how they can be used to distinguish the only-ABH scenario from the case in which both ABH and PBH contribute to the merger rate. Figureย <ref> compared the ABH bias error bars with the effective bias in the case f^E = 0.2; the panels in figureย <ref> show the comparison in the cases f^E = {0,0.4,0.6,0.8,1} for the ET2CE scenario. Finally, tableย <ref> collects results for the ET scenario, alone or in cross-correlation with galaxy surveys (similar plots to the previous ones can be produced for ET, ETรWide survey and ETรDeep survey as well).
ยง.ยง Fisher matrix: detector dependence
The predicted constraining power on the bias parameters depends on the detector properties. The following table is analogous to tableย <ref> when we adopt ET instead of ET2CE. The GW-only survey has no constraining power with an uninformative prior, while the error reduces to the amplitude of the prior itself when the 50% Gaussian prior ฯ_b is considered. Cross-correlations improve the results; in particular, the wide survey leads to a โฒ 100% error at low z in the case of an uninformative prior.
Note that this result could be improved considering a larger local merger rate or a less conservative cut/smoothing on the small scales: for instance, inย <cit.> marginalized errors โฒ 100% were found up to zโผ 3 by using a sharp cutoff (i.e.,ย no exponential smoothing from eq.ย (<ref>),ย (<ref>)) associated with ฮฮฉ = 100 deg^2.
|
http://arxiv.org/abs/2306.07312v2
|
20230612180000
|
The ABJM Momentum Amplituhedron -- ABJM Scattering Amplitudes From Configurations of Points in Minkowski Space
|
[
"Tomasz Lukowski",
"Jonah Stalknecht"
] |
hep-th
|
[
"hep-th"
] |
[email protected]
Department of Physics, Astronomy and Mathematics, University of Hertfordshire,
AL10 9AB Hatfield, Hertfordshire, United Kingdom
[email protected]
Department of Physics, Astronomy and Mathematics, University of Hertfordshire,
AL10 9AB Hatfield, Hertfordshire, United Kingdom
In this paper, we define the ABJM loop momentum amplituhedron, which is a geometry encoding ABJM planar tree-level amplitudes and loop integrands in the three-dimensional spinor helicity space. Translating it to the space of dual momenta produces a remarkably simple geometry given by configurations of space-like separated off-shell momenta living inside a curvy polytope defined by momenta of scattered particles. We conjecture that the canonical differential form on this space gives amplitude integrands, and we provide a new formula for all one-loop n-particle integrands in the positive branch. For higher loop orders, we utilize the causal structure of configurations of points in Minkowski space to explain the singularity structure for known results at two loops.
The ABJM Momentum Amplituhedron
ABJM Scattering Amplitudes From Configurations of Points in Minkowski Space
Tomasz Lukowski and Jonah Stalknecht
July 31, 2023
===============================================================================================================
ยง INTRODUCTION
Recent years have seen remarkable progress in applying positive geometries <cit.> to the problem of finding scattering amplitudes in ๐ฉ=4 super Yang-Mills (sYM) <cit.> and ABJM <cit.> theories, based on previous works on positive Grassmannians <cit.>. Most recently, by considering the reduction of the kinematics from the four-dimensional space of massless momenta to three dimensions, the ABJM amplituhedron ๐ฒ_n^(L) was defined in <cit.> in the three-dimensional momentum twistor space. This geometry encodes planar tree-level amplitudes A_n^(0) and loop integrands A_n^(L) in its canonical differential form. Importantly, it was conjectured in <cit.> that the L-loop integrands can be explicitly obtained from the ABJM amplituhedron by subdividing it into smaller pieces that are cartesian products of tree-level geometry (๐_m) times the L-loop geometry (โ^(L)_m)
๐ฒ_n^(L)=โ_m ๐_m รโ^(L)_m ,
where ๐_m are maximal intersections of BCFW cells at tree level, termed chambers. For a given chamber, the loop geometry is the same, and can be thought of as a fibration of the loop amplituhedron over the tree one.
In this paper we define a close cousin of the ABJM amplituhedron, which we term the ABJM momentum amplituhedron ๐_n^(L), living directly in the three-dimensional spinor helicity space. To do that, we utilize the map used in the recently conjectured construction of the loop momentum amplituhedron for ๐ฉ=4 sYM <cit.>. This defines a geometry whose differential form is the integrand for the so-called positive branch of the theory. Importantly, the geometry that we obtain is remarkably simple when depicted in the space of dual momenta in three-dimensional Minkowski space. In particular, for a given tree-level configuration of points in ๐_m, the loop geometry is a subset of the Cartesian product of L curvy versions of simple polytopes we denote ฮ_n^(m). Remarkably, at one-loop the ABJM loop momentum amplituhedron in a given chamber is the curvy polytope ฮ_n^(m), and it is straightforward to find its canonical form. For each chamber one can easily identify vertices of ฮ_n^(m) and therefore find a general form of all one-loop integrands A_n^(1) in the positive branch for scattering amplitudes with n=2k particles:
A_n^(1) =โ_(T_1,T_2)โ๐ฏ^oร๐ฏ^eฮฉ_T_1,T_2^(0)โงฮฉ^(1)_T_1,T_2 .
Here ฮฉ_T_1,T_2^(0) is the tree-level canonical form for a given chamber that we labelled by a pair of triangulations (T_1,T_2) of two k-gons formed of odd/even particle labels. The one-loop differential form ฮฉ^(1)_T_1,T_2 associated with the chamber (T_1,T_2) takes the form
ฮฉ^(1)_T_1,T_2 =โ_a=1^n (-1)^a ฯ_a-1,a,a+1+โ_(a,b,c)โ T_1ฯ^+_abc-โ_(a,b,c)โ T_2ฯ^-_abc ,
and we present its explicit expression later. Results for other branches can be obtained from (<ref>) by parity operations defined in <cit.>.
For higher loops, the ABJM momentum amplituhedron ๐_n^(L) is specified by configurations of L points inside ฮ_n^(m) that are space-like separated from each other. By studying such configurations of points, we are able to give a simple explanation for the structure of the answer for the two-loop integrands known for n=4 <cit.>, n=6 <cit.> and n=8 <cit.>. To do that, we will utilise the notion of negative geometries <cit.> and use the causal structure of the three-dimensional Minkowski space.
This paper is organised as follows: we start by recalling basic facts about three-dimensional Minkowski space that will set the stage for next sections. Then, we study configurations of dual momenta that originate from the definition of the tree-level ABJM momentum amplituhedron that will allow us to define the curvy polytopes ฮ_n^(m). We follow by defining the ABJM momentum amplituhedron at loop level, and detailing its structure at one loop. In particular, we provide the explicit formula for one-loop integrands in the positive branch for all multiplicities. The final section focuses on the two-loop geometry. We conclude the paper with some open questions arising from our construction.
ยง ABJM MOMENTUM AMPLITUHEDRON
ยง.ยง Three-dimensional Minkowski Space
We will work in the three-dimensional Minkowski space โณ with signature (+,-,-). Scattering data for n-particle scattering in ABJM is encoded in a set of n=2k three-dimensional on-shell momenta p_a^ฮผ, a=1,โฆ,2k, ฮผ=0,1,2, with (p_a)^2=0. We assume that particles with odd labels are outgoing and the ones with even labels are incoming. This leads to the following momentum conservation
โ_a oddp_a^ฮผ-โ_a evenp_a^ฮผ=0 .
In planar theory, this data can be equivalently encoded using dual coordinates
p_a^ฮผ x_a+1^ฮผ-x_a^ฮผ ,
that define a null polygon in Minkowski space. For convenience, we choose x_1=0. This allows us to invert relation (<ref>) to get
x_b=โ_a=1^b-1(-1)^ap_a .
We denote by โ_x={yโโณ:(x-y)^2=0} the lightcone of point x.
The on-shell condition p^2=0 can be resolved by introducing three-dimensional spinor helicity variables and writing
p^ฮฑฮฒ=[ -p^0+p^2 p^1; p^1 -p^0-p^2 ] = ฮป^ฮฑฮป^ฮฒ .
Scattering amplitudes are invariant under the action of the Lorentz group SL(2) on ฮป and therefore a point in the kinematic space is an element of the orthogonal Grassmannian ฮปโ OG(2,2k)๐ฆ_2k, where orthogonality is defined with respect to ฮท=diag(+,-,โฆ,+,-). In the following, we will repeatedly use the spinor brackets โจ abโฉ:=ฮป_a^1ฮป_b^2-ฮป_a^2ฮป_b^1.
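For orientation, a small numerical sketch of these kinematic definitions (0-indexed arrays; the p^ฮผ components are read off from the matrix parametrization above) is:

import numpy as np

ETA3 = np.diag([1.0, -1.0, -1.0])        # Minkowski metric (+,-,-)

def bracket(la, lb):
    # <ab> = la^1 lb^2 - la^2 lb^1
    return la[0] * lb[1] - la[1] * lb[0]

def momentum_vector(lam):
    # p^{alpha beta} = lam^alpha lam^beta; read off p^mu from the matrix parametrization
    M = np.outer(lam, lam)
    p0 = -(M[0, 0] + M[1, 1]) / 2.0
    p2 = (M[0, 0] - M[1, 1]) / 2.0
    p1 = M[0, 1]
    return np.array([p0, p1, p2])        # automatically null: p.eta.p = det M = 0

def dual_points(momenta):
    # x_1 = 0 and x_b = sum_{a<b} (-1)^a p_a (1-based labels a)
    x = [np.zeros(3)]
    for a, p in enumerate(momenta[:-1], start=1):
        x.append(x[-1] + (-1) ** a * p)
    return x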
ยง.ยง Tree-level momentum amplituhedron
Following <cit.>, the tree-level ABJM momentum amplituhedron ๐_2k^(0) =ฯ_ฮ(OG_+(k,2k)) is a subset of the kinematic space ๐ฆ_2k which is the image of the positive orthogonal Grassmannian OG_+(k,2k) through the map
[ ฯ_ฮ OG_+(k,2k) โ OG(2,2k); C โฆ ฮป=(Cยทฮทยทฮ)^โฅยทฮ , ]
where ฮโ M(k+2,2k) is a fixed matrix [We use a modified version of the definition in <cit.> that can be mapped to the original one by the parity duality.]. To get the desired geometry, in this paper we slightly modify the original definition and demand that the matrix ฮ is such that the image contains points satisfying the following sign-flip pattern: โจ ii+1โฉ>0 for all i=1,โฆ,n, the sequence {โจ 12โฉ,โจ 13โฉ,โฆ,โจ 1nโฉ} has k-2 sign flips and all planar Mandelstams are negative. We found that taking a generic twisted positive matrix ฮ, i.e. ฮยทฮท has all maximal minors positive, leads to the correct sign-flip pattern.
Importantly, for every point ฮปโ๐_2k^(0), when translated to the dual space we get a configuration of dual points x_a, a=1,โฆ,n forming a null polygonal loop, that satisfy the conditions
(x_a-x_b)^2<0, for |a-b|>1 ,
and all even-indexed points are in the future of their (null-separated) odd-indexed two neighbours.
These configurations of points x_a have a very interesting and intricate structure. In particular, for any three generic points x_a, x_b and x_c that are not neighbours, we find two points in the intersection of their lightcones โ_x_aโฉโ_x_bโฉโ_x_c, one in the future of points (x_a,x_b,x_c), and one in the past. We denote the future (past) point as p_abc^+ (p_abc^-).
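Numerically, the two points p_abc^ยฑ can be obtained by turning two differences of lightcone conditions into linear equations and solving the remaining quadratic; a rough Python sketch, assuming a generic configuration and signature (+,-,-), is:

import numpy as np

ETA3 = np.diag([1.0, -1.0, -1.0])

def msq(v):
    return float(v @ ETA3 @ v)

def lightcone_triple_intersection(xa, xb, xc):
    # solve (p - x_i)^2 = 0 for i = a, b, c; differences of the conditions are linear in p
    A = 2.0 * np.vstack([(xa - xb) @ ETA3, (xa - xc) @ ETA3])
    rhs = np.array([msq(xa) - msq(xb), msq(xa) - msq(xc)])
    p0 = np.linalg.lstsq(A, rhs, rcond=None)[0]      # particular solution of the linear system
    v = np.linalg.svd(A)[2][-1]                      # one-dimensional null direction of A
    d = p0 - xa                                      # plug p = p0 + t v into (p - x_a)^2 = 0
    a2, a1, a0 = msq(v), 2.0 * float(d @ ETA3 @ v), msq(d)
    sols = [p0 + t.real * v for t in np.roots([a2, a1, a0])]
    return sorted(sols, key=lambda p: p[0])          # [p^-, p^+], ordered by time component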
Motivated by our future considerations, let us define the region of the Minkowski space
๐ฆ^โค0_x_aโ{ xโโณ: (x-x_a)^2โค0 for a=1,โฆ ,n} ,
containing all points that
are space-like (or light-like) separated from all x_a. This region is non-empty since x_aโ๐ฆ_x_a^โค0 for all a=1,โฆ,n. Moreover, it can be naturally divided into two pieces: a compact one, which we denote ฮ_n(x_a), and a non-compact one. We depict the compact region for n=4 and n=6 in figure <ref>.
It is easy to see that the region ฮ_4(x_a) is a curvy version of a tetrahedron, while ฮ_6(x_a) is a curvy version of a cube. The latter has two vertices coming from triple intersections of lighcones, p_135^+ and p_246^-, in addition to the six vertices x_i. For higher n, the shape of ฮ_n(x_a) is more involved and starts to depend on the details of the configuration of x's. For example, for n=8 there are four distinct geometries ฮ_8^(m)(x_a), each of them contains eight vertices x_a together with triple intersections of lightcones:
๐ฑ(ฮ_8^(1)(x_a))= {x_a,p_135^+,p_157^+,p_246^-,p_268^-} ,
๐ฑ(ฮ_8^(2)(x_a))= {x_a,p_137^+,p_357^+,p_246^-,p_268^-} ,
๐ฑ(ฮ_8^(3)(x_a))= {x_a,p_135^+,p_157^+,p_248^-,p_468^-} ,
๐ฑ(ฮ_8^(4)(x_a))= {x_a,p_137^+,p_357^+,p_248^-,p_468^-} .
These four cases exactly correspond to the four chambers for n=8 in <cit.>! One notices that the labels of the intersecting cones for each chamber can be thought of as products of triangulations of two 4-gons: one formed of odd labels, and one formed of even labels. This pattern continues for higher n. We found that there are exactly C_k-2^2 different geometries ฮ_n^(m) for n=2k particles, where C_p is the pth Catalan number. All these geometries have n vertices corresponding to x_a together with exactly (k-2) triple intersections of lightcones of points with odd labels, and (k-2) intersections with even labels. The geometry changes at non-generic configurations of points x_a, where an intersection of four lightcones is possible, corresponding to a bistellar flip in one of the aforementioned triangulations. All curvy polytopes ฮ_n^(m) are simple, meaning that each vertex has exactly three edges originating from it.
We note that the two triangulations of k-gons are more than just a mnemonic to distinguish distinct geometries: they in fact provide part of the skeleton of the dual geometry of ฮ_n^(m)! The fact that the facets of this dual are all triangles is equivalent to the statement that polytopes ฮ_n^(m) are simple. The triangles with vertices (a,b,c) in the dual correspond to vertices p_abc of ฮ_n^(m). Since the dual of a triangulation of a k-gon is a planar tree Feynman diagram for k particles, the skeleton of our geometry is that of a null polygon between points x_a, with two distinct Feynman diagrams drawn between the odd and even points, where different choices of Feynman diagrams label different chambers. Therefore, distinct ฮ_n^(m) correspond to all pairs of two planar tree Feynman diagrams for k particles.
ยง.ยง Loop-level momentum amplituhedron
Given a fixed tree-level configuration ฮปโ๐_n^(0), we will define the ABJM loop momentum amplituhedron using the map
[ ฮฆ_ฮป G(2,2k)^L โ GL(2)^L; (D_1,โฆ,D_L) โฆ (โ_1,โฆ,โ_L) ]
that was introduced for ๐ฉ=4 sYM in <cit.>. It is defined by
โ_l=โ_a<b(ab)_D_lโ_ab^โ/โ_a<b(ab)_D_lโจ abโฉ ,
where the matrix D_l is an element of Grassmannian space G(2,2k) and (ab)_D_l are 2ร 2 minors of the matrix D_l. Moreover, we define
โ_ab^โ= โ_c=b+1^n(-1)^cฮป_aโจ bcโฉฮป_c-โ_c=a+1^n(-1)^cฮป_bโจ acโฉฮป_c .
The image โ_l=ฮฆ_ฮป(D_l) is generically not a symmetric matrix, and therefore, in order to adapt this construction to the ABJM theory, we need to restrict to a subset of matrices D_l which results in three-dimensional off-shell momenta. This imposes additional constraints on D_l, which correspond to the symplectic condition described in <cit.>.
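A direct numerical transcription of this map (0-indexed spinor labels, so the sign (-1)^c becomes (-1)^(c+1); meant only as a sketch) is:

import numpy as np

def bracket(la, lb):
    return la[0] * lb[1] - la[1] * lb[0]

def ell_star(lam, a, b):
    # l*_ab from the definition above, for 0-indexed labels a < b
    n = len(lam)
    out = np.zeros((2, 2))
    for c in range(b + 1, n):
        out += (-1) ** (c + 1) * bracket(lam[b], lam[c]) * np.outer(lam[a], lam[c])
    for c in range(a + 1, n):
        out -= (-1) ** (c + 1) * bracket(lam[a], lam[c]) * np.outer(lam[b], lam[c])
    return out

def phi_lambda(lam, D):
    # l = sum_{a<b} (ab)_D l*_ab / sum_{a<b} (ab)_D <ab>, with (ab)_D the 2x2 minors of D
    n = len(lam)
    num, den = np.zeros((2, 2)), 0.0
    for a in range(n):
        for b in range(a + 1, n):
            minor = D[0, a] * D[1, b] - D[0, b] * D[1, a]
            num += minor * ell_star(lam, a, b)
            den += minor * bracket(lam[a], lam[b])
    return num / den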
We define the loop momentum amplituhedron ๐_2k^(L) as the image of a particular subset ๐_Lโ G(2,2k)รโฆร G(2,2k) obtained as follows. We take a matrix Cโ OG_+(k) such that ฮป=ฯ_ฮ(C) and construct its T-dual version ฤ (see <cit.>). Then we say that (D_1,โฆ,D_L)โ๐_L if the matrices (ฤ,D_l_1,โฆ,D_l_p) are positive for all subsets {l_1,โฆ,l_p}โ{1,โฆ,L }. Finally, we can define
๐_2k^(L)=ฮฆ_ฮป(๐_L) .
ยง.ยง One loop
We start our exploration of the loop momentum amplituhedron by examining the one-loop geometry ๐_2k^(1), which is relatively basic. One straightforward fact that can be derived from definition (<ref>) is that ๐_2k^(1)โ๐ฆ^โค0_x_a, which means that all points โโก xโ๐_2k^(1) are space-like separated from all points x_a. What is far less obvious is the fact that all points of the loop momentum amplituhedron sit in the compact part ฮ_2k(x_a) of ๐ฆ^โค0_x_a. We have performed extensive numerical tests to confirm that they are actually equal:
๐_2k^(1)=ฮ_2k^(m) ,
where the geometry on the right hand side depends on the choice of tree-level ฮป. Most importantly, for all points ฮป in the same chamber, the loop geometry ๐_2k^(1) looks the same, confirming that the geometry factorises as in (<ref>).
Knowing the geometry, we can find its canonical differential form. From the factorisation property, we immediately get that:
A_n^(1)=โ_m ฮฉ_n,m^(0)โงฮฉ_n,m^(1) ,
where ฮฉ_n,m^(0) is the tree-level form associated to the chamber C_m.
In the following we will derive explicit expressions for ฮฉ_n,m^(1)=ฮฉ(ฮ_2k^(m)).
Since ฮ_2k^(m) is just a curvy version of a simple polytope in three dimensions, the canonical form should naively be just the sum over vertices of the dlog forms for all facets that meet at that vertex
ฮฉ_naive(ฮ_2k^(m))=โ_(abc)โ๐ฑ(ฮ_2k^(m))ฯ_abc ฯ_abc ,
where
ฯ_abc =dlog(x-x_a)^2โงdlog(x-x_b)^2โงdlog(x-x_c)^2 .
The signs ฯ_abc can be found by demanding that the form is projective, see <cit.>. We emphasize that the differential forms ฯ_abc separately are not dual conformally invariant, however, the final answer (<ref>) is. The difference in our case compared to the story of simple polytopes is that the differential form (<ref>) also has support outside of ฮ_2k^(m). More precisely, it has a non-zero residue at all points p_abc^ยฑ, while only one of them is a vertex of ฮ_2k^(m). Since
Res_x=p_abc^ยฑฯ_abc=1 ,
we need to find a form that contributes with opposite signs on the two points. The natural candidate is the triangle integrand
ฯ^โณ_abc=4โ(x_ab^2x_bc^2x_ac^2)d^3x/(x-x_a)^2(x-x_b)^2(x-x_c)^2
=ยฑdlog(x-x_a)^2/(x-p_abc^ยฑ)^2โงdlog(x-x_b)^2/(x-p_abc^ยฑ)^2โงdlog(x-x_c)^2/(x-p_abc^ยฑ)^2
for which
Res_x=p_abc^ยฑฯ^โณ_abc=ยฑ1 .
Therefore, the form that is supported only on ฮ_2k^(m) is
ฮฉ(ฮ_2k^(m))=โ_(abc)โ๐ฑ(ฮ_2k^(m))ฯ_abc ฯ^ยฑ_abc=โ_(abc)ฯ_abc (ฯ_abcยฑฯ_abc^โณ) ,
where the relative sign in the bracket depends on which solution of the triple intersection is a vertex of the geometry. Performing case-by-case studies, one finds that all intersections with even labels have negative sign, while all with odd labels positive sign. Therefore, we conjecture that the one-loop ABJM integrand for the positive branch for any n=2k is (<ref>). The formula agrees with the one provided in <cit.> for n=4,6,8,10 [It would also be interesting to compare our result to the all-multiplicity one-loop formula found in <cit.> for a subset of component amplitudes.].
As argued in <cit.>, the complete 2k-point ABJM integrand is given by a sum over 2^k-2 different branches. While our current construction of the geometry gives the integrand only for the positive branch, the integrands for other branches can be obtained from (<ref>) by making use of the parity operations introduced in <cit.>. These effectively interchange certain p_abc^+โ p_abc^-, while keeping all points x_a unchanged. Therefore, we can think of other branches as having exactly the same shape ฮ_2k^(m) as the positive branch, but with some of the triple intersection points swapped. It is clear that such a parity operation will only flip the signs of the corresponding triangle integrands, while leaving the rest of the form unchanged. Therefore, if one is able to classify all parity operations for given n, the full amplitude can be derived from our geometric construction by summing over these relabelled geometries.
ยง.ยง More loops
In this section we will have a first look at implications of our construction for the L-loop problem for L>1. In this case, each off-shell loop momentum โ_l is space-like separated from all points x_a and therefore sits inside ฮ_2k^(m). However, we also need to impose mutual positivity constraints that translate into the requirement that every pair of loop momenta is space-like separated (โ_l_1-โ_l_2)^2<0 for all l_1,l_2=1,โฆ,L.
This is particularly simple for n=4 at two loops, where for a fixed position of momentum โ_1, the geometry accessible to the momentum โ_2, depicted in figure <Ref>, does not depend on the position of โ_1.
We are therefore interested in the region inside ฮ_4 which sits outside the lightcone of โ_1. Importantly, the latter intersects the lightcones of points x_1 and x_3 in the future and the lightcones of points x_2 and x_4 in the past. It is not obvious how to directly find the canonical differential form of this region. We will however circumvent this problem by considering negative geometries, see <cit.>. It is clear that
๐_4^(2)โชโ_4=๐_4^(1)ร๐_4^(1) ,
where โ_4={(โ_1,โ_2)โฮ_4รฮ_4: (โ_1-โ_2)^2>0}. This region further decomposes into โ_4=โ_4,โ_1โบโ_2โชโ_4,โ_1โปโ_2, where โ_4,โ_1โบโ_2 (resp. โ_4,โ_1โปโ_2) contains all points for which โ_1 is in the past (resp. future) of โ_2. The boundary structure of these two regions is significantly simpler than the one of ๐_4^(2), see figure <ref>.
In particular, the only boundaries accessible by momentum โ_1 are (โ_1-โ_2)^2=0, (โ_1-x_2)^2=0 and (โ_1-x_4)^2=0 (resp. (โ_1-โ_2)^2=0, (โ_1-x_1)^2=0 and (โ_1-x_3)^2=0). By comparing with known results, there is a natural differential form that we can associate to each of these regions:
ฮฉ(โ_4,โ_1โบโ_2)=x_13^2x_24^2 d^3โ_1โงd^3โ_2/(โ_1-x_2)^2(โ_1-x_4)^2(โ_1-โ_2)^2(โ_2-x_1)^2(โ_2-x_3)^2
and ฮฉ(โ_4,โ_1โปโ_2)=ฮฉ(โ_4,โ_1โบโ_2)|_โ_1โโ_2.
Therefore, the two-loop integrand is given by
ฮฉ_4^(2)=ฮฉ^(1)_1(โ_1)โงฮฉ^(1)_1(โ_2)-ฮฉ(โ_4,โ_1โบโ_2)-ฮฉ(โ_4,โ_1โปโ_2) .
This agrees with <cit.>.
For higher number of points at two loops, we observe that, when going to negative geometries, we can again define two regions: โ_n,โ_1โบโ_2 and โ_n,โ_1โปโ_2, where loop momenta are time-like separated and time-ordered. Unlike for n=4, we get different regions for โ_2 depending on the position of โ_1. However, there is a simple classification of all these regions. We focus first on โ_6,โ_1โบโ_2 and notice that for fixed โ_1 the only boundaries for momentum โ_2 are at (โ_1-โ_2)^2=0 and at the lightcones of points x_a that intersect the lightcone of โ_1 in the future. There are four possibilities:
{โ_โ_1โฉโ_x_1โ โ
,โ_โ_1โฉโ_x_3โ โ
} , {โ_โ_1โฉโ_x_1โ โ
,โ_โ_1โฉโ_x_5โ โ
} , {โ_โ_1โฉโ_x_3โ โ
,โ_โ_1โฉโ_x_5โ โ
} , {โ_โ_1โฉโ_x_1โ โ
,โ_โ_1โฉโ_x_3โ โ
,โ_โ_1โฉโ_x_5โ โ
} ,
as depicted in figure <ref>.
Similar analysis holds true for boundaries for โ_1 when โ_2 is fixed: the four possibilities are {2,4}, {2,6}, {4,6} or {2,4,6}.
We found that there are 13 allowed regions in this geometry (notice that region ({2,4}, {1,5}) and its two cyclic rotations are not allowed), which correspond to the bipartite graphs in <cit.>. Therefore, it should be possible to rewrite the answer found there such that each term matches one of these 13 regions. A similar structure is also present in the n=8 two-loop answer, and it naturally follows from our construction, since it reflects which lightcones intersect โ_โ_1 and โ_โ_2 in the past/future. At the moment we do not have a good understanding of how to associate differential forms to these regions and leave it for future work.
Finally, by moving to higher loop orders, one can use negative geometries and study configurations of loop momenta inside the region ฮ_n^(m) that are (partially) time-ordered in three-dimensional Minkowski space, as suggested in <cit.>. Our construction provides a simple geometric picture that can be used to organise the calculations based on the causal structure of the corresponding configuration of loop momenta.
ยง CONCLUSIONS AND OUTLOOK
In this paper we defined the spinor helicity version of the ABJM amplituhedron and, by translating it to the space of dual momenta, we found a surprisingly simple geometry associated with the causal structure of configurations of points in three-dimensional Minkowski space. This allowed us to find a formula for all integrands at one loop, and shed some light on the structure of the answer at two loops and beyond.
Intriguingly, if we knew nothing about amplituhedra, we could still define ฮ_n(x_a) as a compact region in the Minkowski space that is space-like separated from a null polygon with n vertices. By studying this region we would rediscover the structure of scattering amplitudes in ABJM theory! However, it is far from obvious why ABJM theory is selected from all possible three-dimensional quantum field theories. Finding the answer to this question might shed light on generalisations of our construction beyond ABJM.
There are many interesting avenues to follow based on this new picture of scattering in ABJM. The most urgent one is to better understand the geometry itself and in particular to provide a constructive way to derive its differential forms. It will be crucial to check whether the structure of the answer that we saw at one loop can be systematically generalised to higher loops, promising all multiplicity answers for ABJM loop integrands.
Finally, the ABJM momentum amplituhedron is a reduction of the momentum amplituhedron in ๐ฉ=4 sYM. A natural question that arises is whether a similar basic geometry lives in the four-dimensional space of dual momenta in (2,2) signature.
The authors would like to thank Song He, Yu-tin Huang and Chia-Kai Kuo for useful discussions on their work. We would like to thank the Dublin Institute for Advanced Studies for hosting the workshop โAmplituhedron at 10โ, which provided the initial motivation for this work.
|
http://arxiv.org/abs/2306.06048v1
|
20230609171650
|
How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models?
|
[
"Yifei Ming",
"Yixuan Li"
] |
cs.CV
|
[
"cs.CV",
"cs.CY",
"cs.LG"
] |
Article Title]How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models?
1]Yifei [email protected]
1]Yixuan [email protected]
[1]Department of Computer Sciences, University of Wisconsin-Madison
Recent large vision-language models such as CLIP have shown remarkable out-of-distribution (OOD) detection and generalization performance. However, their zero-shot in-distribution (ID) accuracy is often limited for downstream datasets.
Recent CLIP-based fine-tuning methods such as prompt learning have demonstrated significant improvements in ID classification and OOD generalization where OOD labels are available. Nonetheless, it remains unclear whether the model is reliable against semantic shifts without OOD labels. In this paper, we aim to bridge the gap and present a comprehensive study to understand how fine-tuning impacts OOD detection for few-shot downstream tasks. By framing OOD detection as multi-modal concept matching, we establish a connection between fine-tuning methods and various OOD scores. Our results suggest that a proper choice of OOD scores is essential for CLIP-based fine-tuning. In particular, the maximum concept matching (MCM) score consistently provides a promising solution. We also show that prompt learning demonstrates state-of-the-art OOD detection performance over the zero-shot counterpart.
[
[
July 31, 2023
=================
ยง INTRODUCTION
Machine learning (ML) is undergoing a paradigm shift with the rise of models that are trained on massive data and are adaptable to a wide range of downstream tasks. Popular pre-trained large vision-language modelsย <cit.> demonstrate remarkable performance, and allow researchers without extensive computation power to benefit from these models. It is now the common practice of the ML community to adopt pre-trained models for transfer learning on downstream tasks rather than learning from scratch. Despite the promise, the safety risks of these large pre-trained models can be potentially inherited by all the fine-tuned models. Without appropriately understanding the safety risks, development on top of pre-trained models can exacerbate and propagate safety concerns writ large, causing profound impacts on society.
In response to these urgent challenges, the overall objective of this paper is to systematically understand the out-of-distribution risks of learning with pre-trained vision-language models. This paper seeks to address the research question that arises in building responsible and ethical AI models:
How does fine-tuning influence out-of-distribution (OOD) detection for large vision-language models?
Detecting OOD samples is crucial for machine learning models deployed in the open world, where samples from unseen classes naturally emerge, and failure to detect them can have severe consequences.
Despite
increasing attentionย <cit.>, OOD detection research for large vision-language models has been scant. Among the most recent works, <cit.> investigated training-free OOD detection based on the pre-trained CLIP model. However, the impact of fine-tuning on OOD detection has been unexplored in the vision-language literature.
In this paper, we bridge the gap by investigating how fine-tuning large vision-language models affects OOD detection.
Parameter-efficient fine-tuning methods have been popularized in recent years. In particular, prompt learning ย <cit.> optimizes learnable word embeddings of the prompts, while adaptors directly optimize the internal feature representationsย <cit.>. Both methods are parameter-efficient as image and text encoders are frozen during fine-tuning, and have shown significant improvement for few-shot in-distribution (ID) classification.
Complementary to existing research, we focus on OOD detection for fine-tuned models using multi-modal concept matching. At the core of the concept matching framework, we use the few-shot ID training set and textual descriptions of the labels to derive a set of visual and textual features that represent the typical features for each ID class. We can measure OOD uncertainty based on the distance between the input feature and the nearest ID prototype.
Based on the concept matching framework, we then present a comprehensive and systematic study to explore how different parameter-efficient fine-tuning methods impact OOD detection performance, and contribute unexplored findings to the community. We disentangle various aspects such as adaptation methods and OOD scoring functions.
Interestingly, we observe that parameter-efficient fine-tuning can significantly improve OOD reliability compared to zero-shot CLIP models. In particular, prompt learning methods exhibit very competitive performance when coupled with the maximum concept matching (MCM) scoreย <cit.>.
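For reference, the MCM score is the maximum softmax probability over temperature-scaled cosine similarities between the image feature and the class (concept) features; a small Python sketch with our own variable names is:

import numpy as np

def mcm_score(image_feat, concept_feats, tau=1.0):
    # cosine similarities between the L2-normalized image feature and each class prototype,
    # softmax with temperature tau; the maximum probability is the ID-ness score
    z = image_feat / np.linalg.norm(image_feat)
    T = concept_feats / np.linalg.norm(concept_feats, axis=1, keepdims=True)
    logits = (T @ z) / tau
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return float(probs.max())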
Furthermore, we delve deeper into prompt learning and analyze how
the pre-trained features are modified during fine-tuning, and how it impacts OOD detection as a consequence. We study the impact of shots, architectures, and explore the effects of prompt learning on various downstream tasks, including the challenging ImageNet-1k (ID) benchmark. Our results demonstrate that prompt learning perturbs the pre-trained feature space that benefits both ID and OOD performance. More encouragingly, the trend holds consistently across different settings, highlighting its potential for reliable fine-tuning in vision-language modeling.
We summarize the contributions of this work as follows:
* We provide a timely and systematic study on how CLIP-based fine-tuning influence OOD detection in the few-shot setting. Our study disentangles various factors, including adaptation methods and OOD scoring functions.
* We present novel evidence that parameter-efficient fine-tuning methods do not deteriorate pre-trained features. Instead, they can improve both ID and OOD performance with a proper OOD scoring function, especially the MCM score. We show that prompt learning consistently demonstrates state-of-the-art OOD detection performance over the zero-shot counterpart.
* We provide an in-depth analysis of prompt learning's impact on the feature space for OOD detection and conduct comprehensive ablations across datasets, architectures, and the number of shots with various OOD detection scores.
ยง PRELIMINARIES
Contrastive vision-language models.
Recent large vision-language models have shown great potential for various computer vision tasks. In this paper, we focus on CLIP-like models <cit.>, which adopt a dual-stream architecture with one text encoder f: t → ℝ^d and one image encoder g: x → ℝ^d. CLIP is pre-trained on a massive web-scale image-caption dataset with a multi-modal contrastive loss that promotes the alignment of features from different modalities. CLIP learns transferable feature representations and demonstrates promising zero-shot generalization performance <cit.>. Despite the promise, existing vision-language models perform zero-shot classification in a closed-world setting. That is, they match an input to a fixed set of categories, even if the input is irrelevant to all of them. For example, a bird in Figure <ref> can be blindly predicted as one of the in-distribution classes 𝒴_in = {headphone, cat, flamingo, butterfly}. This motivates the importance of OOD detection for vision-language models.
OOD detection for vision-language models. In the open-world setting, the goal of OOD detection is to detect samples that do not belong to the ID classes 𝒴_in. Here ID classes are defined w.r.t. the classification task of interest, instead of the classes used in pre-training. Accordingly, OOD is defined w.r.t. the ID classes, not the data distribution during pre-training. <cit.> explore zero-shot OOD detection for the pre-trained CLIP model, without adapting to the ID dataset. Instead, we focus on the setting where CLIP models are fine-tuned on a few-shot dataset 𝒟_in, and hence are better adapted to the downstream ID task. We evaluate the fine-tuned CLIP model on a combination of ID and OOD datasets 𝒟_in ∪ 𝒟_out, where 𝒟_out = {x_i, y_i^out}_{i=1}^{m} contains inputs from semantically different categories y^out ∉ 𝒴_in. Formally, given an input x,
OOD detection can be formulated as:
G(x; f, g) =
  1,   if S(x; f, g) ≥ λ
  -1,  if S(x; f, g) < λ,
where S(·) is a scoring function that measures OOD uncertainty. In practice, λ is chosen so that a high fraction of ID data (e.g., 95%) is above the threshold.
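As a minimal illustration of this detector, the sketch below thresholds precomputed scores, calibrating λ on a held-out ID set so that 95% of ID samples fall above it; the function and variable names here are illustrative assumptions, not part of any released implementation.

import numpy as np

def calibrate_threshold(id_scores, tpr=0.95):
    # Choose lambda so that a fraction `tpr` of ID samples scores above it.
    return np.quantile(id_scores, 1.0 - tpr)

def ood_detector(scores, lam):
    # Returns 1 for predicted ID, -1 for predicted OOD, as in the formulation above.
    return np.where(scores >= lam, 1, -1)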
Parameter-efficient fine-tuning. To improve the performance on downstream tasks, parameter-efficient approaches are proposed to fine-tune CLIP on datasets of interest. Prompt learning and adaptor tuning have recently gained popularity and demonstrated improved results over zero-shot settings. In particular, prompt learning optimizes the word embeddings of the prompts, while adaptors directly optimize the internal feature representations. Both methods are parameter-efficient as image and text encoders are frozen during fine-tuning. In what follows, we introduce prompt-based and adaptor-based methods respectively.
For a downstream dataset with K in-distribution classes 𝒴_in = {y_1, y_2, ..., y_K}, prompt learning methods such as <cit.> introduce M learnable context vectors v_i ∈ ℝ^e to replace hand-engineered text prompts such as “a photo of a [CLASS]”, where e is the dimension of word embeddings. For each class y_k, we obtain its contextualized representation t_k = [v_1, v_2, ⋯, v_M, w_k] by concatenating the context vectors and the word embedding w_k ∈ ℝ^e of the label (upper left, Figure <ref>). To avoid overfitting and improve generalization performance, <cit.> further introduces instance-conditional prompts via a meta-network which produces a meta token m(x) given the visual feature of the input x. The meta token is added to each context token: v_i(x) = v_i + m(x) for i ∈ {1, 2, ⋯, M}. Therefore, the prompt for class k is conditioned on each input: t_k(x) = [v_1(x), v_2(x), ⋯, v_M(x), w_k].
To learn the context vectors, the cross-entropy loss is used in fine-tuning:
p(y_k | x) = exp(s_k(x) / τ) / ∑_{i=1}^{K} exp(s_i(x) / τ),
where s_k(x) = g(x) · f(t_k) / (‖g(x)‖ · ‖f(t_k)‖) is the cosine similarity of input x with the k-th label, and τ is the temperature.
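For concreteness, a minimal sketch of this classification rule is given below, assuming the image feature and the K adapted text features have already been extracted from the frozen encoders; the temperature value is a placeholder rather than the exact setting used in our experiments.

import torch.nn.functional as F

def class_probabilities(image_feat, text_feats, tau=0.01):
    # image_feat: (d,); text_feats: (K, d), one adapted prompt feature per ID class.
    image_feat = F.normalize(image_feat, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    sims = text_feats @ image_feat          # cosine similarities s_k(x), shape (K,)
    return F.softmax(sims / tau, dim=-1)    # p(y_k | x)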
Alternatively, adaptor-based methods directly optimize the feature representations g(x) instead of learning context vectors. Specifically, given a K-way-D-shot ID training set (consisting of K classes with D examples per class), <cit.> propose a training-free adaptation method which extracts all the visual features W_g = [g(x_{1,1}), g(x_{1,2}), ⋯, g(x_{K,D})] ∈ ℝ^{KD × d} from the few-shot training dataset. For each input x, we can obtain K × D cosine similarities
s_{k,d}(x) = g(x) · g(x_{k,d}) / (‖g(x)‖ · ‖g(x_{k,d})‖). The cosine similarities are scaled by an exponential function s̃: s ↦ exp(-β + β s) with a hyperparameter β that modulates the sharpness. Therefore, we can obtain an average similarity vector for each class based on visual features, s̃_k(x) = (1/D) ∑_{d=1}^{D} s̃_{k,d}(x). The final similarity for class k is a weighted sum of similarities from the two modalities: α s̃_k(x) + s_k(x). To achieve better few-shot ID performance, <cit.> set the visual features W_g as learnable parameters and denote the method as , where F stands for fine-tuning.
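The following sketch illustrates this adaptor-style scoring, assuming cached few-shot visual features of shape (K, D, d) and per-class textual features; the values of alpha and beta are placeholders rather than the tuned hyperparameters of the original method.

import torch
import torch.nn.functional as F

def adaptor_logits(image_feat, cache_feats, text_feats, alpha=1.0, beta=5.5):
    # cache_feats: (K, D, d) few-shot visual features; text_feats: (K, d).
    image_feat = F.normalize(image_feat, dim=-1)
    cache_feats = F.normalize(cache_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    vis_sims = torch.einsum('kdc,c->kd', cache_feats, image_feat)  # s_{k,d}(x)
    vis_sims = torch.exp(-beta * (1.0 - vis_sims))                 # exp(-beta + beta * s)
    vis_score = vis_sims.mean(dim=1)                               # average per class
    txt_score = text_feats @ image_feat                            # s_k(x)
    return alpha * vis_score + txt_score                           # weighted sum of modalities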
Despite the stronger downstream classification performance, it remains unknown if fine-tuning leads to more reliable OOD detection at test time.
We aim to provide a comprehensive understanding in this paper.
ยง METHOD
ยง.ยง OOD detection with fine-tuning
We investigate OOD detection with parameter-efficient fine-tuning on downstream tasks. We present a unified framework in Figure <ref>, where the learnable part of the CLIP model is marked with an “unlock” icon while the frozen part is marked with a “lock” icon. For prompt learning methods such as and , the cosine similarity of the input feature with the k-th class, s_k(x) = g(x) · f(t_k) / (‖g(x)‖ · ‖f(t_k)‖), is derived based on the adapted textual feature vector t_k.
Alternatively, adaptor-based methods such as and first scale the cosine similarities of visual prototypes and perform a weighted sum with the similarities of textual prototypes. Therefore, we can view as an ensemble method that utilizes multi-modal prototypes.
To summarize, for each adaptation algorithm 𝒜, OOD detection can be performed by:
G_𝒜(x; f, g) =
  ID,   if S(x; f, g) ≥ λ
  OOD,  if S(x; f, g) < λ,
where 𝒜 can be instantiated by any of the adaptation methods introduced above. Therefore, the OOD detector G_𝒜(·) can be viewed as a “safeguard” for the classification model. Next, we introduce various OOD scoring functions S(x; f, g); G_𝒜(x; f, g) is defined implicitly, as each scoring function corresponds to an OOD detector G.
ยง.ยง OOD score for vision-language models
Recently, <cit.> propose a conceptual framework of CLIP-based OOD detection via concept matching, where the textual feature f(t_k) is viewed as the concept prototype for ID class k ∈ {1, 2, ..., K}. OOD uncertainty is then characterized by the distance from the visual feature of the input to the closest ID textual prototype. That is, images closer to one of the ID prototypes are more likely to be ID and vice versa. <cit.> suggest that softmax scaling with a proper temperature τ provably leads to state-of-the-art performance under the zero-shot (training-free) setting. Specifically, the maximum concept matching (MCM) score is defined as:
S_MCM(x) = max_{k ∈ [K]} exp(s_k(x) / τ) / ∑_{j=1}^{K} exp(s_j(x) / τ),
where the temperature τ needs to be tuned on the downstream dataset. As a special case of MCM, we use MSP to denote the MCM score when the temperature τ_d is set to the default for CLIP models at inference time (e.g., 100 for CLIP-B/16).
Additionally, we consider a simpler scoring function based on the maximum similarity (MS) among ID prototypes before applying softmax scaling:
S_MS(x) = max_{k ∈ [K]} s_k(x),
which does not require any hyperparameter tuning. We show in Sectionย <ref> that the MS score demonstrates strong OOD detection performance with fine-tuning, especially for fine-grained ID datasets.
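Both scores are straightforward to compute once the K cosine similarities are available; the sketch below assumes they are stored in a tensor sims of shape (K,), and treats the temperature as a tunable argument.

import torch.nn.functional as F

def mcm_score(sims, tau=1.0):
    # Maximum softmax probability over ID prototypes with temperature tau.
    return F.softmax(sims / tau, dim=-1).max().item()

def ms_score(sims):
    # Maximum raw cosine similarity among ID prototypes; no temperature to tune.
    return sims.max().item()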
We now proceed to experiments where we investigate the impact of fine-tuning on real-world tasks.
ยง EXPERIMENTS
ยง.ยง Setup
Datasets. Following <cit.>,
we consider a wide range of real-world ID datasets with various semantics and number of classes: Caltech-101ย <cit.>, Stanford-Carsย <cit.>, Food-101ย <cit.>, Oxford-Petsย <cit.> and ImageNet-1kย <cit.>.
For each ID dataset, we followย <cit.> and construct the training set with D random samples per class, while the original test set is used for testing. We use D=16 by default and study the impact of shots as ablations in Sectionย <ref>.
For OOD test datasets, we use the same ones inย <cit.>, including subsets of iNaturalistย <cit.>, Sunย <cit.>, Placesย <cit.>, and Textureย <cit.>.
For each OOD dataset, the categories are not overlapping with the ID dataset. For ImageNet-1k as ID, we also consider two additional OOD datasets ImageNet-Oย <cit.> and OpenImage-Oย <cit.>.
Models and training details. For pre-trained models, we use CLIP-B/16 as the default backbone for main experiments, which uses ViT-B/16ย <cit.> as the image encoder. The impact of backbones is included in the ablation studies.
We use to denote pre-trained CLIP without fine-tuning. For each method, we closely follow the original implementations. Specifically, for and , the context length is set to 4, and the context vectors are initialized using the pre-trained word embeddings of โa photo of aโ. is
trained with a batch size of 1 for 10 epochs using SGD, while is trained for 100 epochs with a batch size of 32. is trained with a batch size 256 using AdamWย <cit.> for 20 epochs. Cosine scheduling is used for all methods and the data preprocessing protocol consists of random re-sizing, cropping, and random horizontal flip.
Evaluation metrics. We consider the following evaluation metrics: (1) the false positive rate (FPR95) of OOD samples when the true positive rate of in-distribution samples is at 95%, (2) the area under the receiver operating characteristic curve (AUROC), and (3) ID classification accuracy (ID ACC).
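For reference, FPR95 and AUROC can be computed from per-sample scores roughly as follows, assuming higher scores indicate ID; this is an illustrative sketch rather than the exact evaluation code used here.

import numpy as np
from sklearn.metrics import roc_auc_score

def fpr_at_tpr(id_scores, ood_scores, tpr=0.95):
    # Threshold at which `tpr` of ID samples are retained; report the OOD false positive rate.
    thresh = np.quantile(id_scores, 1.0 - tpr)
    return float(np.mean(ood_scores >= thresh))

def auroc(id_scores, ood_scores):
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)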
ยง.ยง Main results and discussions
In this section, we first present novel evidence that parameter-efficient fine-tuning generally improves OOD performance over the zero-shot counterpart with a simple OOD scoring function. Next, we investigate the effects of various OOD scoring functions in the parameter-efficient fine-tuning setting. In particular, we will show that the MCM score consistently demonstrates the most promising performance compared to alternative OOD scores when coupled with prompt learning.
How does parameter-efficient fine-tuning impact OOD detection? We evaluate the OOD detection performance on various ID datasets. The results are summarized in Tableย <ref>.
We show that adapted CLIP models demonstrate nearly perfect OOD detection performance for ID datasets with fine-grained categories such as Stanford-Cars and Oxford-Pets.
Moreover, when the ID dataset contains a diverse collection of categories such as Caltech-101[Similar trends also hold for ImageNet-1k as ID.],
parameter-efficient fine-tuning still significantly improves the OOD detection performance on average compared to . In particular, yields the best performance among other adaptation methods on Caltech-101 (ID). It achieves an average FPR95 of 5.94% using S_MS, improving by 32.02% over .
While prior works suggest that parameter-efficient fine-tuning methods improve ID accuracy on few-shot datasets, our results complement their findings and show that fine-tuning also improves the OOD detection performance with proper OOD scoring functions.
Effects of OOD scoring functions. We investigate the effect of OOD scoring functions under fine-tuned vision-language models. In Tableย <ref>, we contrast the OOD detection performance using MCMย <cit.> vs. MS on Caltech-101 (ID).
Our findings suggest that: (1) S_MCM performs on par with S_MS for fine-grained ID tasks across a wide range of adaptation methods (Tableย <ref>). (2) However, when ID contains diverse categories, utilizing S_MCM generally leads to better performance compared to using S_MS for most adaptation methods (Tableย <ref>). (3) In particular, prompt learning methods such as demonstrate very competitive results with both OOD scores (an average FPR95 of 5.02% with S_MCM and 5.94% with S_MS in Tableย <ref>).
Effects of softmax scaling.
Previously, <cit.> observe that the commonly used maximum softmax score (S_MSP) is suboptimal for zero-shot OOD detection with vision-language models.
We investigate whether MSP is competitive for OOD detection with fine-tuned models.
To better illustrate the effects, we plot the score distributions for Stanford-Cars (ID) vs. SUN (OOD) in Figureย <ref> when the model is fine-tuned with , , and respectively.
For each fine-tuning method, we can clearly see that the S_MS leads to superior ID-OOD separability, while S_MSP displays significant overlapping. Quantitatively, compared to S_MSP, the average FPR95 is significantly decreased with S_MS (Tableย <ref>). Our findings highlight that directly applying MSP is not competitive for fine-tuned vision-language models.
ยง.ยง Delving into parameter-efficient fine-tuning for OOD detection
The impact of fine-tuning on feature geometry.
To better understand how fine-tuning leads to improved OOD detection performance, we examine the geometry of the feature representations. For illustration, we use the simple S_MS score as it provides an intuitive geometric interpretation. For each test input, S_MS captures the angular distance between its visual features and the closest ID prototype. Figureย <ref> shows S_MS for ID and each OOD test dataset, where radians are converted to degrees for better readability. Intuitively, one desires to learn compact ID clusters such that ID inputs are closer to the nearest ID prototypes than OOD inputs. We illustrate the effects of prompt learning in Figureย <ref>. Compared to zero-shot CLIP, and decrease the angular distance for ID inputs to the nearest concept prototype while simultaneously increasing the angular distance for OOD inputs. In particular, decreases the angular distance for ID inputs more significantly, resulting in better ID-OOD separability.
Although prompt learning methods introduce perturbations to the feature space, the overall effect is modest, with only a slight deviation of a few degrees from the pre-trained model[Similar observations can also be verified for adaptor-based methods.]. Nonetheless, these perturbations play a crucial role in enhancing both ID classification and OOD detection performance.
Exploring prompt learning for OOD detection on challenging large-scale benchmarks.
In previous sections, we show that prompt learning with both S_MS and S_MCM score display competitive performance. Next, we consider a more challenging large-scale benchmark ImageNet-1k (ID). The results in FPR95 and AUROC are shown in Figureย <ref> and Figureย <ref>. While S_MS outperforms S_MSP score, we can clearly see that S_MCM is particularly advantageous compared to the simpler S_MS baseline. In particular, S_MCM outperforms S_MS by 7.44% in FPR95 averaged across the four OOD test sets.
Moreover, with S_MCM achieves an average FPR95 of 37.74% on the benchmark, surpassing the zero-shot performance of the large backbone CLIP-L/14 model which has an FPR95 of 38.17%ย <cit.>. These results further demonstrate the effectiveness of S_MCM in CLIP-based prompt learning for challenging scenarios.
The impact of shots. We investigate the impact of shots for and with various OOD detection scores. The results are shown in Figureย <ref> and Figureย <ref>, where each point represents the average FPR95 over the four OOD test sets. We highlight two key findings. First, the OOD detection performance with both S_MS and S_MCM score improves as the number of shots increases. This trend is consistent with the ID classification accuracy reported inย <cit.>, suggesting that using a suitable OOD uncertainty score can enhance the representation quality as more data is incorporated during prompt learning. Second, the performance of S_MCM is promising even with a low number of shots, demonstrating its effectiveness in resource-constrained settings.
The impact of backbone architecture. We conduct another ablation study on the impact of model architectures. We consider CLIP with ResNet backbones (RN50, RN101) and ViT backbones (CLIP-B/32, CLIP-L/14), where the vision encoder is based on ViT-B/32 and ViT-L/14, respectively. We train with hyperparameters following the original implementation for each architecture <cit.>. We evaluate the models using the S_MSP, S_MS, and S_MCM scores and summarize the results in Table <ref> and Table <ref>. Interestingly, compared to S_MSP, S_MS brings more significant improvements under ViT backbones than ResNet backbones.
In contrast, S_MCM score consistently demonstrates competitive performance for all the architectures considered. For instance, with CLIP-B/32, S_MCM achieves an average FPR95 of 6.17%, a 20.23% improvement over the S_MSP baseline. We observe similar improvements for RN101 (18.57%) and RN50 (22%). Moreover, larger backbones lead to superior performance when fixing the OOD detection score as MCM. For example, with CLIP-L/14, the average FPR95 is improved by 11.17% compared to RN50 and 2.67% compared to CLIP-B/32. A similar trend has been shown for ID classificationย <cit.>, where larger models yield better feature representation.
ยง RELATED WORKS
Parameter-efficient fine-tuning of vision-language models. Large-scale vision-language models have shown impressive performance on various downstream tasksย <cit.>. These models learn transferable feature representations via pre-training on web-scale heterogeneous datasets. However, as downstream datasets can have a limited number of samples, adapting these large models in a parameter and data-efficient manner is crucial for effective knowledge transfer. Recent works propose various ways to tackle this challenge. ย <cit.> propose to tune a set of soft promptsย <cit.> while freezing the encoders of CLIP. ย <cit.> aims to improve the generalization ability of CoOp by introducing a meta-network that learns input-dependent tokens. ย <cit.> propose to learn prompts in an unsupervised manner while TPTย <cit.> uses test-time prompt tuning to learn adaptive prompts on the fly. Beyond textual prompt learning, <cit.> propose to tune visual prompts for CLIP-based fine-tuning. Another line of work focuses on adaptor-style fine-tuning, where instead of tuning prompts, the feature embedding is directly optimized using an adaptor moduleย <cit.>. Prior works demonstrate significant improvement over zero-shot CLIP for few-shot ID classification and OOD generalization where OOD labels are given. However, it is unclear how reliable these parameter-efficient fine-tuning methods are for OOD detection tasks. Our work bridges this gap and explores how fine-tuning impacts OOD detection for few-shot downstream datasets.
OOD detection with vision-language representations.
In recent years, an increasing number of works have been proposed that utilize textual information for visual OOD detection. <cit.> propose a scheme where pre-trained CLIP models are provided with candidate OOD labels for each target dataset, and show that the output probabilities summed over the OOD labels effectively capture OOD uncertainty.
Without the assumption of OOD labels, <cit.> propose to train a decoder based on the visual encoder of CLIP to generate candidate labels for OOD detection. However, training a high-quality decoder incurs significant computational costs and requires extra data. While both <cit.> and <cit.> focus on small-scale inputs, <cit.> propose an OOD label-free method MCM and show promising results on a wide range of large-scale datasets. However, <cit.> only investigate pre-trained CLIP models.
In contrast, our work focuses on the impact of parameter-efficient fine-tuning methods for OOD detection in few-shot downstream tasks, which has not been explored.
ยง CONCLUSION
In this paper, we provide a timely study on the impact of parameter-efficient fine-tuning methods for OOD detection with large vision-language models. We focus on the few-shot setting without access to OOD labels, which has been largely unexplored in the literature. We show that parameter-efficient fine-tuning methods can improve both ID and OOD performance when coupled with a proper OOD score, with prompt learning-based methods showing the strongest performance under the MCM score. We analyze the feature space and provide insights into the effectiveness of such methods through the lens of multi-modal concept matching. We hope our findings will inspire and motivate future research on designing reliable fine-tuning methods for large vision-language models.
ยง DATASET DETAILS
Details on ID and OOD dataset construction For ID datasets, we follow the same construction as in previous worksย <cit.>. Detailed instructions on dataset installation can be found in <https://github.com/KaiyangZhou/CoOp/blob/main/DATASETS.md>. For OOD datasets,ย <cit.> curate a collection of subsets from iNaturalistย <cit.>, SUNย <cit.>, Placesย <cit.>, and Textureย <cit.> as large-scale OOD datasets for ImageNet-1k, where the classes of the test sets do not overlap with ImageNet-1k. Detailed instructions can be found in <https://github.com/deeplearning-wisc/large_scale_ood>.
ยง ADDITIONAL RESULTS
ยง.ยง ID accuracy
While we primarily focus on the OOD detection performance of CLIP-based fine-tuning methods, we present the results of the ID accuracy for each dataset based on CLIP-B/16 in Tableย <ref> for completeness. Further results on the ID accuracy with various datasets and architectures can be seen inย <cit.>,ย <cit.>, and ย <cit.>.
ยง.ยง OOD detection performance based on visual features alone
In this section, we explore several commonly used OOD detection scores solely based on the visual branch of CLIP models. Specifically, we consider the Mahalanobis scoreย <cit.>
on the penultimate layer of the visual encoder and MSPย <cit.>, Energyย <cit.>, and KL Matchingย <cit.> scores on the logit layer after linear probing the visual encoder. The results are summarized in Tableย <ref>, based on 16-shot Caltech-101 (ID). We can see that the Mahalanobis score does not yield promising performance because 1) the feature embeddings from the visual encoder of CLIP may not follow class-conditional Gaussian distributions, 2) it is challenging to estimate the mean and especially covariance matrix when the number of samples is much smaller than the feature dimension in the few-shot setting. On the other hand, the OOD scores based on fine-tuned logit layer result in worse performance compared to the MCM score. One major reason is that fine-tuning CLIP in the few-shot setting is prone to overfitting the downstream ID dataset, making the model less reliable. This further highlights the importance of choosing OOD detection scores fitted to parameter-efficient fine-tuning methods.
ยง.ยง Additional results on ImageNet-1k
In this section, we consider two additional OOD test sets ImageNet-Oย <cit.> and OpenImage-Oย <cit.> for ImageNet-1k (ID). OpenImage-O is a subset curated from the test set of OpenImage-V3ย <cit.> containing a diverse set of categories. ImageNet-O is a challenging OOD dataset that contains naturally adversarial examples for ImageNet-1k.
The results are shown in Tableย <ref>. The model (CLIP-B/16) is trained with CoOp. We can see that: 1) The performance on ImageNet-O is generally worse than the rest of OOD test sets (iNaturalist, Textures, SUN, Places) in Sectionย <ref>, suggesting that this task remains challenging in the context of few-shot prompt learning. 2) MCM score still performs the best compared to MS and MSP on both OOD test sets, consistent with our previous observations, which further highlights the importance of softmax and temperature scaling for OOD detection with fine-tuning.
ยง.ยง Alternative OOD scores
In this section, we investigate the performance with several alternative OOD scoring functions based on the cosine similarities of input x with the k-th label s_k(x), kโ{1,2,...,K} (defined in Sectionย <ref>). Specifically, we consider the energy and the KL matching score for each adaptation method and summarize the results based on Caltech-101 (ID) in Tableย <ref>. We observe that 1) using the energy score, all adaptation methods significantly enhance the performance over the zero-shot baseline (ZOCLIP). 2) the general performance vastly improves when utilizing the KL Matching score. However, even the highest achieved performance (FPR95 at 7.91 with CoCoOp) falls short when compared to the MCM score (FPR95 at 5.02 with CoCoOp).
| http://arxiv.org/abs/2306.09382v3 | 20230615125904 | Sound Demixing Challenge 2023 Music Demixing Track Technical Report: TFC-TDF-UNet v3 | [ "Minseok Kim", "Jun Hyung Lee", "Soonyoung Jung" ] | cs.SD | [ "cs.SD", "cs.LG", "cs.MM", "eess.AS" ] |
In this report, we present our award-winning solutions for the Music Demixing Track of Sound Demixing Challenge 2023.
First, we propose TFC-TDF-UNet v3, a time-efficient music source separation model that achieves state-of-the-art results on the MUSDB benchmark. We then give full details regarding our solutions for each Leaderboard, including a loss masking approach for noise-robust training. Code for reproducing model training and final submissions is available at <github.com/kuielab/sdx23>.
Music Source Separation, Robustness, Machine Learning Challenge
ยง INTRODUCTION
This is a technical report for our solutions for the Music Demixing Track of Sound Demixing Challenge 2023[1] (MDX23). In addition to the standard music source separation (MSS) task conducted in Music Demixing Challenge 2021<cit.> (MDX21), MDX23 introduced additional challenges: robustness to label-noise and bleeding.
[1]www.aicrowd.com/challenges/sound-demixing-challenge-2023
Label-noise and bleeding are frequently encountered issues in music.
Label-noise occurs from erratic instrument groupings during automatic metadata-based stem generation in music production.
Bleeding takes place during music recording sessions when unintended sounds overlap with other instruments.
This poses a challenge for training source separation models since sources (stem files) may contain instruments that do not belong to the particular class, which requires models to be robust to these errors at training time. Our goal was to enhance the quality of music source separation by addressing these challenges that arise from label-noise and bleeding, in addition to the standard MSS task.
Challenge submissions are ranked into three categories: Leaderboard A for robustness to label-noise, Leaderboard B for bleeding, and Leaderboard C for standard MSS. Furthermore, Leaderboards A and B restrict models to be trained only on specific datasets provided by Moises (namely SDXDB23_labelnoise and SDXDB23_bleeding), whereas Leaderboard C does not pose any limitation on training data.
We first introduce TFC-TDF-UNet v3, the base model architecture for all submissions. Then we describe our approach for each Leaderboard.
For all experimental results, we use Source-to-Distortion Ratio as the evaluation metric. Throughout the report, โSDR" will refer to the version used in MDX23 while the other definition of SDR<cit.> will be referred to as โcSDR" (chunk-level SDR).
ยง TFC-TDF-UNET V3
For MDX23 we build upon TFC-TDF-UNet v2, the spectro-gram-based component of KUIELab-MDX-Net<cit.> (award-winning model of MDX21). Our current version, TFC-TDF-UNet v3, achieved top ranks in all Leaderboards.
ยง.ยง Improvements
Here we provide a list of changes that were made to the model structure of TFC-TDF-UNet v2. Our goal was to improve SDR without gaining too much inference time, taking into account the time limit of the MDX23 evaluation system. Changes to v2 that are not listed here (which can be found in our submission code) had negligible effects on performance.
* Change overall structure to a ResUnet<cit.>-like structure and add a TDF block<cit.> to each Residual Block.
* Use Channel-wise Sub-bands<cit.> together with larger frequency dimensions.
* Train one multi-target model instead of training a single-target model for each instrument class.
* Use Instance Normalization and GELU instead of Batch Normalization and ReLU.
* Use waveform L2 loss instead of waveform L1.
* Add an โinput skip-connection"; concatenate the input spectrogram right before the final convolution (this was effective for multi-source models).
Finally, for each Leaderboard, we selected the optimal model hyperparameters using evaluation results on the challenge public set. The final configurations are in Table <ref>.
[2]github.com/crlandsc/Music-Demixing-with-Band-Split-RNN
ยง.ยง Evaluation
For a quantitative comparison with v2 as well as state-of-the-art models, we report the performance of TFC-TDF-UNet v3 on the MUSDB18-HQ benchmark<cit.> (Table <ref>).
We trained an additional model for MUSDB with the hyperparameters described in Table <ref>.
For data augmentation we applied pitch-shift using Soundstretch[3] (semitones โ{-3,-2,-1,0,1,2,3}) and randomly mixed sources from different songs (remixing)<cit.>.
The v3 model was trained for 47 epochs (we define โepoch" as 10k steps with batch size 8), which took 3 days using two RTX 3090.
For early stopping, we stopped training when SDR did not improve by at least 0.05dB within 10 epochs.
[3]www.surina.net/soundtouch/soundstretch.html
For a better comparison with v2, we also report results for v3 without โoverlap-addโ and instead uses the inference method of v2 (trim and concatenate). This made v3 roughly 1.2 times faster than v2 on the MDX23 evaluation server. Even with this lightweight structure, v3 improves v2 by a significant 1.61dB cSDR for โdrums" and 0.92dB on average. Trading off speed for accuracy with overlap-add, TFC-TDF-UNet v3 achieves the highest average SDR/cSDR over all instruments.
ยง LEADERBOARD A&B
In this section, we present our approach for robust training and details regarding our solutions for Leaderboard A (3rd place) and Leaderboard B (1st place).
ยง.ยง Data
For both Leaderboards, all 203 tracks of the Moises datasets were used for training (SDXDB23_labelnoise for Leaderboard A and SDXDB23_bleeding for Leaderboard B). We did not hold out a validation split; doing validation on noisy data did not generalize well. Instead, for early stopping and model selection, we used the challenge public set results where we submitted every 25k steps until mean SDR stopped improving. For data augmentation we used remixing, with no pitch-shift/time-stretch.
An important preliminary for Leaderboards A and B was understanding what kind of noise label-noise and bleeding produced. By definition, 1) both corruptions add instrument sounds belonging to other classes and 2) for data with label-noise the loudness of noise would be equal to that of the clean source, while for bleeding the loudness would be lower. For a closer look at how these corruptions were actually simulated, we also listened to several tracks and found that label-noise adds just one instrument belonging to another class, while bleeding seemed to add all other instruments.
ยง.ยง Noise-Robust Training Loss
Since the Moises datasets were corrupted in a way so that manual cleaning would not be possible,
the main challenge was to design a robust training algorithm for source separation. Our noise-robust training loss, which is basically a loss masking (truncation) method, was clearly effective for this task as shown in Table <ref>. We now describe our method for each Leaderboard.
ยง.ยง.ยง Leaderboard A: Label-noise
If we randomly chunk a noisy target source at training time, each chunk will have a different amount (e.g., duration, loudness) of label-noise. We build on the fact that some chunks can be clean, and that these clean chunks can be identified through their training loss. Intuitively, target source chunks with more noise are likely to produce higher training loss since they lack instrument-related patterns such as timbre.
To reduce the negative effects of these noisy chunks and train mostly on clean chunks, we use a loss masking scheme where for each training batch, elements with high loss were discarded before weight update. Specifically, for each batch and each class, we masked out per-element losses greater than the q-quantile and left q as a hyperparameter. For our final submissions, we used a batch size of 6 and discarded 4 chunks per batch and class.
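A minimal sketch of this batch-wise masking is shown below, assuming per-chunk losses for a single source class have already been computed; the quantile q is a placeholder (roughly 2/6 to mimic keeping 2 of 6 chunks), not necessarily the exact value we used.

import torch

def masked_loss(per_chunk_loss, q=0.33):
    # per_chunk_loss: (batch,) waveform L2 losses for one source class.
    # Discard chunks whose loss exceeds the q-quantile (likely noisy targets).
    cutoff = torch.quantile(per_chunk_loss, q)
    mask = per_chunk_loss <= cutoff
    if mask.sum() == 0:                      # degenerate case: keep everything
        return per_chunk_loss.mean()
    return per_chunk_loss[mask].mean()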
ยง.ยง.ยง Leaderboard B: Bleeding
As discussed in Section <ref>, there were more erroneous instruments in bleeding sources than in sources with label-noise, which means the amount of noise is more constant throughout the playing time. Consequently, clean random chunks from bleeding data would be rare and harder to obtain. Based on this observation, we used a more fine-grained masking scheme where we masked along the temporal dimension as well as the batch dimension.
But as shown in Table <ref>, the optimal q value for Leaderboard B models was 0.93, which means only 7% of the temporal bins were discarded. This may have resulted from the difference in the loudness of noise; compared to label-noise, bleeding was not as harmful and filtering clean chunks was not as important (this can also be inferred from Table <ref> where modelB outperforms modelA when using regular L2 loss).
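The temporal variant can be sketched analogously, assuming per-time-bin losses for one source class; q=0.93 follows the value reported above, while the exact binning is an implementation detail left unspecified here.

import torch

def temporally_masked_loss(loss_bins, q=0.93):
    # loss_bins: (batch, time) per-bin L2 losses for one source class.
    # Keep only bins whose loss is below the q-quantile over the batch and time axes.
    cutoff = torch.quantile(loss_bins.flatten(), q)
    mask = loss_bins <= cutoff
    return loss_bins[mask].mean()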
ยง.ยง Model
For each Leaderboard, the final submission is an ensemble of three TFC-TDF-UNet v3 models trained with the noise-robust training loss of Section <ref>. Each of the three models has the same configurations (following Table <ref>) but is trained with different random seeds.
ยง LEADERBOARD C
We present our approach for the standard MSS task where any training data can be used. Our solution ranked 4th place.
ยง.ยง Data
We used all 150 songs of MUSDB18-HQ for training. As was done for Leaderboards A and B, there was no validation split and submitted every 100k steps instead. For data augmentation we applied pitch-shift (semitones โ{-2,-1,0,1,2}) and time-stretch (acceleration % โ{-20,-10,0,10,20}) as well as remixing.
ยง.ยง Method
The final submission is an ensemble of five models: Hybrid Demucs<cit.>, Hybrid Transformer Demucs<cit.> and three TFC-TDF-UNet v3 models. For the Demucs models, we used pretrained weights from the official Github repository[4][4]github.com/facebookresearch/demucs (hdemucs_mmi and htdemucs_ft) each with 2 โshifts" and 50% overlap. For the TFC-TDF-UNet v3 models, we used the models specified in Table <ref>. model1 and model2 are multi-source v3 models, whereas model3 is a single-source model for the โvocals" class that applies high-frequency truncation<cit.>.
SDR performance for each model are shown in Table <ref>. Blending<cit.> weights were chosen according to these evaluation results.
IEEEbib
| http://arxiv.org/abs/2306.04138v1 | 20230607042359 | Weighted trajectory analysis and application to clinical outcome assessment | [ "Utkarsh Chauhan", "Kaiqiong Zhao", "John Walker", "John R. Mackey" ] | stat.ME | [ "stat.ME" ] |
Utkarsh Chauhan^1, Kaiqiong Zhao^2, John Walker^3, John R. Mackey^3*
^1 Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
^2 Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Canada
^3 Division of Medical Oncology, University of Alberta, Edmonton, Canada
*Corresponding author: John R. Mackey, Division of Medical Oncology, University of Alberta, Cross Cancer Institute, 11560 University Ave NW, Edmonton, AB, T6G 1Z2. [email protected]
Summary: The Kaplan-Meier estimator (KM) is widely used in medical research to estimate the survival function from lifetime data. KM is a powerful tool to evaluate clinical trials due to simple computational requirements, a logrank hypothesis test, and the ability to censor patients. However, KM has several constraints and fails to generalize to ordinal variables of clinical interest such as toxicity and ECOG performance. We devised Weighted Trajectory Analysis (WTA) to combine the advantages of KM with the ability to visualize and compare treatment groups for ordinal variables and fluctuating outcomes. To assess statistical significance, we developed a new hypothesis test analogous to the logrank test. We demonstrate the functionality of WTA through 1000-fold clinical trial simulations of unique stochastic models of chemotherapy toxicity and schizophrenia disease course. At increments of sample size and hazard ratio, we compare the performance of WTA to KM and Generalized Estimating Equations (GEE). WTA generally required half the sample size to achieve comparable power to KM; advantages over GEE include its robust non-parametric approach and summary plot. We also apply WTA to real clinical data: the toxicity outcomes of melanoma patients receiving immunotherapy and the disease progression of patients with metastatic breast cancer receiving ramucirumab. The application of WTA demonstrates that using traditional methods such as KM can lead to both Type I and II errors by failing to model illness trajectory. This article outlines a novel method for clinical outcome assessment that extends the advantages of Kaplan-Meier estimates to ordinal outcome variables.
Weighted trajectory analysis and application to clinical outcome assessment
===========================================================================
ยง INTRODUCTION
The Kaplan-Meier estimator (KM)<cit.>, also referred to as the product-limit estimator, is widely used in medical research to estimate the survival function from lifetime data. KM is a non-parametric approach for time-to-event data that are often not normally distributed. To generate the KM estimates, time-to-event and the status of each subject at the last observed timepoint are needed.<cit.> The event of interest may be death from any cause when we are determining overall survival, and death due to specific cause for cause-specific survival. KM estimates are frequently used in oncology and other medical disciplines. KM is used to compare two or more treatment arms in clinical trials using the logrank test.<cit.> Patients that exit the trial without having experienced the event of interest at last follow-up are censored and omitted from further estimates.
The relatively simple computational requirements for KM provide a powerful method to estimate time-to-event data. However, the advantages of KM in clinical research cannot be extended to important ordinal outcomes such as toxicity grade and Eastern Cooperative Oncology Group (ECOG) performance status.<cit.> Ordinal outcome variables are ubiquitous in medicine to measure patient health status over time, but no statistical methods exist that combine censoring, graphical comparison of trajectories, and hypothesis testing for these variables. Often, ordinal clinical outcomes are collapsed to binary definitions to facilitate the use of KM; this causes information loss, introduces an arbitrary cutpoint, and may lead to inaccurate conclusions. New methods are required to map the trajectory of ordinal outcomes and compare treatment arms in clinical trials.
The KM method has three conditions that limit its generalizability to other variables of interest in clinical research:
* Binary Condition
The event must be binary in nature or coded into binary form (0 for non-occurrence, 1 for occurrence). It is not possible to capture grades or stages of severity. For example, death is naturally binary (0 for alive, 1 for dead), but an outcome variable such as toxicity (measured in grades from 0 to 4) must be coded into binary form by setting a threshold for event occurrence, such as arbitrarily defining an event as any toxicity exceeding grade 2.
* Descent Condition
Event occurrence always produces a drop in the KM curve (a consequence of plotting probability). It is not possible to track the trajectory of conditions that can both improve and worsen over time. For example, patients experiencing rising toxicity due to chemotherapy require additional interventions to tolerate therapy. The interventions may initially improve symptoms and reduce toxicity grade, but fail to sustain benefits in subsequent treatment cycles. In the KM estimate following the above example, this complex trajectory is simplified to an event occurrence the first time toxicity increases beyond grade 2.
* Finality Condition
Once a patient experiences the event of interest, they are omitted from any subsequent analysis.
Weighted Trajectory Analysis (WTA) is a method that combines the simplicity and practicality of KM with the ability to compare treatment groups for ordinal variables and bidirectional outcomes. Trajectories are presented using plots that track health status for treatment arms over time. WTA permits the censoring of patients that exit the study. To determine statistical significance, we developed a โweightedโ logrank test.
In section 2, we describe the methodology and theory of KM and WTA along with their respective hypothesis tests and provide a computational approach to WTA robust to smaller datasets. We also outline GEE longitudinal analysis prior to its use as an additional comparator to WTA in subsequent simulation studies. In sections 3 and 4, we describe unique simulation studies with chemotherapy toxicity grade and schizophrenia symptom stage as the variables of interest, respectively. In section 5, we apply WTA to real clinical datasets: first with the toxicity outcomes of melanoma patients receiving different immunotherapy protocols, and second with tumour response outcomes of patients with metastatic breast cancer receiving an anti-angiogenic drug. Finally, we discuss the results and implications of both our simulations and real-world analyses in Section 6.
ยง METHODOLOGY AND THEORY
ยง.ยง Kaplan-Meier Estimator
The goal of the Kaplan-Meier estimator (KM) is to estimate a population survival curve from a sample with incomplete time-to-event observations.<cit.> โSurvivalโ times need not relate to death, but can refer to any event of interest, such as local recurrence or stroke. The event in this instance is a binary variable, meaning that samples have either experienced the event up to a given time point or not. The times to failure on each subject are thus characterized by two variables: (1) serial time, (2) outcome of event occurrence or censorship.
Suppose t_1 < t_2 < ... < t_K are the K distinct failure times observed in the sample. We write n_j and d_j as the number of patients at risk and the number of events at time t_j, respectively, where j = 1, 2, ..., K. Note that patients who are lost to follow-up or withdraw from the trial before experiencing the event of interest (i.e., censored samples) are taken out of the risk set at the subsequent time points.
The KM estimate at time t_k, S(t_k), is calculated as the cumulative survival probability up to and including time t_k,
S(t_k) = ∏_{j=1}^{k} (1 - d_j/n_j),
where S(t) = 1 for t < t_1. The Kaplan-Meier curve is plotted as a stepwise function representing the change in survival probability over time.
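A minimal sketch of this computation is given below, assuming times-to-event and event indicators (1 = event, 0 = censored) are supplied as arrays; it returns the stepwise estimate at each distinct failure time.

import numpy as np

def kaplan_meier(times, events):
    # times: serial times; events: 1 if the event occurred at that time, 0 if censored.
    times, events = np.asarray(times, float), np.asarray(events, int)
    fail_times = np.unique(times[events == 1])
    surv, estimates = 1.0, []
    for t in fail_times:
        n_at_risk = np.sum(times >= t)             # n_j
        d = np.sum((times == t) & (events == 1))   # d_j
        surv *= 1.0 - d / n_at_risk
        estimates.append((t, surv))
    return estimates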
To compare treatment arms, multiple survival functions are plotted together, enabling the comparison of differences in survival experience between groups. Treatment options can be compared using metrics such as median survival and hazard ratios. The logrank test is used to assess if the differences are statistically significant: this test and its modification for WTA are discussed in Sections 2.4 and 2.5, respectively.
ยง.ยง Weighted Trajectory Analysis
Weighted Trajectory Analysis (WTA) is a modification of KM that provides the following advantages:
* Assesses outcomes defined by various ordinal grades (or stages) of clinical severity;
* Permits continued analysis of participants following changes in the variable of interest;
* Demonstrates the ability of an intervention to both prevent the exacerbation of outcomes and improve recovery, and the time course of these effects.
Several properties of KM curves crucial for clinical trial evaluation are incorporated within WTA. The test is non-parametric and provides the ability to censor patients that withdraw or are lost to follow-up. Outcomes for various treatment arms can be assessed using a summary plot that depicts all patients in serial time. The test for significance is a modification of the logrank test by Peto et al., which is the standard method for comparing KM survival curves.<cit.> The logrank test is described in Section 2.4, and the weighted logrank test follows in Section 2.5. As the analytical form of the test is a conservative estimate that operates under the normal approximation, a more computationally intensive simulated approach is outlined in Section 2.6.
In WTA, an event is a change in grade or stage, or more generally, a severity score. The severity score must be ordinal but can have an arbitrary range of severity that depends on the variable of interest (for example, I-IV for heart failure class<cit.>). Unlike KM, an event does not omit the patient from subsequent analysis. Both increases and decreases in variables of interest are captured as events. Participants can enter trajectory analysis at any starting stage, though inferences on trial results are most powerful if treatment arms are randomized to the same median starting stage.
Redefining the event allows clinical assessment of the overall trajectory of a group of patients, mapping both deterioration and improvement in health status over time. Graphically, the staircase representing survival in the Kaplan-Meier estimator always descends. The WTA staircase can both descend and rise over time to capture the dynamics of a patientโs clinical status.
Variables of interest include any ordinal outcome variables with a defined, finite range. Examples include ECOG Performance<cit.> and Common Terminology Criteria for Adverse Events (CTCAE) toxicity scores,<cit.> both with ordinal scoring that ranges from 0 to 5.
For this reason, a binary variable such as death (0, alive vs. 1, dead) is not an appropriate variable of interest. In this circumstance, the range of the ordinal variable is set to 1, and the modified significance test reduces to the standard logrank test. Conversely, ECOG performance is an appropriate variable of interest, given that it is ordinal with a defined range and can both improve and worsen over time. In WTA, a higher score in the variable of interest generally represents poorer health status. Variables that follow the opposite trend can be adapted to WTA by simply reversing the polarity of the ordinal scale.
Censoring in WTA is similar to KM. Patient loss to follow-up and withdrawal requires censoring, but patients may experience several events prior to being censored. Censoring is represented on the plot using a Wye-symbol (). The number of patients remaining within the study is tabulated below the plot at evenly spaced time intervals for each treatment arm.
ยง.ยง Mathematical Overview of Weighted Trajectory Analysis
Weighted Trajectory Analysis plots the health status of treatment arms as a function of time. Time values must be discrete, but can correspond to days, weeks, months, or any chosen interval. For each time value on the x-axis, there is a corresponding score on the y-axis: a weighted health status. The higher the weighted health status, the healthier the group. This score is scaled by the initial size of the treatment arm to facilitate simple comparison of groups with unequal size.
Consider a group of n patients with toxicity grade ranging from Grade 0 (asymptomatic/mild toxicity) to Grade 5 (death related to an adverse event). The weighted health status at time point j is denoted by U_j, where j = 0, 1, ..., z. For each treatment arm, U_j has a maximum value of 1 and a minimum value of 0. Suppose we begin a trial with all patients free of disease burden at Grade 0: U_j = U_0 = 1. A trial with the highest possible morbidity requires all patients to experience a Grade 5 toxicity (death): at this point, U_j will drop to 0.
We let g_i,j represent the severity score for the ith patient at time j, i = 1, ..., n. The severity score is identical to their ordinal score for the variable of interest. If the range of the ordinal variable of interest does not have 0 as one extreme end, all values must be shifted to set 0 as the starting score (the polarity may also be reversed so that 0 represents peak health status). All patients begin the trial at Grade 0, which reflects g_i,0 = 0. If a patient labeled with index 50 has a Grade 3 injury on the seventh time point, their severity score g_50,7 = 3.
Scaling for the WTA curve is performed through normalizing to a minimum of 0 and a maximum of 1 by using the initial weight of the treatment arm. This weight, w_0, is the product of starting patient count n_0 and the range of the ordinal variable of interest r:
w_0 = n_0r.
Suppose the initial size of the group, n_0, is 100 patients. The range r in the ordinal variable (toxicity grade) is 5. Then, w_0 is 500. The value of the weight changes over time due to patient censoring reflected by a drop in n_j. The general equation for w_j is provided in Section 2.5 and is used in the weighted logrank test. However, for scaling and plotting U, only the initial weight of a given treatment arm, w_0, is required.
The initial value U_0 is a perfect score of 1.
U_0 = 1
Subsequent values of U deviate based on observed event occurrences d_j. We define event occurrence as a change in the variable of interest for a given patient i at time j:
d_i,j = g_i,j+1 - g_i,j.
Therefore, the observed event score for a group of n patients is defined as
d_j = ∑_{i=1}^{n} d_{i,j} = ∑_{i=1}^{n} (g_{i,j+1} - g_{i,j}),
with patients censored following time j not contributing to the sum. Events and resulting changes in treatment arm trajectory are always scaled by w_0. Using this event definition, U_j can be calculated iteratively from U_0:
U_j+1 = U_j - d_j/w_0, j = 0, 1, 2...
Alternatively, U_j for any given time point can be computed as follows:
U_j = 1 - (1/w_0) ∑_{k=0}^{j-1} d_k,   j ∈ ℤ^+.
Values for d_j at a given time point can be negative, and these represent cases in which the treatment arm improved in overall health status. From Equations <ref> and <ref> it follows that a negative value of d_j produces an increase in the weighted health status U_j.
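The sketch below illustrates this computation, assuming severity scores are stored as a patients-by-timepoints matrix with NaN entries after censoring and all patients starting at score 0; the variable names are illustrative.

import numpy as np

def weighted_trajectory(scores, r):
    # scores: (n_patients, n_timepoints) severity scores g_{i,j}; NaN after censoring.
    # r: range of the ordinal variable of interest.
    w0 = scores.shape[0] * r                       # initial weight of the arm
    U = [1.0]
    for j in range(scores.shape[1] - 1):
        d_ij = scores[:, j + 1] - scores[:, j]     # per-patient changes d_{i,j}
        d_j = np.nansum(d_ij)                      # censored patients contribute nothing
        U.append(U[-1] - d_j / w0)
    return np.array(U)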
ยง.ยง The Logrank Test
We present here the standard formula of the log-rank test statistic.
* Let t_1 < t_2 < … < t_K be the K distinct failure times observed in the data.
* n_j^A is the number of patients in group A at risk at t_j, where j=1,2,…,K.
* n_j^B is the number of patients in group B at risk at t_j, where j=1,2,…,K.
* n_j = n_j^A + n_j^B is the total number of patients at risk at t_j, where j=1,2,…,K.
* d_j^A is the number of patients who experienced the (binary) event in group A at t_j.
* d_j^B is the number of patients who experienced the (binary) event in group B at t_j.
* d_j=d_j^A+d_j^B is the total number of patients who experienced the (binary) event at t_j.
* S^A(t) and S^B(t) are the survival functions for group A and B, respectively.
The information at t_j can be summarized in a 2ร2 table.
Under the null hypothesis H_0:S^A(t)=S^B(t),d_j^A follows a hypergeometric distribution conditional on the margins (n_j^A,n_j^B,d_j,n_j-d_j). The expectation and variance of d_j^A take the form
e_j^A = E(d_j^A) = n_j^A d_j / n_j
V_j = Var(d_j^A) = n_j^A n_j^B (n_j - d_j) d_j / [n_j^2 (n_j - 1)].
Define the observed aggregated number of failures in group A as
O^A = ∑_{j=1}^{K} d_j^A.
The expected aggregated number of failures in group A is thus
E(O^A) = E^A = ∑_{j=1}^{K} e_j^A.
The contributions from each t_j are independent and thus the variance of O^A is
Var(O^A) = V = ∑_{j=1}^{K} V_j.
Under the null hypothesis H_0:S^A(t)=S^B(t), the log-rank test statistic
Z = (O^A - E^A) / √(V) = ∑_{j=1}^{K} (d_j^A - e_j^A) / √(∑_{j=1}^{K} V_j) ∼ N(0,1).
This is an asymptotic result derived from the central limit theorem (CLT). Note that replacing O^A and E^A with
O^B and E^B leads to the exact same p-value.
The extension to ordinal events in the following section is based on this Z test statistic.
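For reference, the two-group logrank Z statistic can be computed roughly as follows from times-to-event and event indicators; this is an illustrative sketch, not the implementation used in our simulations.

import numpy as np
from scipy.stats import norm

def logrank_test(times_a, events_a, times_b, events_b):
    # times_*: times to event or censoring; events_*: 1 = event, 0 = censored.
    times = np.concatenate([times_a, times_b]).astype(float)
    events = np.concatenate([events_a, events_b]).astype(int)
    in_a = np.concatenate([np.ones(len(times_a)), np.zeros(len(times_b))]).astype(bool)
    O_A = E_A = V = 0.0
    for t in np.unique(times[events == 1]):                # distinct failure times
        at_risk = times >= t
        n_j, n_a = at_risk.sum(), (at_risk & in_a).sum()   # n_j and n_j^A
        d_j = ((times == t) & (events == 1)).sum()         # events at t
        d_a = ((times == t) & (events == 1) & in_a).sum()  # events in group A at t
        O_A += d_a
        E_A += n_a * d_j / n_j
        if n_j > 1:
            V += n_a * (n_j - n_a) * d_j * (n_j - d_j) / (n_j ** 2 * (n_j - 1))
    z = (O_A - E_A) / np.sqrt(V)
    return z, 2.0 * (1.0 - norm.cdf(abs(z)))               # two-sided p-value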
ยง.ยง The Weighted Logrank Test - Analytical Method
We define an event as a change in the severity score of a given condition. Let g_i,j^A be the severity score for
the i-th individual in group A at time t_j, where i=1,2,…,n_j^A and j=1,2,…,K.
Define d_i,j^A as the change in the severity score from time t_j to t_{j+1}:
d_i,j^A = g_{i,j+1}^A - g_{i,j}^A,   j=1,2,…,K-1.
Without loss of generality, we consider a severity score ranging from Stage 0 to Stage 4. As a result, d_i,j^A
has a total of 9 possible values -4,-3,-2,-1,0,1,2,3,4, if the observation of this person is uncensored at
t_j+1.
* Let L be the total number of possible values taken by the change variable d_i,j^A. When a severity score
takes values from 0 to 4, L=9.
* Let W be the ordered non-decreasing list of the L possible change values. When a severity score takes
values from 0 to 4, W=(-4,-3,-2,-1,0,1,2,3,4).
* Let w_l be the l-th element of W.
* Let d_j^A,l be the number of subjects in group A at t_j whose change value equals w_l:
d_j^A,l = ∑_{i=1}^{n_j^A} I(d_i,j^A = w_l),
where I(d_i,j^A = w_l) = 1 when d_i,j^A = w_l and 0 otherwise.
* Let d_j^B,l be the number of subjects in group B at t_j whose change values equal to w_l.
* d_j^(l)=d_j^A,l+d_j^B,l is the total number of patients whose change values equal to w_l
at t_j.
The information at t_j, j=1,2,…,K-1, can be summarized in a 2 x 10 table:
Under the null hypothesis H_0: S^A(t)=S^B(t), (d_j^A,1, d_j^A,2, d_j^A,3, …, d_j^A,L) follows a multivariate hypergeometric distribution conditional on the margins (n_j^A, n_j^B, {d_j^(l)}_{l=1}^{L}, n_j - ∑_l d_j^(l)).
We can show that the mean and variance of d_j^A,l, where l ∈ {1,2,…,L}, are
e_j^A,l ≡ E(d_j^A,l) = n_j^A d_j^(l) / n_j
σ_j,ll ≡ Var(d_j^A,l) = n_j^A n_j^B (n_j - d_j^(l)) d_j^(l) / [n_j^2 (n_j - 1)].
For distinct l, q ∈ {1,2,…,L}, we can derive the covariance of d_j^A,l and d_j^A,q
σ_j,lq ≡ Cov(d_j^A,l, d_j^A,q) = -n_j^A n_j^B d_j^(l) d_j^(q) / [n_j^2 (n_j - 1)],   l ≠ q.
These moment results are derived from the definition of multivariate hypergeometric distribution. To account for the direction and the magnitude of the change variable, we define the observed weighted changes as
O_j^w = ∑_{l=1}^{L} w_l d_j^A,l.
When a severity score is defined as a range from 0 to 4, the weights w_l take the values (-4,-3,-2,-1,0,1,2,3,4) for l=1,2,…,9. The expected value of O_j^w can be written as
E_j^w = ∑_{l=1}^{L} w_l e_j^A,l.
When the event is coded as a binary outcome, the weighted change O_j^w reduces to d_j^A and E_j^w reduces to the e_j^A defined above. Using the results in (1) and (2), we can write the variance of the weighted score O_j^w as
V_j^w = Var(O_j^w) = ∑_{l=1}^{L} ∑_{q=1}^{L} w_l w_q σ_j,lq,
where σ_j,lq is defined in equation (2) when l ≠ q and in equation (1) when l = q.
Similarly, we can aggregate the observed/expected weighted changes across all K time points and define a Z test statistic. The weighted log-rank test statistic is defined as
Z = ∑_{j=1}^{K} (O_j^w - E_j^w) / √(∑_{j=1}^{K} V_j^w),
which follows the standard normal distribution N(0,1), under the null hypothesis
H_0:S^A(t)=S^B(t). Equivalently,
Z^2 = [∑_{j=1}^{K} (O_j^w - E_j^w)]^2 / ∑_{j=1}^{K} V_j^w ∼ χ_1^2,
i.e., the square of the Z test statistic follows a chi-square distribution with 1 degree of freedom.
The asymptotic result in equation (3) is based on the assumption that the total number of distinct failure times recorded in the pooled samples (i.e., K) is sufficiently large. For smaller trials with shorter follow-up periods, this analytical method can be conservative, with rejection rates under the null falling below the designated significance level (and a corresponding loss of power), as demonstrated in Section 3.3. To complement the analytical method, we also propose a bootstrap-based approach for calculating p-values, which, despite requiring more computational effort, remains accurate and sensitive regardless of trial size.
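An illustrative sketch of the analytical weighted logrank statistic is given below, assuming uncensored severity-score matrices for the two groups and the ordered list W of possible change values; it is a simplified sketch rather than our full implementation, which also handles censoring.

import numpy as np
from scipy.stats import norm

def weighted_logrank(scores_a, scores_b, change_values):
    # scores_a, scores_b: (n_patients, n_timepoints) severity scores for groups A and B.
    # change_values: ordered list W of possible change values, e.g. [-4, ..., 4].
    w = np.asarray(change_values, dtype=float)
    num = var = 0.0
    for j in range(scores_a.shape[1] - 1):
        chg_a = scores_a[:, j + 1] - scores_a[:, j]        # changes d_{i,j}^A
        chg_b = scores_b[:, j + 1] - scores_b[:, j]        # changes d_{i,j}^B
        nA, nB = len(chg_a), len(chg_b)
        n = nA + nB
        d_all = np.array([(np.concatenate([chg_a, chg_b]) == v).sum() for v in w])
        d_A = np.array([(chg_a == v).sum() for v in w])
        O_w = float(w @ d_A)                               # observed weighted changes
        E_w = float(w @ (nA * d_all / n))                  # expected under the null
        V_w = 0.0                                          # multivariate hypergeometric variance
        for l in range(len(w)):
            for q in range(len(w)):
                if l == q:
                    cov = nA * nB * (n - d_all[l]) * d_all[l] / (n ** 2 * (n - 1))
                else:
                    cov = -nA * nB * d_all[l] * d_all[q] / (n ** 2 * (n - 1))
                V_w += w[l] * w[q] * cov
        num += O_w - E_w
        var += V_w
    z = num / np.sqrt(var)
    return z, 2.0 * (1.0 - norm.cdf(abs(z)))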
ยง.ยง The Weighted Logrank Test - Computational Method
A completed trial can be analyzed either instantly by the analytical approach or through rigorous simulations in a more sensitive computational approach. Compared to the design phase, the advantage of a completed trial is the wealth of collected data. Multistate Markov modelling (MSM), available in the msm package in R, provides a powerful method to compute transition intensities of an inputted dataset through maximum likelihood estimation. The steps to analyze a complete trial are as follows:
* Determine transition probabilities using msm to load into n-fold simulations blind to treatment assignment
* Generate a distribution of the null hypothesis using the test statistic, equation (<ref>)
* Calculate a test statistic from the clinical data and then determine a p-value by comparison to distribution of the null
Software with built-in tools for both the analytical and computational methods, intended to streamline the use of WTA for investigators, is in production.
ยง.ยง GEE Longitudinal Analysis
The Generalized Estimating Equation (GEE) approach (Liang and Zeger 1986) has been a widely used regression-based tool for analyzing longitudinal data.<cit.> We compare the performance of our weighted trajectory approach to the GEE method. In GEE, we model the severity score as the outcome and treatment group as the covariate. We specify an autoregressive correlation structure to account for the dependence among the severity measures from the same patient. We use an identity mean-variance link function and leave the scale parameter unspecified. The significance test for the association between patients' severity score and treatment status is carried out using a Wald test statistic with the sandwich variance estimator.
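As a concrete sketch of this specification (column names and data layout are illustrative; the original analysis may have used different software), the model can be fit with the GEE implementation in Python's statsmodels:

import statsmodels.api as sm

def gee_wald_test(df):
    """Fit the GEE model described above on long-format data.

    df is assumed to have one row per patient-visit with columns
    'score' (ordinal severity), 'treatment' (0/1), 'id' (patient),
    and 'day' (measurement time), ordered by time within patient.
    """
    model = sm.GEE(
        df["score"],
        sm.add_constant(df[["treatment"]]),
        groups=df["id"],
        time=df["day"],
        cov_struct=sm.cov_struct.Autoregressive(),  # AR(1) working correlation
        family=sm.families.Gaussian(),              # identity mean-variance link
    )
    res = model.fit()  # robust (sandwich) standard errors by default
    return res.params["treatment"], res.pvalues["treatment"]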
A major advantage of GEE over likelihood-based methods (e.g., multi-state models) is that the joint distribution of longitudinal outcomes does not have to be fully specified. Therefore, if the mean structure is accurately specified, the mean parameters, e.g., the treatment effect in our case, can be consistently estimated, regardless of whether or not the covariance structure is correctly characterized. Our weighted log-rank test is more robust than GEE, because it is a nonparametric test and does not make any assumptions about the survival outcomes. In addition, a visual representation of the survival trajectory over time naturally accompanies our proposed test statistic, which tracks the number of changes in the severity score over time. On the other hand, GEE enables simultaneous modeling of multiple covariates, while our approach focuses on the comparison between two treatment groups. In the following simulation studies, we make a direct performance comparison between GEE and WTA.
ยง SIMULATION STUDY 1 - TOXICITY
In our first clinical trial simulation study, we demonstrate the functionality of WTA and present its advantages over KM analysis. We establish the strength of our novel method through rigorous power comparison between KM, GEE, and both analytical and simulated approaches to WTA.
The design is a Phase III comparison of toxicity outcomes from chemotherapy between two treatment arms (control and treatment, 1:1 allocation). The variable of interest is CTCAE toxicity: grades range from 1 (mild/no toxicity) to 5 (death from toxicity).<cit.> For example, grades of oral mucositis are (1) asymptomatic/mild, (2) moderate pain or ulcer that does not interfere with oral intake, (3) severe pain interfering with oral intake, (4) life-threatening consequences indicating urgent intervention, and (5) death. For the purposes of WTA, the ordinal range of 1-5 is shifted to 0-4, and thus censoring takes place at grade 4.
The simulation study was implemented in Python 3.7.<cit.> Each simulation is a stochastic process in which randomly generated numbers mirror the fluctuating toxicities experienced by groups of patients undergoing chemotherapy cycles, with daily measurements of treatment toxicity. Each instance of the simulation requires a specified hazard ratio and sample size prior to the stochastic generation of toxicity. Table 2 provides a snapshot of the results for a single simulated clinical trial.
Each patient (represented by an ID number) has a risk of developing treatment toxicity over time. This risk is determined by their treatment group and the numbers of days they have spent in the study. The values within Table 2 were assigned as follows:
* Treatment group: randomly assigned as 0 or 1 with the constraint of having an equal number of patients allocated to each group.
* Duration: the number of days a patient remains within the trial was programmed as a random value within a uniform distribution of 0 to 50 days.
* Toxicity Grade: computed for each patient on a daily basis for the extent of their assigned duration. To model the trajectory of toxicity grade over time, we made the following simplifying assumptions:
* On any given day, patients can rise or fall a single toxicity grade
* Transitions in toxicity grade are random, but a larger hazard ratio suggests a greater chance of exacerbation and lower chance of recovery
* A patient is censored once their pre-assigned duration within the trial has elapsed or they reach maximum toxicity, in this case representing death, whichever occurs first.
The hazard ratio for control:treatment (programmed as 1.0 or higher) models a higher toxicity burden over time in the control group compared to the treatment group. For the control group, the probability of exacerbation is a base probability of 0.10 multiplied by the hazard ratio. Should exacerbation not occur, and the current stage is above the minimum, the probability of recovery is a base probability of 0.05 divided by the hazard ratio. Patients in the treatment group fluctuate based on the base probabilities alone. Once a patient reaches the maximum toxicity or their maximum assigned duration, they are censored.
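A simplified sketch of this data-generating process is shown below; the function and parameter names are ours, and the published simulation code may differ in detail.

import numpy as np

def simulate_patient(group, duration, hazard_ratio, rng,
                     p_up=0.10, p_down=0.05, max_grade=4):
    """Daily toxicity-grade trajectory (grades 0-4) for one patient.

    Control patients (group 0) exacerbate with probability p_up * hazard_ratio
    and recover with probability p_down / hazard_ratio; treatment patients
    (group 1) use the base probabilities. Follow-up ends at the assigned
    duration or at the maximum grade, whichever comes first.
    """
    hr = hazard_ratio if group == 0 else 1.0
    grade, path = 0, [0]
    for _ in range(duration):
        if rng.random() < p_up * hr:
            grade += 1
        elif grade > 0 and rng.random() < p_down / hr:
            grade -= 1
        path.append(grade)
        if grade == max_grade:        # fatal toxicity: censoring occurs here
            break
    return path

rng = np.random.default_rng(0)
trial = [(g, simulate_patient(g, int(rng.uniform(0, 50)), 1.25, rng))
         for g in [0, 1] * 100]       # 200 patients, 1:1 allocation, HR 1.25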
ยง.ยง Kaplan-Meier Estimator: Toxicity Trial
We performed Kaplan-Meier estimation using the Python 3.7 library 'lifelines'.<cit.> This library was used to plot survival probabilities and conduct logrank tests. Results were validated by assessing source code for accuracy and making a direct comparison to results from SPSS v26 (IBM Corp.).<cit.>
To permit comparison to KM, all patients begin the trial at stage 0, which represents grade 1 toxicity. An 'event' was defined as exacerbation to the next stage. Following event occurrence, patients were removed from analysis. Censoring is represented by a wye symbol.
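A minimal lifelines sketch of this analysis is shown below; variable names are illustrative, durations are days to first exacerbation or censoring, and events indicate whether an exacerbation was observed.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_logrank(durations_ctrl, events_ctrl, durations_trt, events_trt):
    """Plot KM curves for both arms and return the log-rank p-value."""
    kmf = KaplanMeierFitter()
    kmf.fit(durations_ctrl, event_observed=events_ctrl, label="Control")
    ax = kmf.plot_survival_function()
    kmf.fit(durations_trt, event_observed=events_trt, label="Treatment")
    kmf.plot_survival_function(ax=ax)
    result = logrank_test(durations_ctrl, durations_trt,
                          event_observed_A=events_ctrl,
                          event_observed_B=events_trt)
    return result.p_value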
A single toxicity comparison trial was conducted with the following parameters: 200 patients (1:1 treatment allocation at 100 patients/arm) and a 1.25:1 hazard ratio for control:treatment. Fig. <ref> depicts the corresponding Kaplan-Meier plot.
The outcome for a logrank test conducted on this trial is P = 0.411; the result is not statistically significant. The Kaplan-Meier method is not sufficiently sensitive to distinguish between treatment arms for this simulated trial; high grades of toxicity may differ between the groups, but standard time-to-event statistics fail to capture the complex trajectory of morbidity.
Next, we analyze and report the identical drug trial using Weighted Trajectory Analysis.
ยง.ยง Weighted Trajectory Analysis: Simulated Trial
WTA is performed as described in Section 2.3 on the identical trial dataset of 200 patients. Censoring is represented by a wye symbol and occurs for each patient once they are no longer followed for toxicity grade. This takes place under two conditions: either the assigned duration for the patient has been reached, or the patient has suffered fatal toxicity. Fig. <ref> provides the plot of WTA.
Note the change in x-axis range, number of patients at risk, and the trajectory of health status: patients are followed for the full course of toxicity and both declines and improvements are mapped. As compared to the KM plot, the treatment arms in this trial are visually distinct across all time points, demonstrating a reduced disease burden for the treatment group, a difference sustained across time. By approximately Day 30, a minor proportion of the original patients within the trial remain, and the delta between groups plateaus. Much like KM plot interpretation, the clinical significance of each trajectory drops after a substantial fraction of patients have been censored.
Using the weighted log-rank test, P = 0.005. WTA is a more powerful and more clinically relevant statistic for this dataset because of its ability to track toxicity severity across all grades. As KM failed to reject the null hypothesis despite clinically meaningful group differences, a Type II error occurred. The improved sensitivity of WTA prevents such an error from taking place.
ยง.ยง 1000-Fold Power Comparison - KM vs. WTA
The trial analyzed in Sections 3.1 and 3.2 was a single instance of randomly generated data; the improved performance of WTA over KM may have occurred by chance. To accurately compare the ability of the tests to distinguish between treatment arms, we ran 1000-fold analyses across increments of sample size from 20 to 300 and hazard ratio from 1.0 to 1.5. For each trial, a p-value was computed using both KM and WTA. The fraction of tests that were significant (at α = 0.05) represents the power of the test (correctly rejecting the null hypothesis that the two groups are the same).
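The power estimate itself is simply the rejection fraction over repeated simulated trials; a small sketch (the trial simulator is passed in as a callable, so the same wrapper serves KM, WTA, or GEE analyses) is:

import numpy as np

def estimated_power(run_trial, n_reps=1000, alpha=0.05, seed=0):
    """Fraction of simulated trials whose p-value falls below alpha.

    run_trial(rng) -> p-value for one simulated trial analysed with a
    given method (e.g. KM log-rank, WTA, or GEE).
    """
    rng = np.random.default_rng(seed)
    pvals = np.array([run_trial(rng) for _ in range(n_reps)])
    return float(np.mean(pvals < alpha))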
Fig. <ref> demonstrates that WTA has a consistently higher power than KM: it permits comparable analyses with a smaller sample size. Given that trial data is randomly generated, the plots are not perfectly smooth, but follow the expected logarithmic shape of power as a function of sample size.
For the simulated clinical trial at a 1.3 hazard ratio, WTA is able to reach 80% power at 180 patients while KM requires well over 300 patients. At a 1.4 hazard ratio, WTA requires about 100 patients for 80% power while KM requires about 300. Across many hazard ratios, WTA requires less than half the sample size to achieve a power equivalent to KM. Note that the power of the KM method for these clinical trials at a 1.5 hazard ratio mirrors the power of WTA at a 1.3 hazard ratio.
In this simulated example, Weighted Trajectory Analysis demonstrated greater sensitivity than Kaplan-Meier to a dataset with ordinal severity scoring. With a greater likelihood of correctly rejecting the null hypothesis, the novel method reduces Type II errors.
ยง.ยง 1000-Fold Power Comparison - KM, WTA (Analytic and Computational), GEE
To demonstrate the differences between the analytical and computational approaches of WTA (and reference these against the standard approaches of KM and GEE), we ran 1000-fold analyses under 9 unique conditions, at sample sizes of 100, 200, and 300 across hazard ratios of 1.0, 1.2, and 1.4. For each trial, a p-value was generated for all four of KM, WTA (analytical approach), WTA (computational approach), and GEE longitudinal analysis using their respective hypothesis tests. The fraction of tests that were significant (at α = 0.05) represents the power of the test (correctly rejecting the null hypothesis that the two groups are the same).
Fig. <ref> demonstrates that the analytical approach of WTA is less sensitive and less powerful than the computational approach. This is expected, given the computational approach's greater computational burden and its accuracy independent of trial size. Importantly, the analytical approach provides conservative results: in this stochastic model, the Type I error hovers around half of the 0.05 standard met by KM, GEE, and the computational approach of WTA. In the second simulation study, the explanation for this discrepancy becomes evident; the analytical approach is based on a normal approximation that becomes more precise with a larger number of distinct failure times and longer follow-up. As the second simulation study meets these criteria, the simulated Type I error correspondingly becomes closer to the 0.05 standard, the asymptotic limit.
GEE longitudinal analysis was found to be consistently weaker than both forms of WTA. This remains true in the second simulation study. The discrepancy likely reflects a trade-off in the parametric assumptions of each test: WTA is nonparametric and does not require any assumptions about survival outcomes. GEE is semi-parametric, which is less robust, but permits simultaneous modeling of multiple covariates as opposed to a sole comparison across treatment groups. In this simulation study, at a hazard ratio of 1.4 the analytical WTA meets the 80% power standard of clinical trial design at 100 patients; GEE requires over 150 patients and KM requires 300. The most accurate method, the computational WTA, requires fewer than 100 patients.
ยง SIMULATION STUDY 2 - SCHIZOPHRENIA
The first simulation study highlighted the functionality of WTA under restrictive yet common trial conditions that permit analysis with KM. However, some trials, or datasets outside of medicine, that are optimally analyzed using WTA may involve more extreme input parameters. Longer durations of patient participation and larger fluctuations within the data would also grant sensitivity to the analytical approach described in Section 2.5. Accordingly, we developed a second simulation study to demonstrate the flexibility of WTA, in this case solely in analytical form, and compared its power to the versatile GEE longitudinal analysis.
The design is a Phase III comparison of antipsychotic efficacy in the management of schizophrenia. Compared to most chronic medical illnesses, psychiatric illness often demonstrates a more tumultuous course, with periods that may be completely asymptomatic interspersed with episodes of debilitating disease burden. Schizophrenia combines this generalization with a progressive disease course and often incomplete recovery following acute decompensations of the primary disorder or substance-induced episodes of psychosis.
As before, there are two treatment arms (control and treatment, 1:1 allocation). The variable of interest is symptom severity stage: stages range from 0 (absence of symptoms) to 6 (life-threatening illness due to severe disease burden and neurocognitive decline). Patients enter the trial at stage 2, which represents a symptom burden below full threshold for a psychotic episode; in our scenario, these patients are recruited for the trial due to a positive symptom screen as opposed to emergency psychiatric admission typical of greater symptom severity. Measurement intervals represent months as opposed to days, which permit larger transitions between stages in a single time interval, though loaded probabilities favour smaller transitions near extreme ends of the severity scale. Patients are enrolled into the trial for a randomized duration chosen from a uniform distribution between 36 and 84 months; they are censored when they reach the assigned duration or sooner if they reach stage 6. Mechanics of the study otherwise mirror Simulation Study 1.
ยง.ยง 1000-Fold Power Comparison - WTA vs. GEE
Once again, we ran 1000-fold analyses under 9 unique conditions, at sample sizes of 100, 200, and 300 across hazard ratios of 1.0, 1.2, and 1.4. For each trial, a p-value was generated for both WTA (analytical approach) and GEE longitudinal analysis using their respective hypothesis tests. The fraction of tests that were significant (at α = 0.05) represents the power of the test (correctly rejecting the null hypothesis that the two groups are the same).
Fig. <ref> demonstrates that under a vastly different stochastic model compared to the first simulation study, WTA once again outperforms GEE. The Type I error of WTA has shifted to an average of 0.037, closer to 0.05 given a trial with increased follow-up and failure times, which better satisfies the normal approximation underlying the method. This longer trial with more complex fluctuations in disease severity exhibits a higher power at identical hazard ratios and sample size compared to the previous study.
ยง ILLUSTRATIVE REAL-WORLD EXAMPLE
ยง.ยง Immune Checkpoint Inhibitor Therapy for Melanoma
Immune checkpoint inhibitors (ICIs) have transformed the treatment landscape for melanoma.<cit.> Inhibitors targeting cytotoxic T lymphocyte antigen-4 (CTLA-4) and programmed death-1 (PD-1) produce a response in a large fraction of cancer patients. These responses are often durable and some are even curative. The use of Anti-CTLA-4 and Anti-PD-1 in combination has demonstrated the highest rate of durable response among melanoma treatment protocols. In prescribing a treatment plan, the promising response rates must be balanced with concerns about toxicity outcomes. Toxic effects associated with ICIs are immune-related in nature, may impact any organ, and remain a major challenge in clinical care.
Published data comparing therapy protocols suggests that the use of combination CTLA-4/PD-1 therapy results in significantly higher immune-related toxicity when compared to monotherapy regimens.<cit.> These results may limit the use of combination therapy for patients with melanoma and remain a barrier to the development of new combinations.
However, when treatment outcomes are compared over a longer time horizon, the discrepancy in immune-related toxicities seen between patients treated with combination versus monotherapy disappears. Those patients treated with combination therapy do experience greater toxicity during active treatment, but because the large majority of toxicities are reversible, the health status of patients treated with combination therapy improves with time. Longitudinally, patients treated with combination immunotherapy receive fewer treatment infusions; however, the treatment response rate is higher and long-term survival is comparable.<cit.> Put simply, the combination of CTLA-4- and PD-1-directed immunotherapy has greater efficacy despite a significantly shorter duration of therapy, and despite an initial increase in immune-related toxicities, the health status of patients who respond to therapy is excellent. The key limitation of existing statistical methods used to evaluate toxicity outcomes is the failure to capture improvement and accurately map changes through time.
The hypothesis that long-term health status is comparable between patients treated with combination versus monotherapy ICIs can be tested using Weighted Trajectory Analysis. Rather than using percent incidence to inform treatment decisions (see Figure <ref>), WTA will enable clinicians to assess the time-course of toxicity. The more detailed and sensitive mapping of toxicity outcomes will enable clinicians to more accurately translate patient data into standards for treatment.
In this example, retrospective toxicity data was used to compare monotherapy (Anti-PD-1) with combination therapy (Anti-PD-1 + Anti-CTLA-4). Increases in alanine aminotransferase (ALT) levels indicate transient, immune-related hepatitis, and were recorded for 195 melanoma patients on either protocol over 180 days. The increase in ALT from baseline was graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events, version 5.0.<cit.> The baseline ALT scores are assigned a toxicity of 0 by definition. This enables comparison between KM and WTA.
Kaplan-Meier Estimator: Anti-PD-1 vs. Combination Therapy
To perform KM, the occurrence of any nonzero toxicity score was considered an event. The KM results in Figure <ref> demonstrate that patients on combination therapy had a greater risk of experiencing nonzero toxicity over 100 days compared to the monotherapy group. This difference between groups was statistically significant with a p-value < 0.001.
Weighted Trajectory Analysis: Anti-PD-1 vs. Combination Therapy
The WTA results are depicted in Figure <ref>. The Anti-PD-1 group has a steady accumulation of toxicity related events while the Combination group features a faster decline that plateaus at approximately 60 days. However, the trajectory of the Combination group recovers, and by 160 days, the two trajectories nearly converge. As immune-related toxicities are often reversible, the ability to model both exacerbation and recovery provides a more accurate picture of clinical outcomes.
The weighted logrank test had a p-value of 0.936, which is not statistically significant. The ability of recovery events to be captured within the weighted logrank hypothesis test demonstrates that differences in toxicity outcomes between these groups are misrepresented by prevalence data and the use of time-to-event curves like Kaplan-Meier. The absence of significant differences through more robust analysis suggests incidence data provides an incomplete picture of toxicity outcomes, leading to a false rejection of the null hypothesis. In the simulated example examining the development of toxicity to chemotherapy, WTA avoids a Type II error. In this real-world example, the use of WTA avoids a Type I error.
ยง.ยง ROSE/TRIO-012 Trial
Treatment using agents that disrupt tumor angiogenesis (the process of generating new blood vessels) have shown clinical benefits in colorectal cancer, renal cell carcinoma, and several gynecological cancers. The ROSE/TRIO-012 trial sought to evaluate ramucirumab, an anti-angiogenic drug, for the treatment of metastatic breast cancer.<cit.> Investigators compared ramucirumab to a placebo, when added to standard docetaxel chemotherapy.
Many phase III trials within oncology are evaluated using Kaplan-Meier estimates and additional metrics based on the Response Evaluation Criteria in Solid Tumors (RECIST).<cit.> In ROSE/TRIO-012, KM was performed to determine progression free survival, in which disease progression and death are considered events, and overall survival, where death alone is an event. The RECIST framework (Table <ref>) was used to determine overall response metrics. These metrics reflect patients whose cancer improved through the course of the trial (objective response rate, ORR) and patients that did not experience progressive disease or death (disease control rate, DCR).
The ORR and DCR are defined as follows:
ORR = CR + PR
DCR = CR + PR + SD
Together, these several endpoints provide a detailed picture of patient outcomes since randomization. However, the individual metrics take time to interpret and can sometimes provide conflicting signals regarding trial success. ROSE/TRIO-012 provides an example: although investigator-assessed PFS (p = 0.077) was not significant at the 0.05 level, endpoints including ORR and DCR were significantly higher in the ramucirumab group.
The final verdict on the trial was that it had failed to meaningfully improve important clinical outcomes, a decision based solely on the absence of significance in investigator-assessed PFS, the trial's primary endpoint. Had trial success been defined as a composite of several endpoints, the investigators may have concluded that ramucirumab conferred a significant benefit to the patients within the study. Currently, ramucirumab is not approved for use in the treatment of metastatic breast cancer.
The ability to combine the RECIST framework with mortality in a single plot would allow oncologists to rapidly interpret the totality of results of a clinical trial. A judgment on trial success can remain tied to the significance of a primary objective, but this objective should capture a wide array of important patient outcomes. In this example, the ROSE/TRIO-012 trial results from Mackey et al.'s 2014 paper are compared to Weighted Trajectory Analysis on the original data.
Kaplan-Meier: Ramucirumab vs. Placebo + Docetaxel
Figures 2A and 2C of Mackey et al.'s 2014 paper are provided below. Respectively, they represent progression-free survival (the primary endpoint) and overall survival, both using standard Kaplan-Meier techniques. Upon inspection, progression free survival appears slightly higher within the ramucirumab group. The logrank p-value of 0.077 did not indicate statistical significance. As PFS was the primary endpoint, the intervention was deemed unsuccessful. Overall survival outcomes were no different between groups (p = 0.915).
RECIST Endpoints: Ramucirumab vs. Placebo + Docetaxel
Conflicting signals about the efficacy of ramucirumab arise when analyzing secondary endpoints. ORR and DCR were significantly higher in the ramucirumab arm (44.7% vs. 37.9%, p = 0.027; 86.4% vs. 81.3%, p = 0.022).
ORR and DCR provide no time-to-event information. The goal of combining RECIST metrics with KM is to generate a complete picture of patient outcomes. However, by omitting information on time and severity, respectively, the distinct methods may disagree on intervention efficacy. The whole is less than the sum of its parts.
The existing solution to this apparent conflict is a decision made by the investigators prior to the study: select a single metric as the primary objective to determine success. This both focuses and simplifies any conversation about study outcomes. Had this primary objective been ORR, the conclusion of the study would have supported the use of ramucirumab for these patients.
Weighted Trajectory Analysis: Ramucirumab vs. Placebo + Docetaxel
We used Weighted Trajectory Analysis to combine the RECIST framework with mortality to depict comprehensive time-to-event outcomes. To perform the method, we assign the following ordinal severity scoring framework:
The starting point of each patient at the time of randomization is stable disease (SD), a score of 2. At the ends of the ordinal scale are complete response (CR, the best outcome) and death (the worst outcome). Patients are censored upon withdrawal, loss to follow-up, or directly following death.
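For illustration, one plausible encoding of this framework is shown below; the exact numeric assignments used in the analysis are those of the referenced table, and the values here simply respect the ordering just described.

# Illustrative ordinal mapping consistent with the description above
# (SD is the starting score of 2; CR and death sit at the ends of the scale).
RECIST_SCORE = {
    "CR": 0,      # complete response (best outcome)
    "PR": 1,      # partial response
    "SD": 2,      # stable disease (score at randomization)
    "PD": 3,      # progressive disease
    "Death": 4,   # worst outcome; patient censored directly afterwards
}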
Using the original ROSE/TRIO-012 dataset and Table <ref>, we generate Figure <ref>. Censoring is indicated using vertical tick marks.
This plot provides a comprehensive view of all patient outcomes for the full study duration. A few months into the trial we see the peak in weighted health status for both groups. This occurs at 68 days for the placebo group and 76 days for the ramucirumab group. At this phase, some patients have experienced partial or complete response. Following this peak is a gradual descent that represents progressively increasing morbidity and death across both groups. The trajectories are strikingly similar, with the ramucirumab group experiencing slightly better outcomes throughout the study. The difference is not statistically significant (p = 0.587). This corroborates the current regulatory standard that ramucirumab should not be approved for the treatment of metastatic breast cancer.
With the WTA plot alone, investigators can easily interpret the time course of disease response. Patients likely to respond or recover generally do so following the first two chemotherapy cycles. After three months, the prognosis is poor: both treatment arms are characterized by progressive disease and death.
ยง DISCUSSION
WTA was created to (a) evaluate Phase III clinical trials that assess outcomes defined by various ordinal grades (or stages) of severity; (b) permit continued analysis of participants following changes in the variable of interest; and (c) demonstrate the ability of an intervention both to prevent the exacerbation of outcomes and to improve recovery, together with the time course of these effects. Its development was inspired by a pressure injury study (pressure injury being a disease process characterized by several stages of severity) for which Kaplan-Meier estimates would fail to capture the complete trajectory. Despite its limitations, KM provides crucial advantages such as patient censoring, rapid interpretation of a survival plot, and a simple hypothesis test. To this end, we sought to create a statistical method that built on the foundations of Kaplan-Meier analysis, but would overcome the inherent limitations of the technique.
We built the WTA toolkit based on expansion and extension of the Kaplan-Meier methodology. We adapted the KM to support analysis of ordinal variables by redefining events as a change in disease score rather than assigning "1" and omitting the patient from further analysis. We adapted the KM to permit fluctuating outcomes (worsening and improvement of the ordinal outcome) by plotting a novel weighted health status as opposed to probability. We retained the ability to censor patients at the time of non-informative status. These changes warranted a novel significance test, for which we developed a modification to Peto et al.'s logrank test.<cit.> This analytical approach is rather conservative in its Type I error rates for smaller trials, but the rate approaches 0.05 in the limit of massive trials with many distinct failure times. Thus, we developed a computational approach that is more resource intensive but remains precise and accurate independent of trial size.
In order to explore and demonstrate the utility of WTA, we applied it to two randomized clinical trial simulation studies. The first clinical setting was chemotherapy toxicity, a trial in which the variable of interest ranged from 1-5 (shifted to 0-4), stage transitions were singular and started at 0, and up to 50 discrete time points were measured for each patient. The second setting was schizophrenia disease course, a more complex trial in which the variable of interest ranged from 0-6, stage transitions were often multiple and started at 2, and up to 84 discrete time points were measured for each patient. We performed sensitivity and power comparisons across both sample size and hazard ratio. Through 1000-fold validation, WTA showed greater sensitivity and power, often requiring fewer than half the patients for comparable power to KM. WTA also showed increased power compared to GEE, likely owing to its more robust nonparametric methodology compared to the semi-parametric GEE, at the cost of GEE's ability to model covariate effects. This demonstrates that designing a Phase III clinical trial using our novel method as the primary endpoint can substantially lower cost, duration, and the risk of Type II errors.
We also applied WTA to real-world clinical trial data. The first application was the assessment of time-dependent toxicity grades in melanoma patients receiving one of two immunotherapy treatment regimens. Although toxicities are generally reported in oncology trials as the worst grade experienced by each individual patient, this fails to capture those toxicities that resolve with treatment modification or targeted intervention. As such, the published literature suggested prohibitive toxicity of the most effective therapy, while practitioners' experience was that high-grade toxicities were often transient and treatable. The WTA we conducted confirmed that treatment-related toxicities of combination therapy resolved to rates close to those seen with less effective monotherapy regimens. The second application was the re-evaluation of a published Phase III registration trial of an antiangiogenic drug for the treatment of metastatic breast cancer. Although this study failed to demonstrate statistically significant improvement in the pre-defined primary endpoint, a number of secondary endpoints suggested the possibility of meaningful clinical benefit from the antiangiogenic therapy. By using an ordinal scale to describe the spectrum of clinical outcomes after therapy, spanning complete disease response, partial response, disease stability, disease progression, and death, WTA demonstrated that although patients derived a modest benefit from antiangiogenic therapy when compared to control therapy, the difference was neither clinically nor statistically significant. The resulting graph captures the full clinical course of patients in a single figure. This result underscores that WTA did not provide an inappropriately oversensitive analytic tool, and it justified the regulatory stance that the intervention did not warrant approval to market. Overall, the novel method affords greater specificity and reduces the likelihood of Type I errors.
In aggregate, we feel the strengths of Weighted Trajectory Analysis are its ability to capture detailed trajectory outcomes in a simple summary plot, its greater power, and its ability to map both exacerbation and improvement. These strengths are built upon key advantages that make KM a favored tool for clinical trial evaluation: namely, the ability to censor patients and to compare treatment arms using a simple hypothesis test. WTA-dependent trial design can substantially reduce sample size requirements, raising the practicality and lowering the cost of Phase III clinical trials. However, we acknowledge several limitations of this method. WTA does not facilitate Cox regression analysis or generate the equivalent of a hazard ratio. WTA is a new technique and does not yet have a clinical or regulatory track record. WTA relies on the assumption of non-informative censoring, and investigation into alternative approaches to censoring, such as inverse probability of censoring weighting (IPCW), remains important future work.<cit.> Lastly, by applying a direct numerical weight, WTA requires the assumption that a change between adjacent ordinal severities is equally important independent of the levels transitioned. This assumption is not always medically appropriate: taking the example of pressure injuries, a transition from Stage 0 to 1 may necessitate a topical ointment, whereas a transition from Stage 3 to 4 warrants surgical repair. Thus, the method relies on a simplifying assumption, and future research will evaluate non-linear scoring systems. For multi-stage systems, this method remains more precise than collapsing scores to binary systems in order to use KM. Alternative statistical methods such as multistate modelling are recommended to elicit transition intensities of each unique level as necessary. To encourage the evaluation and improvement of WTA, software is in development to permit biostatisticians to further test, apply, and potentially expand the utility of WTA.
In summary, we report the development and validation of a flexible new analytic tool for analysis of clinical datasets that permits high sensitivity assessment of ordinal time dependent outcomes. We see multiple clinical applications, and have successfully applied the new tool in the analysis of both simulated and real-world studies with complex illness trajectories. Future direction with Weighted Trajectory Analysis includes the addition of confidence intervals to group trajectories, non-linear weights to mirror disease burden, exploration of alternative censoring assumptions, and a regression method analogous to the Cox model.
ยง ACKNOWLEDGMENTS
The authors thank Dr. David Vock (University of Minnesota), Drs. Edward Mascha and Chase Donaldson (Cleveland Clinic), and Dr. Sunita Ghosh (University of Alberta) for helpful early discussions.
The authors also thank Bristol Myers Squibb for access to their melanoma clinical trial dataset and the TRIO-012/ROSE study team along with the TRIO Science Committee for access to their database.
ยง.ยง Conflict of interest
None declared.
ยง.ยง Data Availability Statement
The data that support the findings of this research are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
kaplanmeier
Kaplan EL, Meier P. Nonparametric Estimation from Incomplete Observations. J Am Stat Assoc. 1958;53(282):457-481. doi:10.1080/01621459.1958.10501452.
peto1
Peto R, Pike MC, Armitage P, et al. Design and analysis of randomized clinical trials requiring prolonged observation of each patient. I. Introduction and design. Br. J. Cancer. 1976;34(6):585-612. doi:10.1038/bjc.1976.220.
peto2
Peto R, Pike MC, Armitage P, et al. Design and analysis of randomized clinical trials requiring prolonged observation of each patient. II. Analysis and examples. Br. J. Cancer. 1977;35(1):1-39. doi:10.1038/bjc.1977.1.
ecog
Oken M, Creech R, Tormey D, et al. Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am J Clin Oncol. 1982;5:649-655.
nyha
American Heart Association. Classes of Heart Failure. Published June 2, 2022. https://www.heart.org/en/health-topics/heart-failure/what-is-heart-failure/classes-of-heart-failure. Accessed September 29, 2022.
ctcae
U.S. Department of Health and Human Services. Common Terminology Criteria
for Adverse Events (CTCAE) Version 5.0. Published Nov 27, 2017. Accessed March 23, 2020.
python
Python Software Foundation. Python Language Reference, version 3.7. http://www.python.org. Accessed March 16, 2020.
gee
Liang K, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73(1):13-22. doi:10.1093/biomet/73.1.13.
lifelines
Davidson-Pilon C. lifelines: survival analysis in Python. J. Open Source Softw. 2019;4(40);1317. doi: 10.21105/joss.01317.
spss
IBM Corp. Released 2017. IBM SPSS Statistics for Windows, Version 26.0. Armonk, NY: IBM Corp.
jamaonc
Wang DY, Salem JE, Cohen JV, at al. Fatal toxic effects associated with immune checkpoint inhibitors: a systematic review and meta-analysis. JAMA Oncol. 2018;4(12):1721-8. doi:10.1001/jamaoncol.2018.3923.
larkin2015
Larkin J, Chiarion-Sileni V, Gonzalez R, et al. Combined nivolumab and ipilimumab or monotherapy in untreated melanoma. N. Engl. J. Med. 2015;373(1):23-34. doi:10.1056/NEJMoa1504030.
larkin2019
Larkin J, Chiarion-Sileni V, Gonzalez R, et al. Five-year survival with combined nivolumab and ipilimumab in advanced melanoma. N. Engl. J. Med. 2019;381(16):1535-1546. doi:10.1056/NEJMoa1910836.
trio
Mackey JR, Ramos-Vazquez M, Lipatov O, et al. Primary results of ROSE/TRIO-12, a randomized
placebo-controlled phase III trial evaluating the addition of ramucirumab to first-line docetaxel chemotherapy in metastatic breast cancer. J Clin Oncol. 2015;33(2):141-148. doi:10.1200/JCO.2014.57.1513.
recist
Schwartz LH, Litière S, de Vries E, et al. RECIST 1.1-Update and clarification: From the RECIST committee. Eur J Cancer. 2016;62:132-137. doi:10.1016/j.ejca.2016.03.081.
ipcw
Robins JM, Rotnitzky A, Zhao LP. Analysis of Semiparametric Regression Models for Repeated Outcomes in the Presence of Missing Data. Journal of the American Statistical Association. 1995;90(429):106-121. doi:10.2307/2291134.
ยง TABLES
ยง FIGURES
|
http://arxiv.org/abs/2306.03595v1
|
20230606113512
|
Transversals via regularity
|
[
"Yangyang Cheng",
"Katherine Staden"
] |
math.CO
|
[
"math.CO"
] |
Transversals via regularity
Yangyang ChengMathematical Institute,
University of Oxford, Oxford, UK. Email: [email protected] Cheng was supported by a PhD studentship of ERC Advanced Grant 883810. Katherine StadenSchool of Mathematics and Statistics, The Open University, Walton Hall, Milton Keynes, UK. Email: [email protected] Staden was supported by EPSRC Fellowship EP/V025953/1.
July 31, 2023
Given graphs G_1,โฆ,G_s all on the same vertex set and a graph H
with e(H) ≤ s,
a copy of H is transversal or rainbow if it contains
at most one edge from each G_c.
We study the case when H is spanning and explore how the regularity
blow-up method, that has been so successful in the uncoloured setting,
can be used to find transversals.
We provide the analogues of the tools required
to apply this method in the transversal setting. Our main result is a blow-up lemma for
transversals that applies to separable bounded degree graphs H.
Our proofs use weak regularity in the 3-uniform hypergraph whose edges are those xyc
where xy is an edge in the graph G_c.
We apply our lemma to give a large class of spanning 3-uniform linear hypergraphs H such that
any sufficiently large uniformly dense n-vertex 3-uniform hypergraph with minimum vertex degree Ω(n^2) contains H as a subhypergraph.
This extends work of Lenz, Mubayi and Mycroft.
ยง INTRODUCTION
ยง.ยง Transversal embedding
The problem of deciding whether an n-vertex graph G contains a given subgraph H is a central topic in graph theory.
Since this problem is NP-complete, much of the research on this topic has focused on finding sufficient
conditions on G that guarantee the presence of H.
Given a graph parameter ฯ, we seek the best possible bound ฯ_H,n such that if ฯ(G) > ฯ_H,n, then G contains a
copy of H, whereas there are graphs G with ฯ(G)โคฯ_H,n which are H-free.
In this paper, we investigate the generalisation of this problem to graph collections, also known as a multilayer graph.
Here, we are given graphs G_1,…,G_s on the same vertex set, where s ≥ e(H),
and we seek a transversal copy of H, which is a copy of H containing at most one
edge from each of the graphs G_1,โฆ,G_s.
We often think of each G_c having the colour c, so a transversal copy of H is
also called a rainbow copy.
Again, we seek a best possible condition on ฯ which guarantees
the existence of H. That is, we seek the minimum ฯ_H,n^s such that if G_1,โฆ,G_s are any graphs on the same vertex set of size n and ฯ(G_c) > ฯ_H,n^s for all 1 โค c โค s,
then there is a transversal copy of H.
Observe that we recover the original `uncoloured' problem when G_1=โฆ=G_s,
so the transversal embedding problem is indeed a generalisation.
Ideally, we take s=e(H) graphs in the collection and are most interested in determining ฯ_H,n^ col := ฯ_H,n^e(H).
There has been a lot of recent progress in this area. The central question about transversal embeddings, posed by Joos and Kimย <cit.>, is whether the addition of colours changes the answer to the problem. That is, given some graph parameter ฯ, when do we have ฯ_H,n = ฯ_H,n^ col?
When equality does hold, we say that the embedding problem is colour-blind.
We already have a non-example of colour-blindness in the case of the triangle and the size parameter. Mantel's theorem from 1907 states that e(G) > ⌊n^2/4⌋ guarantees that an n-vertex graph G contains a triangle,
whereas the complete balanced bipartite graph shows that this is best possible. So e_K_3,n = ⌊n^2/4⌋.
However, Aharoni, DeVos, de la Maza, Montejano and Šámal <cit.> proved that whenever graphs G_1,G_2,G_3 on the same set of n vertices satisfy e(G_c) > an^2 for all c=1,2,3, where a := (26-2√7)/81 ≈ 0.2557 > 1/4, then there is a rainbow triangle,
while there is an example which shows a cannot be decreased.
That is, e_K_3,n^ col = ⌊an^2⌋.
The transversal embedding problem for larger cliques is still open.
When one is interested in embedding a spanning graph, it makes sense to consider minimum degree
conditions rather than size conditions, as, for example, a graph could be very dense but contain an isolated vertex, and therefore not contain a copy of any given spanning graph with minimum degree at least 1. Let M_n and C_n be the perfect matching and Hamilton cycle on n vertices respectively.
Joos and Kim <cit.> proved that _C_n,n^ col=_C_n,n and _M_n,n^ col=_M_n,n (which both have the common value ⌊n/2⌋-1);
that is, Hamilton cycle embedding and matching embedding are both colour-blind with respect to minimum degree.
An easier question is to ask for approximately best possible conditions. For minimum degree, this means
we would like to find _H and _H^ col, where
* _H is the minimum such that for all >0, any sufficiently large n-vertex graph G with (G) โฅ (+) n contains a copy of H; and
* _H^ col is the minimum such that for all >0, any collection G_1,โฆ,G_e(H) of sufficiently large n-vertex graphs with (G_c) โฅ (+) n for all 1 โค c โค e(H) contains a transversal copy of H.
So the results ofย <cit.> imply that ^ col_C_n=^ col_M_n=1/2; this approximate
version was earlier proved by Cheng, Wang and Zhaoย <cit.>.
If _H=_H^ col, then we say that H is approximately colour-blind.
Montgomery, Müyesser and Pehova <cit.>
determined which F-factors are approximately colour-blind for any given small graph F,
and showed that spanning trees with maximum degree o(n/log n) are approximately colour-blind.
They observed that some spanning graphs are very far from being colour-blind: taking H to be the disjoint union of copies of K_2,3โช C_4, it holds that _H โค4/9 by a result of Kรผhn and Osthusย <cit.>, whereas _H^ colโฅ1/2.
Gupta, Hamann, Müyesser, Parczyk and Sgueglia <cit.> showed that powers of Hamilton cycles are approximately colour-blind.
They also showed, improving results of Cheng, Han, Wang, Wang and Yang <cit.>, that Hamilton ℓ-cycles in k-uniform hypergraphs are approximately `d-colour-blind' for some range of ℓ, k and d,
which means with respect to minimum d-degree, which we do not define here.
The aim ofย <cit.> was to provide a fairly general approach for transversal embedding problems that used the corresponding uncoloured embedding result as a black box.
Roughly speaking, their approach is designed to work when the graph H to be embedded is made up of small almost-disconnected blocks: they prove results for F-factors, which are made up of vertex-disjoint copies of F, and for trees which are made up of an ordered sequence of small subtrees each sharing one vertex with a previous tree.
Similarly,ย <cit.> gave a widely applicable sufficient condition for approximate colour-blindness, which applies to graphs obtained by `cyclically gluing' copies of a smaller graph (e.g.ย a Hamilton cycle is obtained by `cyclically gluing' copies of an edge). Nevertheless, the H to which these two results apply are fairly specific, and in this paper we build onย <cit.> to develop a method for H which are `more connected' than the above.
Very recently, during the preparation of this paper, Chakraborti, Im, Kim and Liuย <cit.> made important progress in this direction by proving a `transversal bandwidth theorem'.
A graph has bandwidth b if there is an ordering v_1,…,v_n of its vertices such that |i-j| ≤ b whenever v_iv_j is an edge.
Their result states that if a bounded degree graph H of sublinear bandwidth has chromatic number k, then ^ col_H ≤ 1/k.
This extends several of the above results, and is asymptotically best possible for many graphs H, as well as generalising the original bandwidth theorem of Böttcher, Schacht and Taraz <cit.> for single graphs.
The proof uses similar ideas to the proof of the main result of our paper, a transversal blow-up lemma, which we introduce in the next section.
ยง.ยง The regularity-blow-up method for transversals
The so-called `regularity-blow-up method' has been employed to prove many results concerning the embedding of a given spanning subgraph H in a large graph G. Such proofs typically run along the following lines. Apply Szemerédi's regularity lemma <cit.> to obtain a constant-size `reduced graph' R of G, which approximates the structure of G: vertices of R correspond to disjoint vertex clusters in G, and edges of R correspond to regular pairs of vertex clusters, which informally means that the edges between them are randomlike and therefore easy to embed into. There may be a small number of `exceptional' vertices which are not part of this structure. Next, find a suitable subgraph H' of R which is simpler than H, and often consists of many small connected components (whereas H could be connected). Next, embed small pieces of H which connect the components of H'. At this stage, some vertices may need to be moved from the structure into the exceptional set to make clusters balanced and to ensure regular pairs have sufficiently large minimum degree (they are superregular). Then incorporate the exceptional vertices: for each such vertex v find a component of H' where v has many suitable neighbours and a few vertices inside the H'-structure which can be used along with v to embed a small part of H. Finally apply the blow-up lemma of Komlós, Sárközy and Szemerédi <cit.> to embed most of H into H' allowing for the restrictions imposed by the initial embedding. The blow-up lemma states that, for the purposes of embedding a possibly spanning bounded degree graph, a regular pair behaves the same as a complete bipartite graph.
The primary goal of this paper is to provide the tools needed to apply the regularity-blow-up method
to obtain transversal embeddings, following the same steps as above.
The basic idea is that one can think of a graph collection G=(G_c: c โ) with common vertex set V as the 3-uniform hypergraph G^(3) with vertex and edge sets
V(G^(3)) = V โช and E(G^(3)) = {xyc: xy โ G_c}.
This natural idea has been noticed and used before, for example inย <cit.>, where it was noted that a result of Aharoni, Georgakoupoulos and Sprรผsselย <cit.> on matchings in k-partite k-uniform hypergraphs can be used to find a transversal matching in a bipartite graph collection; and in <cit.>, where the weak regularity lemma was applied to G^(3) to find an almost spanning clique factor.
In this paper, we take the idea further by using this perspective to provide the tools one would require to use the regularity-blow-up method for graph collections.
To this end, we
* define regularity, superregularity, reduced graphs and provide a regularity lemma for graph collections;
* state some of the standard tools for regularity arguments, such as a `slicing lemma' which states that regularity is inherited in subpairs of regular pairs, degree inheritance in the reduced graph, and various embedding lemmas with target sets, candidate sets and prescribed colours;
* prove a blow-up lemma for `separable graphs', which we define shortly.
The first item is mainly a convenient reformulation of weak regularity for 3-uniform hypergraphs
and does not require any new ideas; nonetheless the statements are useful to have.
The third (which builds upon the tools developed in the second item) is our main contribution, and the proof uses the original blow-up lemma combined with colour absorption ideas fromย <cit.>.
We envisage that our blow-up lemma will be useful in solving various transversal embedding problems, making it a viable alternative to the commonly used absorption technique. Furthermore, there are cases where the blow-up lemma is a more appropriate tool. For instance, when attempting to embed a given transversal 2-factor (a spanning 2-regular subgraph) or, more generally, a graph with low bandwidth, the absorption technique is not as effective. In such situations, the blow-up lemma can still be employed for successful embedding. For example, our blow-up lemma can be used to embed powers of Hamilton cycles, providing an alternative proof of a result inย <cit.>.
Abbasiย <cit.>, proving a conjecture of El-Zaharย <cit.>, used the original blow-up lemma to obtain the best possible minimum degree bound in a large graph containing a given 2-factor; our blow-up lemma could be useful in proving a transversal version of this result.
Another notable advantage of the blow-up lemma is its ability to aid in characterising extremal constructions. This is helpful for determining exact bounds in these types of problems, by utilising the stability method commonly employed in non-transversal graph embeddings. This natural problem was recently raised in <cit.>.
We discuss applications to transversal embedding problems further in Sectionย <ref>.
The secondary goal of the paper is to apply our transversal blow-up lemma to embeddings in uniformly dense 3-uniform hypergraphs.
We introduce these `randomlike' hypergraphs, and our results in this direction, in Sectionย <ref>.
Indeed, our blow-up lemma can be formulated as a blow-up lemma for 3-uniform hypergraphs. Consequently, there is also potential to employ this lemma in generalising results such as those of Kรผhn and Osthusย <cit.> who studied the problem of embedding loose Hamiltonian cycles under vertex degree conditions.
All our results apply to `separable graphs' with bounded maximum degree.
An n-vertex graph H is μ-separable if there is X ⊆ V(H) of size at most μn such that H-X consists of components of size at most μn.
Separable graphs (with suitably small μ) include F-factors for fixed F, trees, 2-regular graphs, powers of a Hamilton cycle, and graphs of small bandwidth.
ยง.ยง Transversals in uniformly dense graph collections
Our first main result concerns transversal embeddings of spanning graphs inside a quasirandom graph collection. A `quasirandom' condition is one possessed by a random graph of a similar density. Our condition is that for every linear subset of vertices and linear subset of colours, the total number of edges in these colours spanned by the subset is large.
A graph collection satisfying this condition is said to be uniformly dense.
We also require that the number of edges of each colour and the total degree of each vertex are Ω(n^2), where the graph collection has a common vertex set of size n. This condition cannot be completely removed, since if a colour is not present or if a vertex is isolated, we certainly cannot find a transversal copy of a spanning graph H.
A graph collection satisfying both conditions is super uniformly dense, in analogy with regular and superregular.
We are interested in the following question: for which (spanning) graphs H must any super uniformly dense graph collection contain a transversal copy of H?
Given d,η>0, we say that a graph collection G=(G_c: c ∈) with vertex set V of size n is (d,η)-dense if
for all A,B ⊆ V and ' ⊆, we have
∑_c ∈' e(G_c[A,B]) ≥ d|'||A||B| - η n^3,
where e(G_c[A,B]) denotes the number of pairs (x,y) ∈ A × B
for which {x,y} is an edge of G_c.
We informally refer to a graph collection G on n vertices that is (d,ฮท)-dense for parameters 0<1/n โชฮทโช d โค 1 as uniformly dense, and if additionally there is โซฮท for which โ_c โd_G_c(v) โฅ|| n for all v โ V and e(G_c) โฅ n^2 for all c โ, then G is super uniformly dense.
Note that the condition is vacuous unless A,B and ' are of linear size (in the number n of vertices).
We prove that a transversal copy of a separable graph can be found in a super uniformly dense graph collection.
For all ,d,,>0, there are ฮท,ฮผ>0 and n_0 โN such that the following holds for all integers n โฅ n_0.
Let G=(G_c: c โ) be a (d,ฮท)-dense graph collection on a common vertex set V of size n, where || โฅ n, and suppose that
for all v โ V we have โ_c โd_G_c(v) โฅ||n,
and for all c โ we have e(G_c) โฅ n^2.
Then G contains a transversal copy of any given ฮผ-separable graph H on n vertices with (H) โค and e(H) = ||.
This result is a consequence of a more general transversal blow-up lemma,
which we state in the next section.
First, we discuss possible extensions to other colour patterns.
One can ask whether a graph collection contains a copy of H with a given colour pattern, extending the monochromatic case (a copy in a single graph G_c) and rainbow case (a copy with one edge from each G_c).
For example, the following was shown inย <cit.>. Not only does the same minimum degree (2/3+o(1))3n which guarantees a triangle factor in a large graph with a vertex set of size 3n in fact guarantee a transversal copy when ||=3n,
it also guarantees a triangle factor where the c-th triangle lies inside G_c (has colour c), when ||=n.
Given d,η>0, a (single) graph with a vertex set V of size n is (d,η)-dense if for all A,B ⊆ V, we have e(G[A,B]) ≥ d|A||B| - η n^2.
A collection of (d,ฮท)-dense graphs on the same vertex set is a (d,ฮท)-dense graph collection, but the converse does not hold.
It is easy to see that, for any d>0, as long as η and 1/n are sufficiently small, such a G contains Ω(n^3) triangles: the condition applied to A=B=V implies that there are Ω(n) vertices v with d_G(v) ≥ dn/2; each one has Ω(n^2) edges in its neighbourhood.
However, there are uniformly dense graph collections with d fairly large which do not contain any monochromatic triangles, as the following example shows.
(The example is essentially equivalent to one in the setting of uniformly dense hypergraphs, introduced in Section <ref>, of Reiher, Rödl and Schacht <cit.>, which itself has its roots in work of Erdős and Hajnal <cit.>.)
Form an auxiliary oriented 2-graph J by letting V, be disjoint, where, say, |V|=||=n is large, and first adding edges between every pair in V, and every pair in (V,).
Independently for each edge, choose an orientation uniformly at random. Add the pair xy to G_c with x,y โ V and c โ
precisely when xyc is a cyclic triangle.
Then, with probability tending to 1 as n โโ, G is (1/4,o(1))-dense.
However, for every triple x,y,z of vertices in V and every colour c โ, there are at most two edges in G_c[{x,y,z}].
This raises the question of which colour patterns of which graphs H one can expect to find in a (super) uniformly dense graph collection.
If d is sufficiently large, then any bounded degree H with any colour pattern can be found; for those H which are not present when d is an arbitrary positive constant, how large must d be to guarantee such a copy of H?
We will explore a generalisation of this question in Sectionย <ref>.
ยง.ยง A transversal blow-up lemma
In this section, we state a simplified version of our blow-up lemma for bipartite graphs.
We defer the complete statement to Theoremย <ref> at the end of this section.
Our blow-up lemma has the advantages that its proof, sketched at the beginning of Sectionย <ref>, is conceptually straightforward,
following from the original blow-up lemma and a colour absorption tool introduced inย <cit.>;
and the lemma should be powerful enough for all of the transversal embedding applications we have in mind.
This is because the usual regularity-blow-up method is almost always applied successfully to `non-expanding' separable graphs, which one embeds by using the blow-up lemma on a series of small pieces.
The disadvantage is that there does not seem to be a reason why the separability condition should be necessary.
[Simplified transversal blow-up lemma]
Let 0 < 1/n โช,ฮผโช d,,1/โค 1.
Let be a set of at least n colours and let G = (G_c: c โ) be a collection of bipartite graphs with the same vertex partition V_1,V_2,
where n โค |V_1|โค|V_2|โค n/, such that
* for all V_i' โ V_i for i=1,2 and ' โ
with |V_i'|โฅ|V_i| for i=1,2 and |'| โฅ ||, we have that
โ_c โ'e_G_c(V_1',V_2') โฅ d|'||V_1'||V_2'|;
* for i=1,2 and every v โ V_i we have โ_c โd_G_c(v) โฅ d||n
and for every c โ, we have e(G_c) โฅ dn^2.
Let H be a ฮผ-separable bipartite graph with parts of size |V_1|,|V_2|, and || edges and maximum degree .
Then G contains a transversal copy of H.
The theorem requires that the numbers of vertices and edges of H be comparable, so it does not apply to very sparse H.
If one adds edges to H to obtain a suitably denser graph H', and duplicates colours until there are e(H') colours, the copy of H' produced by the theorem will not necessarily contain a transversal copy of H.
The general version of the theorem, Theoremย <ref>, applies to a graph collection whose graphs have common parts V_1,โฆ,V_r, and a graph R with V(R)=[r] such that the bipartite condition in the theorem holds between all ij โ E(R), and each ij has a dedicated set of colours _ij.
ยง.ยง.ยง Rainbow blow-up lemmas
There are by now many blow-up lemmas for various settings which have been applied to many embedding problems.
Of particular relevance to this paper are rainbow blow-up lemmas which apply to a single graph whose edges are coloured.
We say that an edge-coloured graph G is k-bounded if no colour appears on more than k edges, and G is locally k-bounded if each vertex is incident to at most k edges of the same colour.
Glock and Joosย <cit.> proved a rainbow blow-up lemma for o(n)-bounded edge-colourings, which allows one to find a rainbow embedding of a given bounded degree graph H. Here, the number of colours is many times larger than e(H). Later, Ehard, Glock, and Joosย <cit.> proved a similar lemma for locally O(1)-bounded colourings, which allows the number of colours to be (1+o(1))e(H).
Therefore it does not seem possible to use such blow-up lemmas for transversal embedding problems where we require exactly e(H) colours.
ยง.ยง Applications to embedding in uniformly dense hypergraphs
In this section we revisit the hypergraph perspective and discuss the consequences of our work to embeddings in 3-uniform hypergraphs.
`Weak regularity' is the straightforward generalisation of Szemerรฉdi regularity from (2-uniform hyper)graphs to k-uniform hypergraphs (hereafter referred to as k-graphs), and the `weak regularity lemma' was proved by Chungย <cit.> in much the same way as the originalย <cit.>. However, this lemma is not powerful enough to prove many of the analogues of graph results which one would like. In particular, there is no general counting lemma which guarantees that the number of copies of a given small graph F in a regularity partition is similar to what one would expect if pairs were replaced by uniform random hyperedges, with the same density. Thus the `strong regularity lemma' was developed, which uses, as the name suggests, a more complicated and much stronger notion of regularity, and does have an associated counting lemma; this lemma is much more applicable than its weaker counterpart.
It was shown by Conlon, Hร n, Person and Schachtย <cit.> and independently Kohayakawa, Nagle, Rรถdl and Schachtย <cit.> that weak regularity is in fact strong enough to give a counting lemma for linear hypergraphs, which have the property that |e โฉ f| โค 1 for all distinct hyperedges e,f.
As a transversal copy of a (simple) graph inside a graph collection G is a linear subhypergraph of the associated 3-graph G^(3), weak regularity is an effective tool for transversal embedding problems.
And conversely, the tools developed in this paper are useful for embedding 3-graphs.
We generalise the definition we gave in Sectionย <ref> for graphs.
Let d,ฮท>0 and let k โฅ 2 be an integer.
A k-graph is (d,ฮท)-dense if for all subsets U_1,โฆ,U_k โ V, we have
e(U_1,โฆ,U_k) โฅ d|U_1|โฆ |U_k| - ฮท n^k,
where e(U_1,โฆ,U_k) is the number of k-tuples
(x_1,โฆ,x_k) โ U_1รโฆร U_k
for which {x_1,โฆ,x_k} is a hyperedge.
We informally refer to a k-graph G on n vertices that is (d,ฮท)-dense for parameters 0<1/n โชฮทโช d โค 1 as uniformly dense.
If there is โซฮท for which additionally d_G(v) โฅ n^k-1 for all v โ V(G), then we say that G is super uniformly dense.
This is a type of quasirandomness, since a random k-graph of density at least d is (d,o(1))-dense with high probability. Several papers studying uniform density use the term `quasirandom' instead.
Following a suggestion of Erdลs and Sรณsย <cit.>, a systematic treatment of extremal problems in uniformly dense hypergraphs has been started by Rรถdl, Reiher and Schacht, in part due to the great difficulty of such problems in general hypergraphs.
Inย <cit.>, they fully answered the zero Turรกn density question for uniformly dense 3-graphs, i.e.ย for which F is the following true? For all d>0, there exists ฮท>0 such that every sufficiently large (d,ฮท)-dense 3-graph contains F as a subhypergraph.
They showed that these F are precisely those with the following property. There is an ordering v_1,โฆ,v_r of V(F) and a colouring of the set of pairs of vertices contained in edges by red, blue and green, so that whenever v_iv_jv_k โ E(F) with i<j<k, we have that v_iv_j is red, v_iv_k is blue and v_jv_k is green.
This set of F includes linear F and 3-partite F (Erdลsย <cit.> proved that a k-graph F has Turรกn density 0 if and only if F is k-partite), but is much richer than this: for example, the hypergraph obtained by removing one edge from the tight cycle on 5 vertices is such a hypergraph.
The simple argument given in Sectionย <ref> shows that for k=2 and >0, a sufficiently large uniformly dense 2-graph contains a copy (in fact many copies) of K_.
In contrast, for k โฅ 3 there are very simple k-graphs F which require a fairly large density d to appear in any uniformly dense k-graph.
Indeed, there is a (1/4,o(1))-dense 3-graph in which K_4^{(3)-}, the (unique) 3-graph with 4 vertices and 3 edges, does not appear. (The example is closely related to the one given in Section <ref>.)
In this paper, we are interested in which spanning hypergraphs appear in any super uniformly dense hypergraph.
Note that we cannot remove `super' since uniform density does not preclude the existence of isolated vertices.
Let H = (H_ℓ : ℓ ∈ ℕ) be a collection of k-graphs where |V(H_ℓ)| =: n_ℓ → ∞.
For which H is the following true?
For all d,δ>0, there exist η>0 and ℓ_0 ∈ ℕ such that for all integers ℓ>ℓ_0, every (d,η)-dense k-graph G on n_ℓ vertices with d_G(v) ≥ δ n_ℓ^{k-1} for all v ∈ V(G) contains H_ℓ as a subhypergraph.
For k=2, the blow-up lemma of Komlรณs, Sรกrkรถzy and Szemerรฉdiย <cit.> implies that every uniformly dense graph contains as a subgraph any given spanning graph H with bounded degree and sublinear bandwidth (as observed by Glock and Joos, see Theoremย 9.3 inย <cit.>).
For general uniformities k, Lenz and Mubayiย <cit.> proved that uniformly dense k-graphs contain an F-factor for any given fixed size linear F. In fact they found non-linear F for which uniform density is sufficient, and `almost-linear' F for which it is not.
Ding, Han, Sun, Wang and Zhouย <cit.> completed this work for k=3 by characterising those F for which uniform density guarantees an F-factor,
and for general k they also obtained a characterisation for k-partite F.
Lenz, Mycroft and Mubayiย <cit.> showed that uniform density guarantees a loose cycle.
To the best of our knowledge, there are no further results on embedding spanning F in uniformly dense hypergraphs of arbitrarily small positive density.
Our next main result generalises the result ofย <cit.> for k=3 by providing a new family of spanning H which can be found in super uniformly dense 3-graphs.
An expanded graph is obtained from a 2-graph by replacing every edge e=xy by some 3-edges xyc_1,โฆ,xyc_t_e, where all the new vertices c_1,โฆ,c_t_e are distinct, and the number t_e of new edges/vertices for the edge e depends on e. If every t_e is equal to t then we call this a t-expansion. For example, a loose 3-uniform cycle is an expanded cycle where each edge is replaced by one expanded edge, that is, a 1-expansion of a cycle.
The 1-expansion of a (simple) graph H is linear and has |V(H)|+|E(H)| vertices.
For all ,d,>0 there are ฮท,ฮผ>0 and n_0 โN such that the following holds for all integers n โฅ n_0.
Let G be a (d,ฮท)-dense
3-graph on n vertices with d_G(v) โฅ n^2 for all v โ V(G).
Then G contains a copy of the 1-expansion of any given ฮผ-separable graph H with (H) โค and |V(H)|+|E(H)|โค n.
Since a large 2-regular graph (a union of vertex-disjoint cycles) is o(1)-separable, the theorem implies that any super uniformly dense 3-graph G contains as a subhypergraph any 3-graph consisting of vertex-disjoint loose cycles.
Next, we reformulate Theoremย <ref> as a blow-up lemma for 3-graphs, which may be of independent interest.
[Simplified weak hypergraph blow-up lemma]
For all ,d,>0 there are ,ฮผ>0 and n_0 โN such that the following holds for all integers n โฅ n_0.
Let G be a 3-partite 3-graph with parts
V_1,V_2,V_3 where every |V_i|=:n_i
and n โค n_1 โค n_2 โค n/ and n_3 โฅ n such that G is a weakly (,d)-half-superregular triple, that is,
(i) for all i โ [3] and V_i' โ V_i with |V_i'| โฅ|V_i|, we have
e_G(V_1',V_2',V_3') โฅ d|V_1'||V_2'||V_3'|,
(ii)d_G(v) โฅ dn^2 for all v โ V(G).
Let H be a ฮผ-separable bipartite 2-graph with parts of size n_1,n_2, with n_3 edges and maximum degree .
Then G contains a copy of the 1-expansion of H, where the images of the new vertices lie in V_3.
Weak superregularity and uniform density of 3-graphs and graph collections are closely connected.
Observe that if G is (d+โ(ฮท),ฮท)-dense, where ฮทโซ,, and V_1,V_2,V_3 is any partition with sizes as in the theorem, then the 3-partite subhypergraph G' of G induced on these parts satisfiesย (i). If d_G(v) โฅ n^2 for all v โ V, and in addition the vertex partition is uniformly random, then with
probability tending to 1 as n โโ, G' is weakly (,min{d,/2})-half-superregular, say.
Furthermore, if one takes any partition V(G)=V โช into parts which are not too small, then the graph collection G consisting of graphs G_c for c โ with V(G_c)=V and E(G_c)={xy: x,y โ V and xyc โ E(G)} is (d,ฮท)-dense. Again, if d_G(v) โฅ n^2, a random such partition gives rise to a super uniformly dense graph collection.
As an example, any 3-graph G satisfying the hypotheses of the theorem with n_1=n_2 and n_3=n_1+n_2 contains a copy of a loose Hamilton cycle.
The full statement of a 3-graph version of our transversal blow-up lemma (Theoremย <ref>) is stated in Theoremย <ref>.
A much more general hypergraph blow-up lemma was established by Keevash <cit.> which applies to k-graphs satisfying a strong regularity property, which is designed to be used with the strong regularity lemma; this blow-up lemma applies to embed any bounded degree k-graph H.
Nevertheless, it would be interesting to determine which graphs can be embedded into a weakly superregular triple. We discuss this further in Sectionย <ref>.
There are many other notions of quasirandomness of hypergraphs, often related to the number of edges in/between subsets, that have been studied in the literature. We refer the reader toย <cit.> for a detailed comparison of such notions; it turns out that equivalent notions of quasirandomness in graphs have inequivalent hypergraph analogues. The embedding of spanning structures has been studied in these various settings, for example tight Hamilton cycles in 3-graphs with a stronger quasirandomness condition than the one in this paper, inย <cit.>,
and graphs of sublinear bandwidth in `locally dense' 2-graphs, which is weaker than uniformly dense, inย <cit.>, and in `inseparable' 2-graphs inย <cit.>.
While in this paper we ask which H are subhypergraphs of any quasirandom hypergraph G of positive density (where for us, `quasirandom' means `uniformly dense'),
the results above mainly concern the generalisation of this question: given H, what minimum degree in a quasirandom hypergraph G is sufficient to imply the existence of H?
There are various notions of degree in hypergraphs and the results mentioned in this paragraph consider several of them.
Quasirandomness conditions that apply to sparse (hyper)graphs have also been well studied. Recently, Hร n, Han and Morrisย <cit.> extended the results ofย <cit.> to the sparse regime.
ยง.ยง The statement of our main result
The full statement of our transversal blow-up lemma is as follows.
Its proof will follow from Theoremย <ref> which is a very similar statement written in the notation defined in Sectionย <ref>.
[Transversal blow-up lemma]
For all ฮฝ,d,,,r>0 where r โฅ 2 is an integer, there exist ,ฮผ,>0 and m_0 โN such that the following holds for all integers m โฅ m_0.
Suppose that G=(G_c:c โ) is a graph collection with the following properties.
* There is a graph R with vertex set [r] and a partition = โ_e โ E(R)_e where |_e| โฅ m for all e โ E(R);
* for all ij โ E(R) and c โ_ij, G_c is bipartite with parts V_i,V_j,
and V is a vertex set of size n with partition V=V_1 โชโฆโช V_r, where m โค |V_i| โค m/ for each i โ [r];
* for all ij โ E(R),
* for all V_h' โ V_h for h=i,j and '_ijโ_ij
with |V_h'|โฅ|V_h| for h=i,j and |'_ij| โฅ |_ij|, we have that
โ_c โ'_ije_G_c(V_i',V_j') โฅ d|'_ij||V_i'||V_j'|;
* for h=i,j and every v โ V_h we have โ_c โ_ijd_G_c(v) โฅ d|_ij|m
and for every c โ_ij, we have e(G_c) โฅ dm^2.
Suppose that H is a graph with the following properties.
* (H) โค;
* H is ฮผ-separable;
* H has vertex partition A_1 โชโฆโช A_r such that |A_i|=|V_i| for all i โ [r] and for every xy โ E(H) there is ij โ E(R) such that x โ A_i and y โ A_j;
* e(H[A_i,A_j])=|_ij| for all ij โ E(R);
* for each i โ [r], there is a set U_i โ A_i with |U_i| โค m and for each x โ U_i, a set T_x โ V_i with |T_x| โฅฮฝ m.
Then there is a transversal embedding of H inside G such that for every i โ [r], every x โ U_i is embedded inside T_x.
ยง.ยง Notation and organisation
Notation. For reals x,a,b, we write x = a ± b if a-b ≤ x ≤ a+b. For any two constants α,β ∈ (0,1), we write α ≪ β if there exists a function α_0=α_0(β) such that the subsequent arguments hold for all 0<α≤α_0. When we write multiple constants in a hierarchy, we mean that they are chosen from right to left. For any positive integer a, let [a]:={1,2,…,a}.
Given a set X and a positive integer b, write \binom{X}{b} to denote the set of b-element subsets of X.
Let k ≥ 2 be an integer and let G=(V,E) be any k-graph, so each edge consists of k vertices. We use V(G):=V to denote its vertex set and E(G):=E to denote its edge set. Let v(G):=|V(G)| and e(G):=|E(G)| be their sizes. For any vertex v ∈ V(G), let N_G(v) be the neighbourhood of v, that is, the set of (k-1)-sets S ⊆ V(G) for which S ∪ {v} is an edge, and let d_G(v):=|N_G(v)| be the degree of v. Sometimes we use d(v) for short if G is clear from the context.
We write δ(G) := min_{v ∈ V(G)} d_G(v) and Δ(G) := max_{v ∈ V(G)} d_G(v) for the minimum and maximum degree of G.
For each vertex v ∈ V(G) and subset S ⊆ \binom{V(G)}{k-1}, let N_G(v,S):=N_G(v) ∩ S and d_G(v,S):=|N_G(v,S)|.
For any vertex set U ⊆ V(G), let G[U] be the induced hypergraph of G on U, i.e. the hypergraph with vertex set U and all the edges of G with vertices only in U. Let G-U:=G[V(G) ∖ U].
For a subhypergraph H of G, let G ∖ H be the k-graph with vertex set V(G) and edge set E(G) ∖ E(H).
For any (not necessarily disjoint) vertex sets U_1,โฆ,U_k โ V(G), we write G[U_1,โฆ,U_k] for the k-graph with vertex set U_1 โชโฆโช U_k and edge set E_G(U_1,โฆ,U_k), which is the set of edges of G with one vertex in U_i for each i โ [k].
Let e_G(U_1,โฆ,U_k) be the number of k-tuples (x_1,โฆ,x_k) โ U_1รโฆร U_k for which {x_1,โฆ,x_k}โ E_G(U_1,โฆ,U_k)
(note that when these sets are not disjoint, edges may be counted more than once).
We say that a statement about a graph on n vertices holds with high probability
if it holds with probability tending to 1 as nโโ.
We use script letters (e.g.ย C) to denote sets of colours and bold letters (e.g.ย G) to denote graph collections.
It will be convenient to represent a graph collection in three equivalent ways:
* a collection G=(G_c : c ∈ C) of graphs on the same vertex set V. We call C the colour set of G;
* an edge-coloured graph G on vertex set V with edge set ⋃_{c ∈ C} G_c, where xy has a multiset of colours consisting of those c ∈ C for which xy ∈ G_c.
We say that G is the edge-coloured graph of G (but rarely use this representation);
* a 3-graph G^(3) on vertex set V ∪ C with edges xyc whenever xy ∈ G_c. We say that G^(3) is the 3-graph of G.
Given two 2-graphs H and G, a graph homomorphism (from H to G) is a map ฯ: V(H) โ V(G) such that ฯ(x)ฯ(y) โ E(G) whenever xy โ E(H). An injective graph homomorphism is an embedding (of H into G).
Given a graph H and a graph collection G=(G_c: c โ) on vertex set V, a transversal embedding (of H into G) is a pair ฯ:V(H) โ V and :E(H) โ of injective maps such that ฯ(x)ฯ(y) โ E(G_(xy)) whenever xy โ E(H).
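To make the three representations and the notion of a transversal embedding concrete, here is a small illustrative sketch (ours; the data layout and names such as to_3graph and is_transversal_embedding are our own choices, not notation from the paper). It converts a collection into its 3-graph and checks whether a proposed pair of maps, a vertex injection and a colour injection, forms a transversal embedding.

```python
def to_3graph(collection):
    """Graph collection {c: set of frozenset edges on V} -> 3-graph as a set of
    frozensets {x, y, c}, following the correspondence xy in G_c <-> xyc in G^(3)."""
    return {frozenset({x, y, c})
            for c, edges in collection.items() for (x, y) in map(tuple, edges)}

def is_transversal_embedding(H_edges, collection, phi, col):
    """phi: V(H) -> V and col: E(H) -> colours.  Checks both maps are injective and
    that phi(x)phi(y) lies in G_{col(xy)} for every edge xy of H."""
    if len(set(phi.values())) != len(phi):   # vertex map injective
        return False
    if len(set(col.values())) != len(col):   # colour map injective
        return False
    return all(frozenset({phi[x], phi[y]}) in collection[col[frozenset({x, y})]]
               for (x, y) in H_edges)

# toy example: a path x-y-z embedded using two distinct colours
collection = {'red':  {frozenset({1, 2})},
              'blue': {frozenset({2, 3}), frozenset({1, 3})}}
H_edges = [('x', 'y'), ('y', 'z')]
phi = {'x': 1, 'y': 2, 'z': 3}
col = {frozenset({'x', 'y'}): 'red', frozenset({'y', 'z'}): 'blue'}
print(is_transversal_embedding(H_edges, collection, phi, col))  # True
print(len(to_3graph(collection)))                               # 3 hyperedges
```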
Organisation.
In Sectionย <ref>, we define regularity for graph collections, state a regularity lemma for graph collections, define the `template' graph collections to which our theory applies, and prove some of the basic tools one uses when applying the regularity method, such as a slicing lemma and a degree inheritance lemma.
Sectionย <ref> contains some embedding lemmas which are the main ingredients of the proof of our transversal blow-up lemma, Theoremย <ref>, but which are also useful tools when applying the method.
In Sectionย <ref>, we prove Theoremย <ref>
(which readily implies Theoremย <ref>)
and in Sectionย <ref>, we prove Theoremsย <ref> andย <ref> on embeddings in uniformly dense graph collections and 3-graphs.
Sectionย <ref> contains some concluding remarks.
ยง.ยง Concentration of probability
We will use the following version of Chernoff's bound:
Let X be a binomially distributed random variable and let 0<ε<3/2. Then
ℙ[|X-E(X)| ≥ εE(X)] ≤ 2e^-ε^2E(X)/3.
We will frequently use the following easy corollary of Chernoff's bound for random partitions:
Let α_1,…,α_k be parameters and let C be a constant, where ε < α_i < 1 for each 1 ≤ i ≤ k and ∑_{i=1}^k α_i = 1. Suppose that 1/n ≪ ε ≪ δ and ℱ={e_1,…,e_t} is a family of subsets of [n] where t ≤ n^C and |e_i|=d_i n ≥ δn for each 1 ≤ i ≤ t. Then there exists a partition of [n] into k subsets V_1,…,V_k such that (i) |V_i|=(1±ε)α_i n; (ii) for every set
e_i ∈ ℱ and every part V_j, we have |e_i ∩ V_j|=(1±ε)d_i|V_j|.
We construct k random sets V_1,…,V_k from [n] as follows: independently for each x ∈ [n], place x into exactly one of V_1,…,V_k, choosing V_i with probability α_i for each 1 ≤ i ≤ k. Thus E[|V_i|]=α_i n for each i, and similarly E[|e_i ∩ V_j|]=α_j d_i n for each i,j. Therefore, by Chernoff's bound, with probability at most 2ke^-ε^2α_in/3+2n^Ce^-ε^2α_id_in/3=o(1) we fail to have |V_j|=(1±ε)α_jn and |e_i ∩ V_j|=(1±ε)α_jd_in for every i,j, and thus the lemma follows.
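The proof is thus a single application of Chernoff's bound plus a union bound. The sketch below (ours; the parameter values are made up) mirrors it computationally: each element of [n] is placed in V_i independently with probability α_i, and one then checks that all part sizes and all intersections with the sets of ℱ are within a (1±ε) factor of their expectations.

```python
import random

def random_partition(n, alphas, family, eps, seed=0):
    """Place each x in [n] into part i independently with probability alphas[i], as in
    the proof, then report whether every part size and every |e ∩ V_j| is within a
    (1 ± eps) factor of its expectation."""
    rng = random.Random(seed)
    k = len(alphas)
    parts = [set() for _ in range(k)]
    for x in range(n):
        r, acc = rng.random(), 0.0
        for i, a in enumerate(alphas):
            acc += a
            if r < acc or i == k - 1:   # last part catches floating-point slack
                parts[i].add(x)
                break
    ok = all(abs(len(parts[i]) - alphas[i] * n) <= eps * alphas[i] * n for i in range(k))
    for e in family:
        d = len(e) / n
        ok = ok and all(abs(len(e & parts[j]) - d * len(parts[j])) <= eps * d * len(parts[j])
                        for j in range(k))
    return ok, [len(p) for p in parts]

n = 20000
family = [set(range(0, n, 2)), set(range(n // 3))]   # two dense subsets of [n]
print(random_partition(n, [0.5, 0.3, 0.2], family, eps=0.05))
```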
A hypergeometric distribution X with parameters (N,K,m) is defined as follows: let V be an N-set and U be a fixed K-subset of V, take an m-subset A of V uniformly at random and let X:=|A ∩ U|.
We have the following tail bound of hypergeometric distribution <cit.>.
Let X be a hypergeometric distribution with parameters (N,K,m) and let 0<μ<ε where ε=K/N. Then
ℙ(X ≤ (ε-μ)m) ≤ e^-2μ^2m.
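As a quick sanity check (ours; the concrete parameters are arbitrary), one can compare the empirical tail of a hypergeometric sample against the bound e^-2μ^2m.

```python
import math, random

def hypergeom_tail(N, K, m, mu, trials=5000, seed=0):
    """Empirical P(X <= (eps - mu) m) for hypergeometric X with parameters (N, K, m),
    where eps = K/N, together with the stated bound e^(-2 mu^2 m)."""
    rng = random.Random(seed)
    eps = K / N
    pop = [1] * K + [0] * (N - K)            # 1's mark the fixed K-subset U
    hits = sum(sum(rng.sample(pop, m)) <= (eps - mu) * m for _ in range(trials))
    return hits / trials, math.exp(-2 * mu * mu * m)

print(hypergeom_tail(N=1000, K=300, m=100, mu=0.1))  # empirical tail sits below the bound
```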
ยง THE REGULARITY-BLOW-UP METHOD FOR TRANSVERSALS
ยง.ยง Regularity
Our notion of regularity for a graph collection G is essentially weak regularity for the 3-graph G^(3) of G. We now define several related notions of weak regularity for k-graphs, which we will use for k=2,3. We usually omit weak and weakly since we will not use any stronger type of regularity.
Let G be a k-partite k-graph with classes V_1,โฆ,V_k,
which we also denote as (V_1,โฆ,V_k)_G.
We define the density of G to be
d_G(V_1,โฆ,V_k) := e(G)/|V_1|โฆ|V_k|.
Given ε>0, we say that (V_1,…,V_k)_G is
* (weakly) ε-regular if for every subhypergraph (V_1',…,V_k')_G with V_i' ⊆ V_i and |V_i'| ≥ ε|V_i| for all i ∈ [k] we have
|d_G(V_1',…,V_k') - d_G(V_1,…,V_k)| < ε;
* (weakly) (ε,d)-regular if additionally d_G(V_1,…,V_k) ≥ d;
* (weakly) (ε,d)-superregular if it is (weakly) (ε,d)-regular and additionally d_G(x) ≥ d(|V_1|⋯|V_k|)/|V_i| for all i ∈ [k] and x ∈ V_i;
* (weakly) (ε,d)-half-superregular if for every subhypergraph (V_1',…,V_k')_G with V_i' ⊆ V_i and |V_i'| ≥ ε|V_i| for all i ∈ [k] we have d_G(V_1',…,V_k') ≥ d and d_G(x) ≥ d(|V_1|⋯|V_k|)/|V_i| for all i ∈ [k] and x ∈ V_i.
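To fix intuition, the following brute-force sketch (ours; feasible only for tiny examples, and the names density and is_weakly_regular are not notation from the paper) computes d_G(V_1,…,V_k) and tests weak ε-regularity directly from the definition above.

```python
import math
from itertools import combinations, product

def density(edges, parts):
    """d_G(V_1,...,V_k): ordered k-tuples of distinct vertices forming an edge,
    divided by |V_1|...|V_k|."""
    hits = sum(1 for tup in product(*parts)
               if len(set(tup)) == len(tup) and frozenset(tup) in edges)
    return hits / math.prod(len(P) for P in parts)

def is_weakly_regular(edges, parts, eps):
    """Brute-force check of eps-regularity: |d_G(V') - d_G(V)| < eps over all V_i'
    with |V_i'| >= eps*|V_i|.  Exponential in the part sizes -- illustration only."""
    d0 = density(edges, parts)
    subs = []
    for P in parts:
        lo = max(1, math.ceil(eps * len(P)))
        subs.append([list(S) for r in range(lo, len(P) + 1) for S in combinations(P, r)])
    return all(abs(density(edges, choice) - d0) < eps for choice in product(*subs))

# tiny 2-partite example: the complete bipartite graph between {0,1,2} and {3,4,5}
V1, V2 = [0, 1, 2], [3, 4, 5]
E = {frozenset({a, b}) for a in V1 for b in V2}
print(density(E, [V1, V2]))                     # 1.0
print(is_weakly_regular(E, [V1, V2], eps=0.4))  # True: every pair of subsets has density 1
```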
Our main results will use `half-superregular' given its close connection to uniform density. The 2-graph blow-up lemmaย <cit.> uses a similar notion, but nowadays `superregular' is generally used instead. This makes little material difference:
by definition, an (,d)-superregular hypergraph is (,d-)-half-superregular,
and as shown by Rรถdl and Ruciลski in the course of their alternative proof of the blow-up lemmaย <cit.>, a half-superregular 2-graph contains a spanning superregular subgraph, with weaker parameters. This proof generalises easily to k-graphs,
and therefore we postpone it to the appendix.
Let 0<1/nโชโช' โช d,,1/k โค 1 where k โฅ 2 is an integer. Suppose that G is a k-partite k-graph with parts V_1,โฆ,V_k where nโค |V_i|โค n/ for all i โ [k]. If G is (,d)-half-superregular, then
G contains a spanning subhypergraph G' that is (',d^2/2)-superregular.
Let G be a k-partite k-graph defined on V_1,โฆ,V_k where nโค |V_i|โค n/. We say G is (d,ฮท)-dense restricted to parts if for any subsets X_iโ V_i where |X_i|โฅ |V_i| for all iโ [k], we have e_G(X_1,โฆ,X_k)โฅ d|X_1|โฆ|X_k|-ฮท n^k.
Let 0<1/n โชฮทโช' โช d'โช d, , ,1/k โค 1 where k โฅ 2 is an integer. Suppose that G is a k-partite k-graph defined on V_1,โฆ,V_k where nโค |V_i|โค n/. If G is (d,ฮท)-dense restricted to parts, and each vertex in V_i has degree at least (|V_1|โฆ|V_k|)/|V_i| for any iโ[k], then
* G is (',d_0)-half-superregular where d_0 := min{d/2,},
* G contains a spanning subhypergraph G' that is (',d')-superregular.
Choose a new parameter and increase n if necessary so that 1/n โชโช' and the conclusion of Lemmaย <ref> holds with n,,',d_0 := min{d/2,}, playing the roles of n,,',d,.
By increasing n_0 and decreasing ฮท if necessary, we may assume that 1/n โชฮทโช.
Let X_iโ V_i be any subsets such that |X_i|โฅ |V_i| for iโ [k]. By definition of (d,ฮท)-dense, we have e_G(X_1,โฆ,X_k)โฅ d|X_1|โฆ|X_k|-ฮท n^k โฅ (d-)|X_1|โฆ|X_k| since ฮทโช. This implies G is (,d/2)-regular. Combined with the minimum degree condition, we see that G is also (,d_0)-half-superregular. Lemma <ref> and our choice of parameters implies that G contains a spanning (',d')-superregular subgraph G'.
Regularity for a bipartite graph collection G is defined in terms of the 3-graph G^(3) of G.
Suppose that G is a graph collection with colour set , where each G_c is bipartite with parts V_1,V_2.
Let G^(3) be the 3-graph of G.
We say that
* G is (,d)-regular if G^(3) is (,d)-regular. That is, for all V_i' โ V_i with |V_i'| โฅ|V_i| for i=1,2 and ' โ with |'| โฅ||, we have
|โ_c โ'e_G_c(V_1',V_2')/|'||V_1'||V_2'|-โ_c โe_G_c(V_1,V_2)/|||V_1||V_2|| < .
* G is (,d)-semi-superregular if it is (,d)-regular
and d_G^(3)(v) = โ_c โd_G_c(v) โฅ d|V_3-i||| for all i โ [2] and v โ V_i.
* G is (,d)-superregular if G^(3) is (,d)-superregular. That is, it is (,d)-semi-superregular and d_G^(3)(c) = e(G_c) โฅ d|V_1||V_2| for all c โ.
* G is (,d)-half-superregular if G^(3) is (,d)-half-superregular. That is, for all V_i' โ V_i with |V_i'| โฅ|V_i| for i=1,2 and ' โ with |'| โฅ||, we have โ_c โ'e_G_c(V_1',V_2') โฅ d|'||V_1'||V_2'| and โ_c โd_G_c(v) โฅ d|V_3-i||| for all i=1,2 and v โ V_i, and e(G_c) โฅ d|V_1||V_2| for all c โ.
Note that, if every G_c with c โ is the same, then G is (,d)-regular if and only if G_c is (,d)-regular; and G is (,d)-superregular if and only if G_c is (,d)-superregular.
The superregularity of G does not imply a minimum degree condition for any graph G_c in the collection, and indeed they could all have isolated vertices.
The following simple lemma shows that, in an (,d)-regular graph collection, most vertices โtypical vertices โ have large total degree (the sum of degrees over all colours) and typical colours have many edges.
Let 0<โช d โค 1, and let G be an (,d)-regular graph collection with colour set , where each G_c is bipartite with parts V_1,V_2. Then the following hold:
(i) for every i โ [2] and all but at most |V_i| vertices v โ V_i we have โ_c โd_G_c(v) โฅ (d-)|V_3-i|||;
(ii) for all but at most || colours c โ we have e(G_c) โฅ (d-)|V_1||V_2|.
Forย (i), let V_1' be the set of vertices in V_1 without this property and suppose for a contradiction that |V_1'|>|V_1|. Then (,d)-regularity implies that
d_G^(3)(V_1',V_2,)>d-, but the definition of V_1' implies that e_G^(3)(V_1',V_2,)โค|V_1'|(d-)|V_2||| and hence d_G^(3)(V_1',V_2,) โค d-, a contradiction.
By symmetry, the rest ofย (i) andย (ii) are identical.
The following lemma is a standard tool, written in our graph collection notation.
Let 0<1/n โชโช' โชโช d โค 1, and let G be a graph collection with colour set , where each G_c is bipartite with parts V_1,V_2 each of size at least n, and let V_i' โ V_i for i โ [2] and ' โ. Let G'=(G_c[V_1',V_2']:c โ').
(i) Suppose that G is (,d)-regular.
Suppose |V_i'| โฅ|V_i| for i โ [2] and |'| โฅ||. Then G' is (/,d/2)-regular.
(ii) Suppose that G is (,d)-superregular. Suppose |V_i'| โฅ (1-)|V_i| for i โ [2] and |'|>(1-)||. Then G' is (2,d/2)-superregular.
(iii) Suppose that G is (,d)-superregular. Given n_1โฅ |V_1|,n_2 โฅ |V_2| and h โฅ ||, if V_i'โ V_i is a uniform random subset of size n_i for each i=1,2
and 'โ is a uniform random subset of size h, then with high probability,
G' is (/,d^2/16)-superregular.
For (i), firstly we have d_G^(3)(V_1^',V_2^',')โฅ d-โฅ d/2. Secondly, given any subset V_i^''โ V_i' with |V_i^''|โฅ|V_i^'|/โฅ |V_i| where i=1,2 and any subset ^''โ^' with |^''|โฅ |^'|/โฅ ||, by regularity we have
|d_G^(3)(V_1^'',V_2^'',^'')-d_G^(3)(V_1^',V_2^',^')|
โค
|d_G^(3)(V_1^'',V_2^'',^'')-d_G^(3)(V_1,V_2,)|
+|d_G^(3)(V_1,V_2,)-d_G^(3)(V_1^',V_2^',^')|โค 2ฮตโค/.
For (ii), note that G' is (/(1-),d/2)-regular and thus (2,d/2)-regular by (i). Let G_c':=G_c[V_1',V_2'] for each c โ. For each i=1,2 and vโ V_i', we also have โ_cโ'd_G_c'(v)โฅ d|V_3-i|||-2|V_3-i|||โฅ d|V_3-i'||'|/2. Combining this with e(G_c')โฅ d|V_1||V_2|-2|V_1||V_2|โฅ d|V_1||V_2|/2 for each cโ', we get that G' is (2,d/2)-superregular.
For (iii), similarly, we always have that G' is (/,d/2)-regular byย (i). Let G_c':=G_c[V_1',V_2'] for all c โ. We only need to show that with high probability, for each i=1,2 and vโ V_i', we have โ_cโ'd_G_c'(v)โฅ d^2|V_3-i'||'|/16 and for each cโ', we have e(G_c')โฅ d^2|V_1||V_2|/16. Suppose that ' has been chosen. Since G is (,d)-superregular, for each cโ', we have e(G_c)โฅ d|V_1||V_2|. Let V_1^c:={vโ V_1 : d_G_c(v)โฅ d|V_2|/2}. Then we get d|V_1||V_2|โค |V_1^c||V_2|+(|V_1|-|V_1^c|)d|V_2|/2 and thus |V_1^c|โฅ d|V_1|/(2-d)โฅ d|V_1|/2.
A Chernoff bound implies that, with probability 1-e^-(n), we have |V_1'โฉ V_1^c|โฅ d n_1/4 and |V_2'โฉ N_G_c(v)|โฅ d n_2/4 for each vโ V_1^c.
Therefore, by a union bound, with high probability, we have e(G_c')โฅ d^2n_1n_2/16 for each cโ'. Similarly, with high probability, for each i=1,2 and vโ V_i', we have โ_cโ'd_G_c'(v)โฅ d^2n_3-ih/16. It follows that G' is (/,d^2/16)-superregular with high probability.
ยง.ยง The regularity lemma for graph collections
We use the following version of the regularity lemma for graph collections, which is obtained by applying the degree version of the weak regularity lemma (Lemmaย <ref>) to the 3-graph G^(3) of G and cleaning up the clusters so that vertex clusters and colour clusters are separate.
We postpone the derivation to the appendix.
For all integers L_0 โฅ 1 and every ,>0, there is an n_0=n_0(,,L_0) such that
for every d โ [0,1) and
every graph collection G=(G_c: c โ) on vertex set V of size n โฅ n_0 with n โค || โค n/, there exists a partition of V into V_0,V_1,โฆ,V_L, of into _0,_1,โฆ,_M and a spanning subgraph G'_c of G_c for each c โ such
that the following properties hold:
* L_0 โค L,M โค n_0 and |V_0|+|_0| โค n;
* |V_1|=โฆ=|V_L|=|_1|=โฆ = |_M| =: m;
* โ_c โd_G'_c(v) > โ_c โd_G_c(v)-(3d/^2+)n^2 for all v โ V and
e(G'_c) > e(G_c)-(3d/^2+)n^2 for all c โ;
* if, for c โ, the graph G'_c has an edge with both vertices in a single cluster V_i for some i โ[L], then c โ_0;
* for all triples ({h,i},j) โ[L]2ร [M], we have that either G'_c[V_h,V_i]=โ
for all c โ_j, or
G'_hi,j := (G'_c[V_h,V_i]: c โ_j) is (,d)-regular.
The sets V_i are called vertex clusters and the sets _j are called colour clusters, while V_0 and _0 are the exceptional vertex and colours sets respectively.
Given a graph collection G=(G_c: c โ) on V and parameters >0, d โ [0,1) and L_0 โฅ 1, the reduced graph collectionR=R(,d,L_0), reduced 3-graphR^(3)=R(,d,L_0) and the reduced edge-coloured graphR=R(,d,L_0) of G are defined as follows.
Apply Lemmaย <ref> to G with parameters ,d,L_0
to obtain G' and a partition V_0,โฆ,V_L of V and _0,โฆ,_M of where
V_0, _0 are the exceptional sets and V_1,โฆ,V_L are the vertex clusters
and _1,โฆ,_M are the colour clusters.
Then
R=(R_1,โฆ,R_M) is a graph collection of M graphs each on the same vertex set [L],
where, for ({h,i},j) โ[L]2ร [M], we have hi โ R_j whenever G'_hi,j is (,d)-regular.
Also, R^(3) is the 3-graph of R and R is the reduced edge-coloured graph.
The next lemma (related to Lemma 5.5 inย <cit.>)
states that clusters inherit a minimum degree bound in the reduced graph from G.
Suppose p>0, L_0 โฅ 1 and 0 < 1/n โชโค d โช,,p โค 1.
Let G=(G_c: c โ) be a graph collection on a vertex set V of size n with (G_c) โฅ (p+)n for all c โ and n โค || โค n/.
Let R=R(,d,L_0) be the reduced graph collection of G on L vertices with M graphs.
Then
* for every i โ [L] there are at least (1-d^1/4)M colours j โ [M] for which d_R_j(i) โฅ (p+/2)L;
* for every j โ [M] there are at least (1-d^1/4)L vertices i โ [L] for which d_R_j(i) โฅ (p+/2)L.
To proveย (i), note that for all v โ V V_0 we have
โ_c โd_G'_c-V_0(v) โฅโ_c โd_G_c(v)-(3d/^2+)n^2-||s โฅโ_c โd_G_c(v)-4dn^2/^2.
Let D_v be the collection of colours c in _0 for which d_G'_c-V_0(v) โฅ d_G_c(v)-โ(d)n. Then
โ_c โd_G'_c-V_0(v)
โคโ_c โd_G_c(v)-|D_v|โ(d)n
and therefore |D_v| โค 4dn^2/(^2โ(d)n) โค d^1/3n/2 by dโช,
so |D_v| โฅ || -d^1/3n/2- n โฅ ||-d^1/3n.
We have mM โค || โค mM+ n and mL โค n โค mL+ n.
Thus the number of clusters _j containing at least one colour of D_v is at least
|D_v|/m โฅ M-d^1/3n/m โฅ M-d^1/3L/(1-) โฅ M-2d^1/3M/โฅ (1-d^1/4)M.
Now let i โ [L] and v โ V_i.
For each cluster _j as above, choose an arbitrary colour c_j โ_j โฉD_v.
Then the number of clusters V_h containing some u โ N_G'_c_j(v) is at least
d_G_c_j(v)-โ(d)n/mโฅ(p+-โ(d))n/mโฅ (p+/2)L.
But thenย Lemmaย <ref>(v) implies that i is adjacent to each such V_h in R_j.
So for every i โ [L], d_R_j(i) โฅ (p+/2)L for at least (1-d^1/4)M colours j.
The proof ofย (ii) is similar and we omit it.
Let 0 โช 1/m โชโชโช d,D โค 1 and let F = (V,C,G) be an R-template
with parameters (n,,d,,D). Let t โค m and let
H be a subgraph of R(t) with maximum degree
and let C'_eโ C_e with |C'_e| โฅ t+ m for every e โ E(R).
Then there is a rainbow embedding of H in G where every edge corresponding to e โ E(R)
receives a colour in C'_e.
โฆ
ยง.ยง Templates
We define the notion of a `template', which is essentially a reduced graph in the transversal setting. We will use these as templates for embedding, in the same way that reduced graphs are used for embedding into a single graph.
Let 0 < 1/m โคโค d,โค 1 be parameters and let r โN.
Suppose that
* R is an r-vertex graph, with vertex set [r] unless otherwise specified,
* V={V_1,โฆ,V_r} is a set of r disjoint vertex sets with m โค |V_j| โค m/ for all j โ [r], whose union we denote by V,
* =โ_e โ E(R)_e is a colour set where |_e| โฅ m for all e โ E(R),
* G is a graph collection with colour set where for each c โ, the graph G_c is the union of bipartite graphs G_c^e where G_c^e has parts V_i,V_j, over all e=ij for which c โ_e.
For each e โ E(R), let G^e=(G_c^e: c โ_e).
We say that F=(V,,G) is an R-template with parameters (m,,d,)
if for every e โ E(R), G^e is (,d)-regular.
If we replace regular with semi-superregular, it is a semi-super R-template;
if we replace regular with superregular, it is a super R-template;
if we replace regular with half-superregular, it is a half-super R-template.
If =โ_e โ E(R)_e is a partition, we say that the template is rainbow.
A transversal embedding of a graph H inside F is a copy of H with vertices in V such that for every edge e there is a distinct c โ such that e โ G_c.
That is, there exist injections ฯ: V(H) โ V and : E(H) โ where ฯ(x)ฯ(y) โ G_(xy) for all xy โ E(H).
Note that the partition =โ_e โ E(R)_e is suppressed in the notation.
Given a template F, we explicitly use the notation in the definition unless otherwise specified.
Observe that for an R-template (V,,G) with parameters (m,,d,) and v(R)=r,
rm โค |V| โค rm/.
Given parameters ,',d','>0 with โค 1, ' โฅ, d' โค d and ' โค, any (m,,d,) template is also a ( m,',d',') template.
We will often take subtemplates of templates, meaning that the new vertex clusters, colour clusters and graphs G_c are subsets/subgraphs of the originals.
Some convenient notation for this is as follows: if F = (V,,G) is a template,
' โ and V' = {V_1',โฆ,V_r'} where V_j' โ V_j for all j โ [r], we say that F' = (V',',G') is the subtemplate of F induced by V',' when each _e'=_e โฉ', and (G')^e_c:=G_c[V_i',V_j'] is defined for each c โ'_e.
We also say that F' is obtained by deleting ' and V V'.
The following straightforward lemma shows that removing a small fraction of colours and vertices from a template produces a subtemplate with slightly weaker parameters, which remains super if the original template was super.
Let 0 <1/m โชโช' โชโช d,,1/r,1/k โค 1 where r โฅ 2 is an integer.
Let R be an r-vertex graph and
let F=(V,,G) be an R-template with parameters (m,,d,).
Let ' โ, V_i' โ V_i for all i โ [r]
and let F' = (V',',G') be the subtemplate of F induced by V','.
(i) If every |'_e| โฅ |_e|/k and |V_i| โค |V_i'| โค k |V_i|,
then F' is a template with parameters ( m,/,d/2,/k).
(ii) If every |_e '_e| โค m and |V_i V_i'| โค m,
then F' is a template with parameters (m/2,2,d/2,/2).
Moreover, if F is super, then F' is super.
(iii) Given |V_i| โค n_i โค k|V_i| for all i โ [r] and h_e โฅ|_e|/k for all e โ E(R), if V_i' is a uniform random subset of V_i of size n_i
and '_e is a uniform random subset of _e of size h_e and F is super,
then with high probability, F' is a super template with parameters ( m,/,d^2/16,/k).
(iv) Suppose F is half-super. For all c โ, there is G_c' โ G_c such that, defining G':=(G_c':c โ), the template (V,,G') is super with parameters (m,',d^2/2,).
Forย (i), we have m โค |V_i'| โค m/(/k)
and |_e'| โฅ m/k.
Also, by Lemmaย <ref>(i), for each eโ E(R), (G')^e is (/,d/2)-regular. Thus F' is a template with parameters ( m,/,d/2,/k).
For (ii), showing that F' is a template with the given parameters is similar toย (i). Note that if F is super, then (G')^e is (2,d/2)-superregular for each eโ E(R) by Lemma <ref>(ii) and thus F' is a super R-template with parameters (m/2,2,d/2,/2).
For (iii), if F is super, then G^e is (,d)-superregular for each eโ E(R). Thus by Lemma <ref>(iii), with high probability, (G')^e is (/,d^2/16)-superregular for each eโ E(R). Therefore, byย (i), F' is a super template with parameters ( m,/,d^2/16,/k) with high probability.
Partย (iv) follows immediately from Lemmaย <ref>.
Edges which lie in many graphs G_c are particularly useful for embedding and thus we define the (simple, uncoloured) thick graph consisting of all such edges.
Given >0 and an R-template F=(V,,G), let T^_F be the simple 2-graph
with vertex set V such that xy โ E(T^_F) whenever xy โ G_c for at least |_ij| colours c โ_ij where x โ V_i, y โ V_j and ij โ E(R). We call T^_F the -thick graph of F.
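Computationally, the thick graph is just a per-edge colour count; in the sketch below (ours; the threshold constant is written explicitly as rho since it plays the role of the parameter in the superscript of T_F) we extract it for a single pair (V_i,V_j).

```python
def thick_graph(collection, Vi, Vj, rho):
    """Edges xy with x in Vi and y in Vj that lie in G_c for at least rho*|C| of the
    colours c, where `collection` maps each colour c to the edge set of G_c."""
    threshold = rho * len(collection)
    counts = {}
    for edges in collection.values():
        for e in edges:
            x, y = tuple(e)
            if (x in Vi and y in Vj) or (x in Vj and y in Vi):
                counts[e] = counts.get(e, 0) + 1
    return {e for e, mult in counts.items() if mult >= threshold}

# toy example: the pair {0, 5} appears in 3 of the 4 colours, the pair {1, 6} in only 1
coll = {c: set() for c in 'abcd'}
for c in 'abc':
    coll[c].add(frozenset({0, 5}))
coll['d'].add(frozenset({1, 6}))
print(thick_graph(coll, {0, 1}, {5, 6}, rho=0.5))  # {frozenset({0, 5})}
```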
The following proposition states that the thick graph of a bipartite semi-super template has a spanning half-superregular subgraph. This is very useful for embedding as one can use the usual blow-up lemma to embed a bounded degree graph into the thick graph, and then greedily assign colours.
Let 0<1/m โชโชโช d,,1/r โค 1
where r โฅ 2 is an integer.
Let R be a graph on r vertices
and let F=(V,,G) be a semi-super R-template with parameters (m,,d,). Then for all ij โ E(R), T^_F[V_i,V_j] is (,d/2)-half-superregular.
Let ij โ E(R) and let U_iโ V_i and U_jโ V_j be any two subsets such that |U_i|โฅ |V_i| and |U_j|โฅ |V_j|. Let โ_ij := e(T^_F[U_i,U_j]). By regularity we have |d_G^(3)(U_i,U_j,_ij)-d_G^(3)(V_i,V_j,_ij)|โค and thus we get
(d-)|U_i||U_j||_ij|โค e_G^(3)(U_i,U_j,_ij) โคโ_ij|_ij|+(|U_i||U_j|-โ_ij)|_ij|.
Therefore, we have โ_ij/(|U_i||U_j|)โฅ (d--)/(1-)โฅ d/2.
Similarly, for any xโ V_i, let โ_x := d_T^_F(x,V_j). By semi-superregularity, we have โ_cโ_ijd_G_c(x)โฅ d|V_j||_ij|. Then we get d|V_j||_ij|โคโ_x|_ij|+(|V_j|-โ_x)|_ij| and thus โ_x/|V_j|โฅ (d-)/(1-)โฅ d/2. The half-superregularity then follows.
ยง EMBEDDING LEMMAS
In this section we state a series of embedding lemmas which we will combine to prove our transversal blow-up lemma, stated at the end of the section. These lemmas are also useful in their own right in applications.
Each lemma is of the following type: we are given an R-template and a bounded degree graph H whose partition matches the template. For a small number of vertices y in H we are given large target setsT_y where y must be embedded.
The output of the lemma is a transversal embedding of H, which consists of an embedding ฯ of vertices and an embedding of colours, so ฯ(x)ฯ(y) โ G_(xy) for all xy โ E(H), and such that ฯ(y) โ T_y whenever there is a target set.
In Lemmaย <ref>, H is a small graph and we in fact embed only part of H, while finding large candidate sets for the remainder (and a chosen colour for the future edge ฯ(x)ฯ(y) where x was embedded and y will later be embedded inside its candidate set).
In Lemma <ref>, H is again small, but now there are sets of prescribed colours which must be used in the embedding. This is possible since we assume that there are many more edges of H than the number of prescribed colours, and that the graph of each prescribed colour is dense.
Theoremย <ref> is the usual blow-up lemma, which one can think of as applying to a template where all coloured graphs are identical. Now the template is super, but H is allowed to be spanning.
In Lemmaย <ref>, we assume the template is semi-super, and H may again be spanning, but consists of many small (but still linear) connected components, and there are many more colours than needed for a rainbow embedding.
Theoremย <ref> is our transversal blow-up lemma, in which the template is super, H is a spanning graph with a separable graph homomorphism into the template, and the number of colours is exactly as required.
The first such lemma applies to embed a small graph H into a template. In fact we only embed some of H while finding large candidate sets for the rest of H. This means that we set aside some colours so that, for each unembedded vertex y
and each of its embedded neighbours x, there is a distinct colour (xy), and a large vertex set so that if the image of y is chosen in this set, then we can extend our embedding to a transversal embedding that uses the specified colours.
That is, for every z in this set we have ฯ(x)z โ G_(xy).
The proof embeds vertices one by one, at each step fixing the colours that will be used to future neighbours, and is similar to the `partial embedding lemma', an uncoloured version, inย <cit.>.
Let 0 < 1/m โช,โชฮฝ' โชฮฝ,d,,1/,1/r โค 1
where r โฅ 2 is an integer.
* Let R be a graph on vertex set [r] such that
F = (V,,G) is an R-template with parameters (m,,d,).
* Let H be a graph with
(H) โค for which there is a graph homomorphism
ฯ:V(H) โ V(R) such that |ฯ^-1(j)| โค m for all j โ [r],
and suppose there is a partition V(H) = X ∪ Y where E(H[Y]) = ∅.
* For every w โ V(H), suppose
there is a set T_w โ V_ฯ(w) with |T_w| โฅฮฝ m.
Then there are injective maps ฯ: X โ V and :E(H) โ such that
* ฯ(x) โ T_x for all x โ X;
* (xy) โ_ฯ(x)ฯ(y) for all xy โ E(H), and if xx' โ E(H[X]), then ฯ(x)ฯ(x') โ E(G_(xx'));
* for all y โ Y there exists C_y โ V_ฯ(y)ฯ(X)
such that C_y โโ_x โ N_H(y) โฉ XN_G_(xy)(ฯ(x)) โฉ T_y and |C_y| โฅฮฝ' m.
Fix an ordering of V(H) where all vertices of X come before any vertex of Y, and for each x โ X, let N^<(x) be the set of neighbours of x in H which appear before x in the ordering, and N^>(x) the set of those which appear after.
At step (x) (with x โ X), we will choose ฯ(x)
and (xy) for all y โ N^>(x).
Initially, define candidate sets C_w:=T_w for all w โ V(H) and C_xy:=_ฯ(x)ฯ(y) for all xy โ E(H).
The following comprises step (x), which we perform for each x โ X in order.
* For all y โ N^>(x), delete all vertices v โ C_x with โ_c โ C_xy|N_G_c(v) โฉ C_y| < (d-)|C_xy||C_y|.
* Choose ฯ(x) โ C_x.
* For all u โ V(H), delete ฯ(x) from C_u.
* For each y โ N^>(x) in order:
* delete all colours c โ C_xy with |N_G_c(ฯ(x)) โฉ C_y| < d|C_y|/2,
* choose (xy) โ C_xy,
* for all uv โ E(H), delete (xy) from C_uv,
* delete all vertices v โ C_y with v โ N_G_(xy)(ฯ(x)).
We claim that, at the end of the process, |C_x|,|C_xy| โฅฮฝ'm for all x โ V(H) and xy โ E(H).
First, let us see why the claim implies the lemma.
Since candidate sets are never empty and every edge in H is incident to X, ฯ: X โ V and :E(H) โ are defined, and by stepย (<ref>) andย (<ref>), they are both injections.
Forย (i), we choose ฯ(x) from C_x which is always a (non-empty) subset of T_x.
Forย (ii), we choose (xy) from C_xy which is always a (non-empty) subset of _ฯ(x)ฯ(y).
If xx' โ E(H) where x' appears after x in the ordering, then we choose ฯ(x') from C_x' and by stepย (x,x',<ref>) we have ฯ(x') โ N_G_(xx')(ฯ(x)).
Forย (iii), given the claim and the fact that Y comes after X in the ordering, it suffices to show that, for all y โ Y and x โ N^<(y), we have C_y โ N_G_(xy)(ฯ(x)).
Noting that x โ X, this is a consequence of stepย (x,y,<ref>).
It remains to prove the claim.
We suppose that it is true up until step (x).
The vertex candidate set C_x can only shrink at stepsย (x,<ref>), (<ref>) and at step (u,x,<ref>) for u โ N^<(x).
Propositionย <ref> and the fact that, currently, |C_x| โฅฮฝ'm > |V_ฯ(x)|, imply that at stepย (x,<ref>), at most |C_x| vertices are deleted.
At stepย (<ref>), at most |X| vertices are deleted.
At stepย (u,x,<ref>),
the colour (ux) was chosen from C_ux which has |N_G_(ux)(ฯ(u)) โฉ C_x| โฅ d|C_x|/2 due to stepย (u,x,<ref>). Thus C_x shrinks by a factor of at most d/2 at this step. Therefore
|C_x| โฅ ((d/2)^-)|T_x|-|X| โฅ ((d/2)^-)ฮฝ - r)m โฅฮฝ(d/3)^ m > ฮฝ'm.
The colour candidate set C_xy, with x before y in the ordering, can only shrink at stepsย (x,y,<ref>) and (<ref>).
At most e(H) colours are lost at stepย (<ref>).
At stepย (x,y,<ref>), we chose ฯ(x) from C_x which, immediately afterย (x,<ref>), satisfies โ_c โ C_xy|N_G_c(ฯ(x)) โฉ C_y| โฅ (d-)|C_xy||C_y|.
Writing A โ C_xy for the subset of colours which are not deleted at stepย (x,y,<ref>), we have (d-)|C_xy||C_y|โค |A||C_y|+(d/2)(|C_xy|-|A|)|C_y| and thus |A| โฅ (d/2-)|C_xy|.
Therefore
|C_xy| โฅ (d/2-)|_ฯ(x)ฯ(y)|-e(H) โฅ (d/2-) m- r m โฅ d m/3 โฅฮฝ'm.
This completes the proof of the claim, and hence of the lemma.
The next ingredient is a version of the above lemma where we are embedding a small graph H such that a small fraction of its vertices have target sets, but additionally we now have a very small set of colours which must be used in the embedding (together with any other colours).
These prescribed colours will be used on an induced matching M in H. This matching, together with its neighbours, will be embedded greedily and then Lemmaย <ref> will apply to extend the embedding to the whole of H.
Let 0 < 1/m โชโช,โช_1,_2 โชฮฝ,d,,1/,1/r โค 1
where r โฅ 2 is an integer.
* Let R be a graph on vertex set [r] and let (V,,G) be an R-template with parameters (m,,d,).
* Let H be a graph with (H) โค for which there is a graph homomorphism
ฯ:V(H) โ V(R) such that |ฯ^-1(j)| โค_1 m for all j โ [r]
and e(H[ฯ^-1(i),ฯ^-1(j)]) โฅ_2 m for all ij โ E(R).
* Suppose there is a set W โ V(H) with |W| โค m, such that for all w โ W
there is a set T_w โ V_ฯ(w) with |T_w| โฅฮฝ m.
* For each e โ E(R), let D_e โ_e be a set of at most m colours
and let D=โ_e โ E(R)D_e, and suppose that e(G_c[V_i,V_j]) โฅ d|V_i||V_j| for all
c โD_ij and ij โ E(R), and |_e D| โฅ d|_e| for all e โ E(R).
Then there are injective maps ฯ: V(H) โ V and :E(H) โ such that
* ฯ(x) โ V_ฯ(x) for all x โ V(H);
* ฯ(w) โ T_w for all w โ W;
* for all xx' โ E(H) we have ฯ(x)ฯ(x') โ E(G_(xx'));
* Dโ(E(H)).
Let e_1,โฆ,e_s be an arbitrary ordering of E(R).
Let D_e_i' := D_e_i (D_e_1โชโฆโชD_e_i-1) for all i โ [s], so that the D_e' over e โ E(R) are pairwise disjoint, and their union equals D.
We claim that for each e_i=jj' โ E(R), we can choose a matching M_jj'โ H[ฯ^-1(j),ฯ^-1(j')] such that, writing M := โ_e โ E(R)M_e, we have
* V(M) ∩ W = ∅ and M is an induced matching in H such that for every y ∈ V(H) ∖ V(M), there is xx' ∈ E(M) such that N_H(y,V(M)) ⊆ {x,x'};
* |M_e|=|D_e'| for each e ∈ E(R).
We can do this greedily, as follows.
Suppose we have found M_e_1,โฆ,M_e_i-1 with the required properties, for some i โฅ 1.
We show how to find M_e_i, where we write e_i:=jj'.
Vizing's theorem states that every graph J with maximum degree Δ can be properly edge-coloured with at most Δ+1 colours, so J contains a matching of size at least ⌈e(J)/(Δ+1)⌉.
Thus H[V_j,V_j'] contains a matching of size โ_2 m/(+1)โ.
Now, from this matching, delete W and any vertex at distance at most two to any vertex in a previously found matching.
There are at most r-1 neighbours h of j in R for which we have defined M_jh, and for each one we delete at most (1++^2)v(M_jh) โค 4^2 m such vertices.
The total number of deleted vertices is at most |W|+8r^2 m, which leaves a matching M' of size at least _2 m/(4).
Now we will greedily choose a suitable submatching.
Suppose we have chosen a subset E(M^q) := {x_1y_1,โฆ,x_q-1y_q-1}โ E(M') where 1 โค q โค |D_jj''| and the distance in H M^q between any pair of vertices in V(M^q) is at least three.
To choose x_qy_q, we delete all vertices in V(M') at distance in H M^q at most two. The number of deleted vertices is at most (1++^2)2q โค 4^2 m < _2 m/(4).
Thus we can always choose x_qy_q from the remaining vertices of M'.
We claim that M^q+1 := M^qโช{x_qy_q} is a suitable extension of M^q.
Indeed, x_q,y_q have no neighbours in M^q so M^q+1 is induced, and if x_q shared a neighbour z with some other y โ M^q, then it would be at distance two from y, a contradiction.
Thus we can obtain M_e_i and hence M_e_1,โฆ,M_e_s satisfying both required properties.
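This greedy selection is easy to phrase algorithmically. The sketch below (ours; graphs are adjacency dictionaries and all names are illustrative) carries out the two steps on a toy example: first take any large matching, then thin it so that the kept edges have endpoints pairwise at distance at least three, which in particular makes the result an induced matching.

```python
def greedy_matching(adj):
    """Any maximal matching of the graph given as {v: set of neighbours}."""
    used, M = set(), []
    for u in adj:
        if u in used:
            continue
        for v in adj[u]:
            if v not in used:
                M.append((u, v)); used.update((u, v)); break
    return M

def within_two(adj, u, banned):
    """True if u is at distance at most 2 from some vertex in `banned`."""
    if u in banned or banned & adj[u]:
        return True
    return any(banned & adj[w] for w in adj[u])

def spread_out_submatching(adj, M, size):
    """Greedily keep edges of M whose endpoints are at distance >= 3 from all
    previously kept endpoints, stopping once `size` edges have been collected."""
    kept, chosen = set(), []
    for (u, v) in M:
        if len(chosen) == size:
            break
        if not within_two(adj, u, kept) and not within_two(adj, v, kept):
            chosen.append((u, v)); kept.update((u, v))
    return chosen

# toy example: a path 0-1-2-...-9; the edges (0,1) and (4,5) are far enough apart
adj = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 9} for i in range(10)}
M = greedy_matching(adj)
print(spread_out_submatching(adj, M, size=2))  # [(0, 1), (4, 5)]
```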
Let X := V(M) and Y := N_H(X) X.
We will embed M into the template first, defining injections ฯ : X โ V and : E(M) โช E(H[X,Y]) โD so that (M_e)=D_e' for all e โ E(R).
We want to choose good images for the vertices x โ X so that there are many choices for each neighbour y โ N_H(x) X.
Now we will embed M greedily,
using the fact that e(G_c) โฅ d|V_j||V_j'| for all c โD_jj' and โช d.
For each pโค e(M), let
i โค s be such that e(M_e_1)+โฆ e(M_e_i-1) โค p < e(M_e_1)+โฆ + e(M_e_i).
Let M_p := M_e_1โชโฆโช M_e_i-1โช M'_e_i where e(M_p)=p and
we have fixed an ordering of E(M_e_i) and M'_e_i is the (possibly empty) matching which consists of an initial segment.
Let X_p := V(M_p) and Y_p := N_H(X_p) X_p.
Suppose we have found injections ฯ : X_p โ V and : E(M_p) โช E(H[X_p,Y_p]) โC such that
* ฯ(x) โ V_ฯ(x) for all x โ X_p;
* ฯ(x)ฯ(y) โ G_(xy) for all xy โ E(M_p);
* (E(M_e_h)) = D_e_h' for all h < i and (E(M_e_i')) โD_e_i';
* for all y โ Y_p there exists C_y โ V_ฯ(y)
such that C_y โโ_x โ N_H(y) โฉ X_pN_G_(xy)(ฯ(x)), and |C_y| โฅ d|V_ฯ(y)|/6.
We want to extend ฯ and by embedding an edge xx' in M_e_i M_e_i' with a colour c^* โD_e_i'
and choosing colours and candidate sets for its unembedded neighbours. So fix any such c^* and let (xx') := c^*.
Let N(x),N(x') be the set of neighbours of x,x' respectively in H M.
The first property of M implies that N(x),N(y),V(M) are pairwise disjoint for all y โ V(M) {x,x'}, and similarly for x'.
However, we could have N(x) ∩ N(x') ≠ ∅ (for example if H is a union of vertex-disjoint triangles).
First, we will define ฯ(x) and colours and candidate sets for every vertex in N(x).
Write e_i=jj' and U_h := V_h X_p for h=j,j'.
We have e(G_c^*[U_j,U_j']) โฅ d|V_j||V_j'| - 2 m(|V_j|+|V_j'|) โฅ d|V_j||V_j'|/2.
Let
Z(x) := {v โ U_j: d_G_c^*(v,U_j') โฅ d|U_j'|/4}.
A simple counting argument implies that |Z(x)| โฅ d|U_j|/4.
We will choose ฯ(x) โ Z(x) and candidate sets C_y for each y โ N(x), as follows.
Order N(x) as y_1,โฆ,y_โ, where โโค, and let a_1,โฆ,a_โ be such that ฯ(y_i)=a_i, so a_i โ [r].
Suppose we have found, for i โฅ 0,
* c_i โ_ja_i
such that c^*,c_1,โฆ,c_i are all distinct and disjoint from (E(M_p))
and
* a set Z_i(x) โ Z(x) with |Z_i(x)| โฅ (d/6)^i|Z(x)| and d_G_c_h(z,V_a_h) โฅ d|V_a_h|/6 for all h โ [i] and z โ Z_i(x).
Now, G^ja_i+1[Z_i(x),V_a_i+1] is (โ(),d/2)-regular by Lemmaย <ref>(i) (the slicing lemma),
so Lemmaย <ref>(ii) implies there are at least (1-โ())|_ja_i+1| colours c โ_ja_i+1 with e(G_c[Z_i(x),V_a_i+1]) โฅ (d/2-โ())|Z_i(x)||V_a_i+1| โฅ d|Z_i(x)||V_a_i+1|/3. Let c_i+1 be any such colour which does not lie in the set {c^*,c_1,โฆ,c_i}โช(E(M_p)), which exists since
the number of available colours is at least |_ja_i+1D| - 1 - โฅ d|_ja_i+1|/2 โฅ d m/2.
By a simple counting argument, the number of vertices z โ Z_i(x) with d_G_c_i+1(z,V_a_i+1) โฅ d|V_a_i+1|/6 is at least d|Z_i(x)|/6, and we let the set of these vertices be Z_i+1(x).
Thus we can complete the iteration and find Z_โ(x).
Now let ฯ(x) be an arbitrary vertex of Z_โ(x),
and for each i โ [โ], define the candidate set C_y_i := N_G_c_i(ฯ(x), V_a_i) and (xy_i):=c_i.
Next, we will define ฯ(x') and colours and candidate sets for every vertex in N(x').
For each y โ N(x) โฉ N(x'), the candidate set for y will be a subset of C_y.
For this, let Z(x') := N_G_c^*(ฯ(x),U_j'), so |Z(x')| โฅ d|U_j'|/4 since ฯ(x) โ Z(x).
Write N(x')={w_1,โฆ,w_k} and let b_1,โฆ,b_k be such that ฯ(w_i)=b_i.
For each w_i โ N(x), we already defined C_w_i;
for the other w_i, let C_w_i := V_b_i.
Using the fact that G^j'b_i[Z(x'),C_w_i] is (โ(),d/2)-regular, exactly the same argument for N(x') as for N(x) implies that we can find
* for each i โ [k], c_i' โ_j'b_i which are distinct and disjoint from {c^*,c_1,โฆ,c_โ}โช(E(M_p)) and
* Z_k(x') โ Z(x') with |Z_k(x')| โฅ (d/6)^k|Z(x')| and d_G_c_i'(z,C_w_i) โฅ d|C_w_i|/6 for all i โ [k] and z โ Z_k(x').
We let ฯ(x') be an arbitrary vertex of Z_k(x')
and for each i โ [k], let T_w_i := N_G_c_i'(ฯ(x'),C_w_i) and (x'w_i):=c_i'.
Note that ฯ(x') โ Z(x') โ N_G_c^*(ฯ(x)) so ฯ(x)ฯ(x') โ G_c^* = G_(xx').
For y_i โ N(x) N(x'), C_y_i has not changed since it was first defined, and we let T_y_i := C_y_i.
We have |T_y| โฅ d^2|V_ฯ(y)|/36 โฅ d^2m/36 for all y โ N(x) โช N(x').
Thus we can complete the iteration and embed the whole of M.
This completes the required extension of ฯ and , so we have obtained
ฯ: X โ V and : E(M) โช E(H[X,Y]) โ, with (E(M))=D.
Finally, we extend the embedding to the whole of H by applying Lemmaย <ref>.
Indeed, let F' be the subtemplate of F induced by V',',
where V' = {V_1',โฆ,V_r'} and V_i' := V_i ฯ(X), and '_e := _e ((E(M) โช E(H[X,Y])). Lemmaย <ref>(ii) implies that F' is a template with parameters
(m/2,2,d/2,/2). For each y โ Y,
we have |T_y โฉ V_ฯ(y)'| โฅ d^2|V_ฯ(y)|/36-|ฯ^-1(ฯ(y))| โฅ d^2m/36-_1 m โฅ d^2 m/37.
Similarly for each w โ W, we have |T_w โฉ V_ฯ(w)'| โฅ (ฮฝ-_1)m โฅฮฝ m/2.
For all vertices y of H for which T_y is not defined, let T_y := V_ฯ(y).
Thus we can apply Lemmaย <ref> with parameters m/2,2,2_1,min{ฮฝ,d^2/37},d/2,/2 playing the roles of m,,,ฮฝ,d,, target sets T_y and Y=โ
(so no candidate sets),
to find the desired embedding.
Most of the required properties are immediate but we justify whyย (iii) holds. If xx' โ E(M) then we already showed that ฯ(x)ฯ(x') โ G_(xx').
If xy โ E(H) E(M) where x โ X, we have y โ Y, and then the choice of M guaranteed that N_H(y,X) โ{x,x'} where xx' โ E(M) and, if this neighbourhood equals {x}, we chose ฯ(y) โ T_y โ N_G_(xy)(ฯ(x)),
while if it equals {x,x'} we chose ฯ(y) โ T_y and T_y โ N_G_(xy)(ฯ(x)) โฉ N_G_(x'y)(ฯ(x')).
If xy โ E(H)- X, then this follows from Lemmaย <ref>.
Next we state the usual blow-up lemma, which we will use in the proof of our transversal blow-up lemma.
Note that the lemma is usually stated in terms of the stronger `superregular' condition rather than `half-superregular', but the same proof applies.
Alternatively, one can apply Lemmaย <ref>.
[Blow-up lemmaย <cit.>]
Let 0 < 1/m โช,โชฮฝ,d,,1/,1/r โค 1.
* Let R be a graph on vertex set [r].
* Let G be a graph with vertex classes V_1,โฆ, V_r where mโค |V_i| โค m/ for all i โ [r],
such that G[V_i,V_j] is (, d)-half-superregular whenever ij โ E(R).
* Let H be a graph with (H) โค for which there is a graph
homomorphism ฯ: V(H) โ V(R)
such that |ฯ^-1(j)| โค |V_j| for all j โ [r].
* For each i โ [r], let U_i โฯ^-1(i) with |U_i| โค m
and suppose there is a set T_x โ V_i with |T_x| โฅฮฝ m for all x โ U_i.
Then there is an embedding of H inside G such that for every i โ [r], every
x โ U_i is embedded inside T_x.
The final embedding lemma that we prove in this section applies to embed a spanning graph H which consists of small components and in which a small fraction of the vertices have target sets.
The template F is required to be semi-super (every vertex has large total degree) and rainbow, since H could have many edges in every pair, and additionally F must contain many more colours than required for a transversal embedding.
Let 0 < 1/m โช,ฮผ,โชโชฮฝ,d,, 1/,1/r โค 1/2 where r is an integer.
* Let R be an r-vertex graph and let F=(V,,G) be a semi-super rainbow R-template with parameters (m,,d,).
* Let n:=|V| and
suppose that H is an n-vertex graph with (H) โค which is the union of vertex-disjoint components of size at most ฮผ n, and there is a graph homomorphism ฯ of H into R such that |ฯ^-1(j)| = |V_j| for all j โ [r] and e(H[ฯ^-1(i),ฯ^-1(j)])โค |_ij|- m for all ij โ E(R).
* For each i โ [r], let U_i โฯ^-1(i) with |U_i| โค m and suppose there is a set T_x โ V_i with |T_x| โฅฮฝ m for each x โ U_i.
Then there is a transversal embedding of H inside F such that for every i โ [r], every x โ U_i is embedded inside T_x.
Choose new parameters ฮผ',,, satisfying ,ฮผ,โชฮผ' โชโชโชโช.
Note that n and m are similar in size, since
rm โค n โค rm/,
so in particular n/r โค m โค |V_j| โค m/โค n/(r) for each j.
We have that H is the vertex-disjoint union of connected components H_1,โฆ,H_t each of size at most ฮผ n.
We will partition each ฯ^-1(j) into s parts of size roughly m and one part of size roughly ฮผ' m that respect components.
For each h โ [t] and
j โ [r], let A_hj := V(H_h) โฉฯ^-1(j) and a_hj:=|A_hj|.
Let t^* be the largest integer such that b_0j:=|B_0j| โฅฮผ'm for all j โ [r], where
B_0j := A_t^*+1,jโชโฆโช A_tj.
Obtain s โN and B_ijโฯ^-1(j) for i โ [s], j โ [r] iteratively as follows.
Let โ_0:=0 and a_0j:=0 for all j โ [r] and do the following for i โฅ 1.
If
|ฯ^-1(j)|-(a_1j+โฆ + a_โ_i-1j)>q:=2(+1)^r-1 m
for all j โ [r], let B_ij:= A_โ_i-1+1,jโชโฆโช A_โ_ij where โ_i is the smallest integer so that b_ij := |B_ij| is at least m for all j โ [r].
Otherwise, set s:= i, let B_sj:=A_โ_s-1+1,jโชโฆโช A_t^*j, and let b_sj:=|B_sj|.
This process always terminates since we remove at least rฮณ m vertices each time, so defines a partition V_j = B_0jโช B_1jโชโฆโช B_sj for each j โ [r]. We have
b_0j + b_1j+โฆ+b_sj=|ฯ^-1(j)| = |V_j|.
We claim that
ฮผ' m โค b_0j โค 2(+1)^r-1ฮผ' m
for all j โ [r],
m โค b_ij โค 2(+1)^2r-2 m for all i โ [s],j โ [r] and
s โค 1/().
To prove the claim, we first note the following.
Since every H_h is a connected graph with Δ(H_h) ≤ Δ,
|A_hj| ≤ |V(H_h)| ≤ (Δ+1)^{r-1}|A_hj'| for every j,j' ∈ [r] and h ∈ [t].
(The second inequality can be proved by induction on the number of parts of H_h.)
By construction, for every j โ [r] we have
b_0jโฅฮผ'm.
There is at least one j' ∈ [r] for which |B_0j'| ≤ μ'm+μn ≤ 2μ'm, since otherwise we would have chosen a larger t^*, and so
the required upper bound on every b_0j follows fromย (<ref>).
For the second part, suppose first that i โ [s-1].
Then every b_ijโฅ m by construction.
Also there is some j' โ [r] for which b_ij'โค m + ฮผ n โค 3 m/2,
otherwise โ_i would be smaller.
Thus b_ijโค 3(+1)^r-1 m/2 for all j โ [r] byย (<ref>).
Now consider s=i.
There is at least one j^* โ [r] which caused the partitioning to stop
during the s-th step due to
q โฅ |ฯ^-1(j^*)|-(a_1j^*+โฆ+a_โ_s-1j^*) = |ฯ^-1(j^*)|-(b_1j^*+โฆ+b_(s-1)j^*)=b_0j^*+b_sj^*.
Since the partitioning did not stop during step s-1, we similarly have
b_0j^*+b_(s-1)j^*+b_sj^*>q.
Thus b_sj^*โฅ q-2(+1)^r-1ฮผ' m -3(+1)^r-1 m/2 โฅ m.
Altogether, m โค b_sj^*โค q.
For any j โ [r] which did not cause the partitioning to stop, we have
b_0j+b_sj > q and hence b_sj >q-2(+1)^r-1ฮผ' m โฅ m.
Moreover,ย (<ref>) implies that
b_0j+b_sj=a_โ_s-1+1,j+โฆ +a_tjโค (+1)^r-1(a_โ_s-1+1,j^*+โฆ +a_tj^*) = (+1)^r-1(b_0j^*+b_sj^*) โค (+1)^r-1q.
Thus m โค b_sjโค (+1)^r-1q.
Thus for every j โ [r] we have m โค b_sjโค (+1)^r-1q, and hence
m โค b_ijโค (+1)^r-1q for all i โ [s] and j โ [r].
For the third part, the lower bound implies that s m โค |V_j| โค m/ and so s โค 1/(),
completing the proof of the claim thatย (<ref>) holds.
Another related estimate that will be useful later is
b_0j-sr^1/3 m > ฮผ' m-(r|V_j|/ m^1/3 m) > ฮผ' m - r^1/3m/โฅฮผ' m/2.
Let B^i := โ_j โ [r]B_ij.
We have
e(H[B^i]) โค|B^i| โค 2(+1)^2r-1r m for all i โ [s] and
e(H[B^0]) โค|B^0| โค 2(+1)^r rฮผ' m.
In the next part of the proof, we will embed H[B^1],โฆ,H[B^s] in turn.
For this, we first partition each V_j into sets for each stage of the embedding, and find a buffer set of colours which will be used in the final part of the embedding.
Given an edge xy of G, write c(xy) := {c โ: xy โ G_c}.
For each jj' โ E(R) and x โ V_j, define
M_jj'(x) := {y โ V_j': |c(xy)| โฅ d|_jj'|/2}.
We claim that each such set satisfies
|M_jj'(x)| โฅ d |V_j'|/2.
Indeed,
semi-superregularity implies that
d|V_j'||_jj'| โคโ_c โ_jj'd_G_c(x,V_j') = โ_y โ V_j'|c(xy)| โค |M_jj'(x)||_jj'| + |V_j'|d|_jj'|/2,
as required.
For each j โ [r], let V^0_j โช V^1_j โชโฆโช V^s_j be a random
partition of V_j into parts
of size b_0j-sr^1/3 m, b_1j+r^1/3 m,โฆ,b_sj+r^1/3 m respectively.
For every i โ [s], j โ [r] and x โ B_ij, we will embed x into V^i_j, which has size slightly larger than necessary.
For each e โ E(R), let _e^0 be a uniform random subset of _e of size |_e|
and let _e := _e _e^0.
By a Chernoff bound, usingย (<ref>) andย (<ref>) to see that all parts are sufficiently large,
we may assume that the following hold:
* for all jj' โ E(R), x โ V_j and y โ M_jj'(x), we have |c(xy) โฉ_jj'^0| โฅ d|^0_jj'|/4;
* for all jj' โ E(R) and x โ V_j we have |M_jj'(x) โฉ V_j'^0| โฅ d|V^0_j'|/4;
* for all 0 โค i โค s and x โ V(B^i) we have |T_x'| โฅฮฝ |V^i_ฯ(x)|/2, where T_x' := T_x โฉ V_ฯ(x)^i.
Indeed, each of these lower bounds is at most half the expectation of the quantity in question.
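For concreteness, one standard form of the bound being invoked is the multiplicative Chernoff bound P(X โค E X/2) โค exp(-E X/8), which, by a classical observation of Hoeffding, also applies to the hypergeometric random variables arising from uniformly random subsets of a fixed size, such as |c(xy) โฉ_jj'^0|, |M_jj'(x) โฉ V_j'^0| and |T_x โฉ V_ฯ(x)^i| above. Each of the relevant expectations is at least a constant multiple of m, so a union bound over the polynomially many (in m) events in question succeeds.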
We iteratively construct the transversal embedding of H[B^1 โชโฆโช B^s].
Suppose that we have obtained an embedding g_i-1 for some i โฅ 1,
such that
* g_i-1 is a transversal embedding of H[W_i-1], where W_i-1 := B^1 โชโฆโช B^i-1;
* g_i-1(x) โ V^i'_j for every x โ B_i'jโ W_i-1;
* for every jj' โ E(R) and every embedded pair x,y of vertices where xy โ E(H[V_j,V_j']), the colour of xy is in _jj' and these colours are distinct over the embedding; and
* if x โ U_j โฉ W_i-1, then g_i-1(x) โ T_x for every j โ [r].
That is, g_i-1 consists of injections ฯ:W_i-1โ V and : E(H[W_i-1]) โ
such that ฯ(x)ฯ(y) โ G_(xy), with the stated properties.
Now we will extend g_i-1 to a transversal embedding g_i of W_i := W_i-1โช B^i with the analogous properties for i.
First observe that the vertices we are about to embed, that is, B^i = W_i W_i-1, have no previously embedded neighbours.
Therefore it suffices to find a transversal embedding of B^i when B_ij is embedded into V^i_j for each j โ [r] which uses unused colours.
Obtain _e^i from _e by deleting any colour we have used in the embedding g_i-1.
Every colour cluster in the original template contains extra colours,
so for every jj' โ E(R) , we have
|^i_jj'| โฅ |_jj'| - โ_1 โคโโค i-1e(H[B_โ j,B_โ j'])- |_jj'|
โฅ (1-)|_jj'| - e(H[B^i]) โฅ m/2.
For each j โ [r] and j' โ N_R(j), let
V^i,j', bad_j := {v โ V^i_j : โ_c โ_jj'^id_G_c(v,V^i_j') < 2d|V^i_j'||_jj'^i|/3}.
Lemmaย <ref>(i) implies that G^i_jj' := (G_c[V^i_j,V^i_j']: c โ_jj'^i) is (โ(),d/2)-regular and thus, by Lemmaย <ref>(i) andย (<ref>), we have that |V^i,j', bad_j| < โ() |V^i_j| < ^1/3m.
For each i โ [s] and j โ [r], obtain a subset Y^i_j of V^i_j by removing V^i,j', bad_j for each j' โ N_R(j) and enough additional arbitrary vertices so that |Y^i_j|=|V^i_j|-r^1/3 m=b_ij.
Let Y^i := {Y^i_1,โฆ,Y^i_r} and let ^i:= โ_e โ E(R)^i_e.
We claim that the subtemplate F_i := (Y^i,^i,G^i) of F induced by Y^i, ^i is a semi-super rainbow R-template with parameters ( m,โ(),d/2,1/(2(+1)^2r-2)).
That vertex clusters have suitable sizes follows fromย (<ref>), and that colour clusters have suitable sizes follows fromย (<ref>).
We have seen that G^i_jj' is (โ(),d/2)-regular.
Moreover, for all jj' โ E(R) and v โ Y^i_j, since v โ V^i,j', bad_j, we have
โ_c โ^i_jj'd_G_c(v,Y^i_j')
โฅ 2d|V^i_j'||^i_jj'|/3-^1/3m|^i_jj'| > d|V^i_j'||^i_jj'|/2,
so G^i_jj' is (โ(),d/2)-semi-superregular.
This completes the proof of the claim.
Therefore, for each jj' โ E(R), we can apply Propositionย <ref> to
see that the -thick graph T^_F_i is such that T^i_jj' := T^_F_i[Y^i_j,Y^i_j'] is (โ(),d/4)-half-superregular for all jj' โ E(R).
Apply Theoremย <ref> (the blow-up lemma) with target sets T_x'
and parameters m,โ(),โ(),ฮฝ/2,d/4 playing the roles of m,,,ฮฝ,d
to embed H[B^i] into T^i_jj'
such that for every j โ [r], every x โ U_j โฉ B^i is mapped to T_x' โ V^i_j, which is possible since, byย <ref>, |T_x'| โฅฮฝ m/2 and |U_j โฉ B^i| โค m.
Since for each edge of T^i_jj', the number of unused colour graphs it lies in is
|^i_jj'| (<ref>)> m/2 > 3(+1)^2r-1r m (<ref>)> e(H[B^i]),
we can greedily assign colours to the embedding.
This completes the construction of g_i which satisfiesย (i)โ(iv).
Thus we can obtain g_s with the required properties.
At the end of this process,
the unembedded part of H is H[B^0],
and the set of vertices of V_j which are not an image of any embedded vertex of H
is precisely Z_j := V^0_j โชโ_i โ [s](V^i_j Y^i_j), which has size b_0j.
We will use the colours in ^0:= โ_e โ E(R)^0_e to embed each B_0j into Z_j.
We claim that the subtemplate F_0 := (Z,^0,G_0) of F induced by Z:=(Z_j: j โ [r]),^0 is a semi-super rainbow template with parameters (ฮผ' m,โ(),d^2/16,1/(2(+1)^r-1)).
Indeed,ย (<ref>) implies that vertex clusters have sizes satisfying ฮผ' m โค b_0j = |Z_j| โค 2(+1)^r-1ฮผ' m.
Colour clusters have size |^0_jj'| = |_jj'| โฅ m > ฮผ' m.
Lemmaย <ref>(i) implies that G^0_jj':=(G_c[V^0_j,V^0_j']: c โ^0_jj') is (โ(),d/2)-regular for every jj' โ E(R).
Additionally, for all x โ Z_j โ V_j,
โ_c โ^0_jj'd_G_c(x,Z_j')
= โ_y โ V^0_j'|c(xy) โฉ^0_jj'| โฅโ_y โ M_jj'(x) โฉ V^0_j'|c(xy) โฉ^0_jj'| <ref>,<ref>โฅ d |V^0_j'|/4 ยท d|^0_jj'|/4
=d^2|V^0_j'||^0_jj'|/16,
so the template is semi-super.
As before, we can apply Propositionย <ref> to see that the -thick graph T^_F_0 is such that T^0_jj' := T^_F^0[Z^0_j,Z^0_j']
is (โ(),d/4)-half-superregular for each jj' โ E(R).
We have m < โ()|Z_j|,
andย <ref> implies that |T'_x| โฅฮฝ|V^0_ฯ(x)|/2 โฅฮฝ|Z_ฯ(x)|/3 for all x โ V(B^0) with a target set.
Thus we can apply Theoremย <ref> (the blow-up lemma)
with target sets T_x' and parameters ฮผ'm,โ(),โ(),ฮฝ/3,d/4
playing the roles of m,,,ฮฝ,d
to embed H[B^0] into T^0_jj'
such that for every jโ [r], every x โ U_j is mapped to T_x' โ V^0_j, which is possible byย <ref>.
The number of colours on each edge of T^0_jj' is at least
|^0_jj'| = |_jj'| โฅ m > 3(+1)^r rฮผ' m (<ref>)>e(H[B^0]).
Thus we can again greedily assign colours of ^0_jj' to obtain a transversal embedding of H[B^0] using colours untouched by the previous embedding, which completes the transversal embedding of H.
Finally we state a transversal blow-up lemma, which we prove in the next section by combining the embedding lemmas of this section.
Its statement is slightly stronger than Theoremย <ref> since the template is super rather than half-super, but the proof of Theoremย <ref> is a simple matter of applying Lemmaย <ref>(iv) (essentially Lemmaย <ref>) first.
The details can be found at the end of the next section.
[Transversal blow-up lemma]
Let 0 < 1/m โช,ฮผ,โชฮฝ,d,,1/,1/r โค 1/2 where r is an integer.
* Let R be a simple graph with vertex set [r] and let F=(V,,G) be a rainbow super R-template with parameters (m,,d,).
* Suppose that H is a ฮผ-separable graph with (H) โค and there is a graph homomorphism ฯ of H into R such that |ฯ^-1(j)| = |V_j| for all j โ [r] and e(H[ฯ^-1(i),ฯ^-1(j)])=|_ij| for all ij โ E(R).
* For each i โ [r], let U_i โฯ^-1(i) with |U_i| โค m and suppose there is a set T_x โ V_i with |T_x| โฅฮฝ m for each x โ U_i.
Then there is a transversal embedding of H inside F such that for every i โ [r], every x โ U_i is embedded inside T_x.
ยง PROOF OF THE TRANSVERSAL BLOW-UP LEMMA
ยง.ยง Sketch of the proof
The proof is a combination of the usual blow-up lemma applied to the thick graph, together with the rainbow partial embedding lemmas in the previous section
and the colour absorbing approach pioneered by Montgomery, Mรผyesser and Pehova inย <cit.>.
We begin by outlining the steps of the proof inย <cit.>, which is a common sequence of steps for embeddings using absorption, and which we shall also use.
The transversal bandwidth theorem inย <cit.> also follows this general outline, as well as using the (usual) blow-up lemma as a key tool.
Let H=H_โช H_โช H_โช H_โช H_ be a suitable partition, which will be carefully chosen.
These parts will be embedded in turn, each time using new colours to extend the transversal embedding.
Step 0. Embed a small `connecting graph'. Embed H_ whose vertex set, as guaranteed by separability, is such that its removal disconnects H into very small components (Lemmaย <ref>).
Step 1. Find a colour absorber. Embed H_ into the edge-coloured graph G of G,
and find disjoint sets A,Bโ of colours such that, given any e(H_)-|A| colours in B,
we can choose the colours of the edges of H_ using exactly those colours and the colours in A.
Step 2. Use most of the colours outside AโชB. Embed H_ using most of the colours outside AโชB (Lemmaย <ref>).
Step 3. Use the remaining colours outside AโชB. Embed H_ using every unused colour in (AโชB) as well as some colours of B (Lemmaย <ref>).
Step 4. Embed the remaining vertices of H using colours in B. Embed H_ using colours of B (Lemmaย <ref>).
Step 5. Use the colour absorber. The colours for H_ will consist of A along with the unused colours in B.
We recall that inย <cit.>, these steps were used to find transversal embeddings of spanning trees, and F-factors. When embedding an F-factor where F is a small graph, one can embed each copy of F in turn, ensuring that they are disjoint. Similarly, to embed a tree, one can embed small subtrees one by one, ensuring that the single vertex of the current tree that was already embedded matches up.
The new difficulty in our setting (and inย <cit.>) is that we would like to embed graphs which are more highly-connected.
Nevertheless, they have the property that a small fraction of edges can be removed to produce a graph which consists of small (but still linear) connected components.
One difficulty is that we do not have a minimum degree condition. Every vertex has large total degree and every colour graph is dense, but there could be a small proportion of vertices which do not have any edges of a given colour.
For Stepย 1, we use
the following lemma fromย <cit.> which is their key tool for colour absorption. Here we present a slightly weaker version with modified parameters for simplicity.
Let 0 < _1 โช_2 โช_3<1 and let โ,m,n be integers with โ=_1 m and 1โค m โค_3^2n/8. Suppose that G=(U,) is a bipartite graph with |U|=m and ||=n such that d(v,)โฅ_3 n for each vโ U. Then there exist disjoint subsets A, Bโ with |A|=m-โ and |B|= _2 n such that, for every subset B^0 of B with size โ, we can find a perfect matching between U and AโชB^0.
ยง.ยง Proof of Theoremย <ref>
Without loss of generality, we may assume that โชฮผโช.
We further define constants _1,p_,p_,p_,_3,ฮฝ' so that altogether
0<1/m โชโชฮผโชโช_1 โช p_โช p_โช p_โช_3 โชฮฝ' โชฮฝ,d,, 1/,1/r โค 1/2.
Let p_:=1-(p_+p_+p_).
Let R be a graph on vertex set [r], F=(V,,G) a rainbow super template with parameters (m,,d,) and H a graph as in the statement.
Let n := |V| be the number of vertices in the template, which equals v(H).
Note that
r m โค n โค rm/, and
m โค |ฯ^-1(i)| = |V_i| โค m/ for all i โ [r],
and m โค |_e| โค m/ for all e โ E(R),
where the final assertion follows from the fact that every |_ij| = e(H[ฯ^-1(i),ฯ^-1(j)]) โค |ฯ^-1(i)| โค m/.
Preparation of H.
First we will choose a partition H=H_โช H_โช H_โช H_โช H_
such that each part H_โ has roughly a given number p_โ n of vertices, and moreover
e(H_โ[ฯ^-1(i),ฯ^-1(j)]) โ p_โ e(H[ฯ^-1(i),ฯ^-1(j)]) for all ij โ E(R) and โโ{,,,}.
Since H is ฮผ-separable, there is a set X of size at most ฮผ n such that H-X
consists of disjoint components H_1,โฆ, H_t, each of size at most ฮผ n.
For all โโ [t] and ij โ E(R), let H^โ_ij := H_โ[ฯ^-1(i),ฯ^-1(j)]
and let n^โ_j := |V(H_โ) โฉฯ^-1(j)| and h^โ_ij := e(H^โ_ij).
Let H_ := H[X].
Note that
|X|=|V(H_)| โคฮผ n โค rฮผ m/โคโ(ฮผ)m.
Independently, for each โโ [t], add H^โ:=โ_ij โ E(R)H^โ_ij to H_โ with probability p_โ for โโ{, , , }.
Then, for all i โ [r] we have ๐ผ(|V(H_โ)โฉฯ^-1(i)|)=p_โ|V_i| ยฑ |X| โฅ (p_โ-โ(ฮผ)) m, and for all ij โ E(R) we have
๐ผ(e(H_โ[ฯ^-1(i),ฯ^-1(j)])) =p_โโ_โโ[t] h^โ_ij = p_โe(H[ฯ^-1(i),ฯ^-1(j)])ยฑ|X| =p_โ|_ij| ยฑโ(ฮผ)m
โฅ (p_โ-โ(ฮผ)) m.
Now, t โฅ (1-ฮผ)/ฮผโฅ 1/(2ฮผ) and e(H^โ_ij) โค n^โ_iโคฮผ n for all โโ [t] and ij โ E(R),
and each of these expectations is at least p_ m/2, so
a Chernoff bound implies that these values are within a multiplicative factor of (1ยฑ1/3) of their expectations
with probability at least
1-4(r+r2)exp(-( p_ m/2)^2/9/1/2ฮผ(ฮผ n)^2)
โฅ 1 - 4r^2exp(-p_^2^4/18rฮผ) โฅ1/2.
Thus we may assume that, for all โโ{,,,},
n^โ_i := |V(H_โ) โฉฯ^-1(i)| =(1ยฑ12)p_โ|V_i| โ
[p_โ m/2,3p_โ m/(2)] for all i โ V(R);
h^โ_ij := e(H_โ[ฯ^-1(i),ฯ^-1(j)]) =
(1ยฑ12)p_โ |_ij| โ
[p_โ m/2, 3 p_โm/(2)] for all ij โ E(R).
We write H^โ_ij := H_โ[ฯ^-1(i),ฯ^-1(j)] for all ij โ E(R).
Every vertex x โ V(H) V(H_) lies in some H_โ where โโ{, , , }, and x can only have neighbours inside H_โ or V(H_).
Note here that for each โโ{,,,}, H_โ is the union of vertex-disjoint components of size at most ฮผ n โค (2ฮผ/p_โ)|V(H_โ)| โคโ(ฮผ)|V(H_โ)|, byย (<ref>).
To find a transversal embedding of H, we need to define two injective maps ฯ: V(H)โ V and : E(H)โ
where ฯ(x)ฯ(y) โ G_(xy) for all xy โ E(H).
For this, we will define ฯ(x) for every vertex in H_, followed by every vertex in H_, H_, H_, H_.
When defining ฯ for vertices in H_โ, we simultaneously define for all future incident edges
except for โ=, for which colours are defined at the end.
Whenever we define ฯ(y) for some y we always choose ฯ(y) โ V_ฯ(y) and if y has a target set T_y, we ensure ฯ(y) โ T_y. We also always choose (xy) โ_ฯ(x)ฯ(y).
Step 0. We first embed H_ into F by applying Lemmaย <ref>, as follows.
Recall that X=V(H_), and define Y := N_H(X) X.
Let H'_ be the graph with vertex set X โช Y and edge set E(H[X โช Y]) E(H[Y]).
Thenย (<ref>) implies that
|V(H'_)| โค (+1)|V(H_)| (<ref>)โค 2โ(ฮผ) m and hence e(H'_) โคฮผ^1/3m.
For each w โ V(H'_) where T_w is not defined, let T_w := V_ฯ(w).
Lemmaย <ref> applied with 2โ(ฮผ) playing the role of (and the other parameters the same) implies that there are injective maps ฯ: X โ V and : E(H'_) โ with
ฯ(x)ฯ(x') โ G_(xx') for all xx' โ E(H_), and
so that ฯ(x) โ T_x for all x โ X, and so that there are candidate sets C_y for each y โ Y where C_y โ V_ฯ(y)ฯ(X) such that C_y โโ_x โ N_H(y) โฉ XN_G_(xy)(ฯ(x)) โฉ T_y and |C_y| โฅฮฝ' m.
We update the target sets, by defining T^1_y := C_y โ T_y for y โ Y where this set is defined, and
T_y^1 := T_y if y โ V(H_') and this set is defined. This updates target sets T_y^1 for all unembedded vertices y, and all target sets have size at least ฮฝ'm.
Note that for all y โ Y, if we choose ฯ(y) โ T_y^1, then ฯ(x)ฯ(y) โ G_(xy) for all x โ N_H(y) โฉ X, so the colour (xy) will be used as desired.
For each i โ [r], let U^1_i be the set of vertices y in ฯ^-1(i) with a target set T_y^1.
We have
|U^1_i| โค |U_i|+|Y| โค m+2โ(ฮผ) m โค 2 m.
This set will only shrink during the rest of the proof as vertices with target sets are embedded.
Some target sets of vertices in U^1_i will also shrink, but will remain large enough.
Let n^_i := |V(H_) โฉฯ^-1(i)| for all i โ [r]
and h^_ij := e(H'_[ฯ^-1(i),ฯ^-1(j)]) for all ij โ E(R).
We have
|V_i| =n^_i+n^_i+n^_i+n^_i+n^_i for all i โ [r] and
|_ij| =e(H[ฯ^-1(i),ฯ^-1(j)])=h^_ij+h^_ij+h^_ij+h^_ij+h^_ij for all ij โ E(R).
Let โฑ' = (V',',G') be the subtemplate of F obtained by deleting the vertices ฯ(X) and the colours (E(H_')) from โฑ, so, defining for all i โ [r] and e โ E(R)
V'_i := V_i ฯ(X), '_e := _e (E(H'_)),
we have (1-ฮผ^1/4)|_e| (<ref>),(<ref>)โค |_e'|(<ref>)=h^_e+h^_e+h^_e+h^_e.
By Lemmaย <ref>(ii), F' is a rainbow super R-template with parameters (m/2,2,d/2,/2).
Step 1.
Next we embed (the vertices but not the colours of) H_ into โฑ'.
For each i โ [r],
let V^_i โ V_i' be a uniform random subset of size n^_i.
Lemmaย <ref>(iii) applied with F',p_/3,6 playing the roles of F,,k implies that the subtemplate F_ of F' induced by (V^_i: i โ [r]) is a rainbow super R-template with parameters (p_ m/3,โ(),d^2/64,/6). Furthermore, a Chernoff bound implies that we may assume
|T^1_y โฉ V_ฯ(y)^| โฅ 2ฮฝ' n_i^/3 โฅฮฝ'|V_i^|/3 for all y โ V(H_) with a target set T_y^1.
Let V_iโ := V_i' V^_i.
We have
|V_iโ| โฅ |V_i|-2โ(ฮผ) m-n^_i (<ref>)โฅ (1-2p_)|V_i| โฅ m/2.
Lemmaย <ref>(ii) applied with F,2p_ playing the roles of F, implies that the subtemplate Fโ induced by V_iโ is a rainbow super R-template with parameters (m/2,2,d/2,/2),
and a Chernoff bound implies that we may assume
|T^1_y โฉ V_ฯ(y)โ| โฅ 2(ฮฝ'm-n^_i)/3 โฅฮฝ' m/2
for all y with a target set T^1_y.
We will embed H_ into the _3-thick graph T^_3_โฑ_. By Propositionย <ref>, for every ij โ E(R), T^_3_โฑ_[V_i^,V_j^] is (โ(),d^2/128)-half-superregular. Apply Theoremย <ref> (the blow-up lemma) with target sets T^1_yโฉ V_ฯ(y)^ and parameters โ(),โ(),โ(ฮฝ'),d^2/128 playing the roles of ,,ฮฝ,d to find an embedding ฯ of H_ into T^_3_F_ such that ฯ(x) โ V_ฯ(x) for all x โ V(H_) and ฯ(y)โ T_y^1 for every y โ U^1_i with i โ [r].
Note that we haven't yet defined (e) for any eโ E(H_). However, by our definition, for every such e=xy we have ฯ(x)ฯ(y)โ E(G_c) for at least _3 |_e'| colours cโ_e'.
For each e โ E(R), let G_, e be the auxiliary bipartite graph with vertex classes Z_e:= {ฯ(x)ฯ(y): xy โ E(H_^e)} and '_e, where {ฯ(x)ฯ(y),c} is an edge whenever ฯ(x)ฯ(y) โ E(G_c).
Thenย (<ref>) implies that |Z_e|=h^_e โค 3p_ |_e|/2 โค 2p_|_e'|,
and by construction, every ฯ(x)ฯ(y) in Z_e has degree at least _3|_e'| in G_,e.
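In code-like terms (a purely illustrative sketch with hypothetical names, not part of the proof), the auxiliary graph simply records, for each image of an already-embedded edge, the set of still-available colours whose colour graph contains that pair:

def colour_candidates(embedded_pairs, available_colours, colour_graphs):
    # embedded_pairs: images phi(x)phi(y) of the edges embedded in Step 1, as frozensets of two vertices;
    # colour_graphs[c]: the edge set of G_c; the colour absorption lemma below is applied
    # to the bipartite structure returned here
    return {pair: {c for c in available_colours if pair in colour_graphs[c]}
            for pair in embedded_pairs}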
For each e โ E(R), define constants โ_e and p_e via
โ_e := _1 h^_e and
p_e|'_e| := (1-p_)|_e'|-(h^_e-โ_e)-h^_e = h^_e+h^_e-p_|_e'|+_1h^_e
(<ref>)= (1ยฑ14)h^_e โ [p_|'_e|/3,3p_|'_e|].
Thus for each e โ E(R) we can apply Lemmaย <ref> with G_,e,Z_e,'_e,_1,p_e,_3
playing the roles of G,U,,_1,_2,_3 to obtain disjoint sets A_e, B_eโ_e' such that
* |A_e|=h^_e -โ_e (<ref>)โค 2p_|_e'|;
* |B_e|=p_e|_e'|;
* for any set B^0_eโB_e of size โ_e, there exists a colouring using colours from A_eโชB^0_e that makes the embedding ฯ of H^_e rainbow. That is, there is a bijection _e: E(H^_e) โA_e โชB_e^0 such that ฯ(x)ฯ(y) โ G__e(xy) for all xy โ E(H^_e).
Preparation for Stepsย 2โ4.
Recall that the template Fโ=(Vโ,',Gโ) has vertex clusters Vโ := (V_iโ := V_i' V^_i: i โ [r]), colour clusters ('_e: e โ E(R)) and graphs (Gโ)^ij = (G_c[V_iโ,V_jโ]: c โ'_ij), and Fโ is a rainbow super R-template with parameters (m/2,2,d/2,/2).
Let n^_i := n^_i+n^_i for each i โ [r] and let p_ := p_+p_.
During the rest of the proof, for each โขโ{,} we will identify pairwise disjoint vertex sets V^โข_i in the vertex clusters V_iโ, i โ [r], so that
|V^โข_i|=n^โข_i, and
V_iโ = V^_i โช V^_i, where V^โข := {V^โข_1,โฆ,V^โข_r} โโขโ{,}.
We will choose ฯ(x) โ V^_ฯ(x)
for all x โ V(H_) and ฯ(x) โ V^_ฯ(x) for all x โ V(H_) โช V(H_).
We will identify pairwise disjoint colour sets ^โ, so that
^โ = โ_e โ E(R)^โ_e โโโ{,,}, _e' = โ_โโ{,,}^โ_e
and for each xy โ E(H_โ), we will ensure (xy) โ G_c for some c โ^โ_ฯ(x)ฯ(y).
Given such sets of vertices and colours for โโ{,,}, we define F_โ := (V^โ',^โ,G^โ) to be the subtemplate of Fโ induced by V^โ',^โ, where '= and '='=.
Note that these templates are always rainbow R-templates with pairwise disjoint sets of colours, but F_ and F_ share the same vertex clusters.
We will now define the vertex sets V^โ'_i for โ' โ{,},
but colour sets will be defined sequentially, given the colours used in each step.
For each i โ [r], do the following.
For each j โ N_R(i),
recall that (G_c[V_iโ,V_jโ]: c โ'_ij) is (2,d/2)-superregular.
Byย (<ref>) and <ref>, we have |B_ij| โฅ p_|'_ij|/2, so by Lemmaย <ref>(i), (G_c[V_iโ,V_jโ]:c โB_ij) is (4/p_,d/4)-regular and hence (โ()/r,d/4)-regular.
By Lemmaย <ref>(i), there is a set V^j_i โ Vโ_i with |V^j_i| โคโ()|V_iโ|/r such that
โ_c โB_ijd_G_c(v) โฅ d|V_jโ||B_ij|/5 for all v โ V_iโV^j_i.
Let V_i := โ_j โ N_R(i)V^j_i, so |V_i| โคโ()|V_iโ|,
and let V^_i be a uniform random subset of V_iโV_i of size n^_i,
and let V^_i := V_iโ V^_i. So |V^_i| =n^_i.
Let T_y^2 := T_y^1 โฉ V^โ'(y)_ฯ(y) for all y โ U^1_i and i โ [r] (i.e.ย those y for which T_y^1 has been defined), where โ'(y)= if y โ V(H_) and โ'(y)= if y โ V(H_) โช V(H_).
For all c โ'_ij, the superregularity of our graph collections implies that
E(e(G_c[V^_i,V^_j])) โฅ(n^_i/|V_iโ|)(n^_j/|V_jโ|) (e(G_c[V_iโ,V_jโ])-โ()|V_iโ||V_jโ|)
โฅ d n^_i n^_j/3(<ref>)โฅ d p_^2^2 m^2/12.
For all i โ [r] and y โ U^1_i we have
E(|T^2_y|) โฅ (n^โ'(y)_i/|Vโ_i|)(|T^1_y โฉ Vโ_i|-โ()|Vโ_i|) (<ref>),(<ref>),(<ref>)โฅ p_โ'(y)ฮฝ'm/5.
Further, for all ij โ E(R) and v โ V^_i, since v โV^j_i we have
E(โ_c โB_ijd_G_c(v,V^_j))โฅ(n^_j/|V_jโ|)โ_c โB_ij(d_G_c(v,V_jโ)-โ()|V_jโ|) (<ref>)โฅ d n^_j|B_ij|/11.
By Chernoff bounds, we may assume that each of the above quantities is close to its expectation, so
* |T_y^2| โฅ p_โ'(y)ฮฝ'm/6 for all y โ U^1_i and i โ [r] (i.e.ย all those y which have a target set);
* e(G_c[V^_i,V^_j]) โฅ dn^_i n^_j/13 for all ij โ E(R) and c โ'_ij;
* โ_c โB_ijd_G_c(v,V^_j) โฅ dn^_j|B_ij|/12 for all ij โ E(R) and v โ V^_i.
Let m' := p_ m/8. Nowย (<ref>) andย <ref> imply that, for all i โ [r],
the number |U^1_i| of vertices y โฯ^-1(i) with a target set T^2_y satisfy
|U^1_i| โค 2 m โคโ()m', and
|T^2_y| โฅ p_ฮฝ'm/6 > ฮฝ'm'.
For all i โ [r] we have
|V^_i|=n^_i=n^_i+n^_i, so
p_|V_iโ|/3 (<ref>)โค p_|V_i|/2 (<ref>)โค |V^_i| (<ref>)โค 2p_|V_i| (<ref>)โค 3p_|V_iโ|.
We claim that the following properties about templates F_โ with colour sets ^โ_e for โโ{,,} hold.
* Let
^_e := C'_e (A_e โชB_e) for all e โ E(R).
Then F_, with vertex clusters (V^_i: i โ [r]), is a rainbow super R-template with parameters (m/4,4,d/4,/4).
* Suppose
^_e = B_e โชD_e where D_e โ^ for all e โ E(R).
Then
F_, with vertex clusters (V^_i: i โ [r]), is a rainbow R-template with parameters (m',โ(),d/4,/24) with the property that e(G_c[V^_i,V^_j]) โฅ d|V^_i||V^_j|/22 for all c โD_e and ij โ E(R).
* Suppose
^_e โB_e and |B_e ^_e| โค 2p_ |_e'| for all e โ E(R).
Then F_, with vertex clusters (V^_i: i โ [r]), is a rainbow super R-template with parameters (m',โ(),d/13,/24).
These properties follow from Lemmaย <ref> applied to F_โ as a subtemplate of Fโ=(Vโ,',Gโ) (which has parameters (m/2,2,d/2,/2)), as follows.
First,ย <ref> holds by Lemmaย <ref>(ii) since F_ is a small perturbation of Fโ. Indeed,ย <ref> andย <ref> imply that
|'_e ^_e| โค (2p_+3p_)|'_e| โค 4p_|'_e| โคโ(p_)m/2,
and
|V_iโ V^_i| = |V^_i| โค 3p_|V_iโ| โคโ(p_)m/2 byย (<ref>),
so we can apply the lemma with 2,โ(p_),d/2 playing the roles of ,,d
to obtainย <ref>.
Forย <ref>,ย <ref> andย (<ref>) imply that
|^_e| โฅ |B_e| = p_e|'_e| > p_|_e'|/4.
Together withย (<ref>), this means that we can
apply Lemmaย <ref>(i) with parameters m/2,2,p_/4,d/2,/2,12 playing the roles of m,,,d,,k to see that
F_ is a rainbow R-template with parameters (m',โ(),d/4,/24).
The second property follows fromย <ref> since D_e โ^_e โ'_e.
For propertyย <ref>, we have |^_e| โฅ |B_e|-2p_|'_e| โฅ p_|_e'|/4 byย (<ref>). Together withย (<ref>), this means we can apply Lemmaย <ref>(i)
with the same parameters as previously to see that
F_ is a rainbow R-template with the given parameters.
For superregularity, we have for all e=ij โ E(R) and v โ V^_i that
โ_c โ^_ijd_G_c(v,V^_j) โฅโ_c โB_ijd_G_c(v,V^_j)-2p_|'_ij||V_jโ| (<ref>),(<ref>),<ref>โฅ dn^_j|B_ij|/12-2p_ m^2/^2
โฅ dn^_j|^ vx_ij|/13,
and e(G_c[V^_i,V^_j]) โฅ dn^_i n^_j/13 for all c โ^_ij byย <ref>.
Thusย <ref>โ<ref> all hold.
Now it is a matter of embedding each H_โ into its corresponding template,
using suitable (unused) colours at each step.
The image of V(H_) in each V_i will be the remaining vertices of V^_i
after embedding H_,
which is most of this set since H_ is much smaller than H_.
Step 2.
We embed H_ into โฑ_ using Lemmaย <ref> (embedding lemma with extra colours) with parameters
m/4,4,โ(ฮผ),โ(),2p_,ฮฝ'/7,d/4,/4 playing the roles of
m,,ฮผ,,,ฮฝ,d,.
For this, we recall that H_ is the union of components of size at most โ(ฮผ)|V(H_)|, and there is a graph homomorphism from H_ into R.
Byย <ref> andย <ref>, it now suffices to check that colour clusters have a suitable size to apply the lemma.
Indeed, for all e โ E(R) we have
|^_e|-h^_e <ref>= |'_e|-|A_e|-|B_e|-h^_e
<ref>,<ref>=|_e'|-(h^_e-โ_e)-p_e|_e'|-h_e^(<ref>)= p_|_e'|
โ [p_ m/2,p_ m/].
Thus we can apply Lemmaย <ref> to obtain ฯ(V(H^)) and (E(H^))
where each y with a target set T^2_y is embedded inside it.
For each e โ E(R), let D_e := ^_e (E(H_)) and let ^_e := B_e โชD_e.
Step 3.
We embed H_ into F_ by applying Lemmaย <ref> (embedding lemma with target sets and prescribed colours) with m',โ(),โ(),โ(p_),โ(p_),p_,ฮฝ',d/4,/24 playing the roles of
m,,,,_1,_2,ฮฝ,d,.
To see that this is possible, we have |V(H_) โฉฯ^-1(i)|=n^_i โค 2p_ m/โคโ(p_)m'.
We also have
e(H_[ฯ^-1(i),ฯ^-1(j)]) = h^_ij(<ref>)โฅ p_ m/2 โฅ p_ m'.
Equation (<ref>) implies that the number of vertices with target sets and the size of these target sets are suitable.
Byย (<ref>), for every e โ E(R) we have
|D_e|=|_e^|-h_e^โค p_ m/โคโ(p_)m'.
Byย <ref>, it remains to check that |^_e D| โฅ d|^_e|/4. Since the original template was rainbow, ^_e D = ^_e D_e = B_e.
Equationย (<ref>) implies that |B_e| โฅ |^_e| - โ(p_)m' โฅ |^_e|/2 โฅ d|^_e|/4, as required.
Thus we can apply Lemmaย <ref> to obtain ฯ(V(H_)) and (E(H_))
where each y with a target set T^2_y (i.e.ย those in U^1_ฯ(y)) is embedded inside it.
Let ^_e := B_e (E(H_))
and V^_i := V^_i ฯ(V(H_)), so V^_i = n^_i.
Step 4.
Let F_' be the subtemplate of F_ induced by (V^_i: i โ [r]).
It is a small perturbation:
|V^_i V^_i| = n^_i โค 2p_m/โค p_ m',
so Lemmaย <ref>(ii) with m',โ(),p_,d/13,/24
playing the roles of m,,,d, implies that F'_ is a rainbow super template with parameters (m'/2,2โ(),d/26,/48).
Let T^3_y := T^2_y โฉ V^_i for all i โ [r] and y โ V^_i โฉ U^1_i.
We embed H_ into F'_ by applying Lemmaย <ref>
with parameters
โ(ฮผ),โ(),_1^2,ฮฝ'/3
playing the roles of ฮผ,,,ฮฝ (and template parameters as above).
For this, we recall that H_ is the union of vertex disjoint components of size at most โ(ฮผ)|V(H_)| and there is a graph homomorphism from H_ into R.
Byย (<ref>), there are suitably few vertices with target sets T^3_y, and all target sets are suitably large.
Byย <ref>, it now suffices to check that every
|^_e| - h^_e is large.
We have
|^_e|-h^_e = |B_e|-h^_e-h^_e <ref>= |_e'|-|A_e|-h^_e-h^_e-h^_e
=h^_e-|A_e|=โ_e=_1 h^_e
โฅ_1p_ m/2 โฅ_1^2 m'/2.
Thus we can apply Lemmaย <ref>
to obtain ฯ(V(H_)) and (E(H_))
where each y with a target set T^2_y is embedded inside it.
Let ^_e := A_e โช (^_e (E(H_))).
Step 5.
Since _e' = ^_e โช^_e โช^_e โช^_e is a disjoint union,
and A_e โ^_e โA_e โชB_e, the set ^_e โฉB_e must have exactly the right size:
|^_e โฉB_e| = |'_e|-h^_e-h^_e-h^_e-|A_e|=|'_e|-(|_e'|-h^_e)-|A_e|=โ_e.
Thus, byย <ref>, for each e โ E(R) we can find _e with image _e(E(H_))=^_e.
Extending by all of these _e completes the transversal embedding.
ยง.ยง Proof of Theoremย <ref>
We may assume without loss of generality that 1/m โช,ฮผ,โชฮฝ,d,,1/,1/r โค 1/2.
Choose an additional constant ' with โช' โชฮฝ,d,,1/,1/r such that the conclusion of Lemmaย <ref> holds with ,',d,,3 playing the roles of ,',d,,k.
We may further assume that the conclusion of Theoremย <ref> holds with ',d^2/2 playing the roles of ,d and the other parameters unchanged.
Let G,,R,V:=(V_1,โฆ,V_r) be as in the statement of Theoremย <ref>. Let ฯ:V(H)โ V(R) be such that ฯ(x)=j whenever x โ A_j. So ฯ is a graph homomorphism.
Our hypothesis is that F=(V,,G) is a rainbow half-super R-template with parameters (m,,d,).
Lemmaย <ref>(iv) implies that for all c โ there is G_c' โ G_c such that, writing G' := (G_c':c โ), the template F'=(V,,G') is super with parameters (m,',d^2/2,).
Apply Theoremย <ref> to F' to obtain the required embedding.
ยง PROOF OF THEOREMSย <REF> ANDย <REF>
In this section we prove two applications of our transversal blow-up lemma, to super uniformly dense graph collections (Theoremย <ref>) and super uniformly dense 3-graphs (Theoremย <ref>).
The latter is an easy consequence of the former.
Let ,d,,>0 be given and let r := +1.
Choose additional constants such that
0<1/n_0 โชฮท,ฮผโชโช_1 โช_2 โชโฆโช_r2+1โชฮฝ' โชโช,d,1/,
where we are assuming without loss of generality that โช,d/1/, and
so that the conclusion of Lemmaย <ref> holds with 3,ฮท^1/4,,^2/6 playing the roles of k,,',d,
and Lemmaย <ref> holds with n_0/r,,โ(_1),^4/72 playing the roles of m,,,d
and Theoremย <ref> holds with
_2^2 n_0/2,โ(),2ฮผ,_1^1/3,ฮฝ',^9,/2 playing the roles of
m,,ฮผ,,ฮฝ,d,.
Let n โฅ n_0 be an integer and let G=(G_c: c โ) be a graph collection on a vertex set V of size n and H a graph on n vertices satisfying the conditions of the theorem.
By assumption, n โค e(H)=|| โค n.
Let m := โ n/rโ.
The Hajnal-Szemerรฉdi theorem implies that there is a partition V(H) = A_1 โชโฆโช A_r into parts of size m and m+1 such that all edges go between different parts.
Let V=V_1 โชโฆโช V_r be a random partition of V with |V_i|=|A_i| for all i โ [r], and let V := {V_1,โฆ,V_r}. We claim that with high probability, the following hold for any ij โ E(K_r)=[r]2:
* for all V_h' โ V_h with |V_h'| โฅฮท^1/4|V_h| for h=i,j and ' โ with |'| โฅฮท^1/4||, we have โ_c โ'e_G_c(V_i',V_j') โฅ d|'||V_i'||V_j'|/2;
* for each labelling {g,h}={i,j} and v โ V_g, we have โ_c โd_G_c(v,V_h) โฅ^2|V_h|||/6;
* e(G_c) โฅ^2|V_i||V_j|/6 for all c โ.
Forย <ref>, we have
ฮท n^3 < (d/2) ยทฮท^3/4 n^3/(2r)^2 < dฮท^3/4 nm^2/2
โค d(ฮท^1/4)^3|||V_i||V_j|/2 โค d|'||V_i'||V_j'|/2.
The statement then follows directly from the definition of (d,ฮท)-dense.
To proveย <ref>, for each vertex v โ V, there are at least ||/2 colours c โ for which d_G_c(v) โฅ n/2, and a Chernoff-type bound implies that, with high probability d_G_c(v,V_j) โฅ|V_j|/3.
Partย <ref> is proved similarly by swapping colour and vertex.
Partsย <ref>โ<ref> imply that the 3-graph G^(3),ij of G^ij := (G_c[V_i,V_j]: c โ)
is (ฮท^1/4,^2/6)-half-superregular.
Lemmaย <ref> and our choice of parameters implies that every G^(3),ij contains a spanning subhypergraph J^(3),ij which is (,^4/72)-superregular.
Thus F := (V,,J) is a super K_r-template with parameters (m,,^4/72,), where J^ij is the graph collection whose 3-graph is J^(3),ij
and _ij = for all ij โ E(K_r).
Let d_ij:=e(H[A_i,A_j])/n for all ij โ E(K_r). We have โ_ij d_ij =e(H)/n โฅ.
Thus there is at least one ij such that d_ijโฅ_r2+1.
By the pigeonhole principle, there is โโ [r2] such that for all ij,
either d_ijโค_โ, or d_ijโฅ_โ+1.
Let P^< := {ij โ[r]2: d_ijโค_โ}.
We will embed those sparse H[A_i,A_j], with indices ij in P^<, using Lemmaย <ref> (Embedding lemma with target and candidate sets), while the remaining (dense) pairs will be embedded using Theoremย <ref> (Transversal blow-up lemma) with target sets from the initial embedding of sparse pairs.
Let X be the union of non-isolated vertices in H[A_i,A_j] over all ij โ P^<,
let Y := N_H(X) X, and let H^< := H[X โช Y] H[Y], whose edge set is precisely those edges of H incident to X.
We have v(H^<) โค 2r2_โ n โคโ(_โ)m.
Apply Lemmaย <ref> to F
with m,,โ(_โ),ฮฝ',1,^4/72,,,r playing the roles of m,,,ฮฝ',ฮฝ,d,,,r, and T_w:=V_i for all w โ V(H^<) โฉ A_i and i โ [r], to find injective maps ฯ:X โ V and : E(H^<) โ such that
ฯ(x)ฯ(x') โ J_(xx') for all xx' โ E(H[X]),
for all i โ [r] we have
ฯ(x) โ V_i for all x โ A_i, and for all y โ Y โฉ A_i there exists C_y โ V_i ฯ(X) such that C_y โโ_x โ N_H^<(y) โฉ XN_G_(xy)(ฯ(x)) and |C_y| โฅฮฝ' m.
Let ':= (E(H^<)) and let V':={V_1',โฆ,V_r'} where V_i':= V_iฯ(X) for each iโ [r].
By Lemmaย <ref>(ii) applied with = โ(_โ), the template (V',',J') induced by V',' is super with parameters (m/2,2,^4/144,/2).
Let H^> := H-X.
Note that H^> (with respect to its slightly smaller vertex set) is 2ฮผ-separable.
Now we randomly partition ' into r2 (some perhaps empty) parts '=โ_ijโ E(K_r)โ_ij such that |_ijโ|=e(H^>[V_i',V_j'])
for all ij โ[r]2.
This is possible since |'|=||-e(H^<)=e(H)-e(H^<)=e(H^>).
Let Fโ := (V',โ,J'),
where each โ_ij is as above (so โ = ').
Each colour set is either large or empty; indeed, if ij โ P^<, then _ijโ=โ
, but otherwise we have
|_ijโ| โฅ e(H[A_i,A_j])-e(H^<) โฅ_โ+1n-โ(_โ)m โฅ_โ+1n/2 โฅ_โ+1|'|/(2) โฅ_โ+1^2|'|.
Lemmaย <ref>(iii) applied with _โ+1^2,1 playing the roles of ,k
implies that Fโ
is a rainbow super template with parameters (_โ+1^2 m/2,โ(),^9,/2).
Apply Theoremย <ref> (transversal blow-up lemma) with target sets C_y of size at least ฮฝ'm for the at most โ(_โ)m โค_โ^1/3_โ+1^2 m/2 vertices y โ Y,
and parameters
_โ+1^2 m/2,โ(),2ฮผ,_1^1/3,ฮฝ',^9,/2 playing the roles of
m,,ฮผ,,ฮฝ,d,
to obtain a transversal embedding of H^> inside Fโ such that every y โ Y is embedded inside C_y.
Together with the embedding of H^<, this gives a transversal embedding of H.
Let ,d,>0 be given.
Without loss of generality, we may assume that โช d,1/.
Choose constants ฮท',ฮผ,n_0>0 such that the conclusion of Theoremย <ref> holds, applied with ,d,1/5,^3 playing the roles of ,d,,.
Let ฮท := ฮท'/(2).
Let n โฅ n_0 be an integer and let G be a (d,ฮท)-dense 3-graph on n vertices with d_G(v) โฅ n^2 for all v โ V(G).
Let H be a ฮผ-separable graph with (H) โค and v(H)+e(H) โค n.
First, if H is small, we will enlarge it, as follows.
If e(H) > n/4-1, let H' := H.
Otherwise, obtain H' from H by successively adding an edge between isolated vertices until e(H)>n/4-1.
Let H' be the obtained graph restricted to non-isolated vertices.
We have e(H') โค n/4, which implies that
v(H')+e(H') โค 4e(H') โค n.
In both cases, we have (H') โค, and H' is ฮผ-separable;
e(H')>n/4-1 and v(H') โฅ 3e(H')/ > 3(n-4)/(4)>n/(2);
and v(H')+e(H') โค n.
Let V, be disjoint subsets of V(G) chosen uniformly at random subject to
|V|=v(H') > n/(2) and ||=e(H') > n/4-1.
Standard Chernoff-type bounds imply that, with high probability,
for every v โ V we have
e(G[{v},V,]) โฅ^2|V|||/6 โฅ^2n^2/(49) โฅ^3n^2 for all v โ V
|{xyc โ E(G): x,y โ V}| โฅ^2|V|^2/12 โฅ^2n^2/(48^2) โฅ^3 n^2
for all c โ.
For each c โ, let G_c be the 2-graph with vertex set V and edge set {xy: x,y โ V and xyc โ E(G)}, and let G:=(G_c: c โ). Thus
โ_c โd_G_c(v,V)=e(G[{v},V,]) โฅ^3n^2, and for every c โ we have e(G_c) โฅ^3n^2.
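As an aside, the passage from the 3-graph G to the graph collection G:=(G_c: c โ) used here is purely mechanical; a schematic sketch (a hypothetical helper, for illustration only) is:

def colour_graphs(G_edges, V, C):
    # G_edges: iterable of 3-element sets, the edges of the 3-graph; V and C are the chosen
    # (disjoint) vertex and colour sets; returns, for each c in C, the 2-graph with edge set
    # {xy : x, y in V and xyc an edge of the 3-graph}
    Gc = {c: set() for c in C}
    for e in G_edges:
        colours = [u for u in e if u in C]
        verts = [u for u in e if u in V]
        if len(colours) == 1 and len(verts) == 2:
            Gc[colours[0]].add(frozenset(verts))
    return Gc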
Since G is (d,ฮท)-dense and |V| โฅ n/(2), we have that G is (d,ฮท')-dense.
Theoremย <ref> applied with ,d,1/5,^3 playing the roles of ,d,, implies that G
contains a transversal copy of H' and thus a transversal copy of H, that is, there are injective maps ฯ: V(H) โ V and : E(H) โ so that ฯ(x)ฯ(y) โ G_(xy) for every xy โ E(H).
Define ฯ: V(H) โช E(H) โ V(G) by setting ฯ(x):=ฯ(x) for x โ V(H) and ฯ(e):=(e) for e โ E(H).
Then ฯ is injective and ฯ(x)ฯ(y)ฯ(xy) โ E(G) for all xy โ E(H); that is, ฯ defines a copy of the 1-expansion of H in G, as required.
ยง CONCLUDING REMARKS
In this paper, we have proved a transversal blow-up lemma that embeds separable graphs into graph collections, which can be used to apply the regularity-blow-up method to transversal embedding problems. We conclude with some remarks on future directions.
Separability.
In our proof of the transversal blow-up lemma, the separability condition is necessary because we need to divide the graph into linear-sized pieces and then use the blow-up lemma to embed them piece by piece. For this embedding process to work, the number of edges between different pieces should not be too large. We wonder if our transversal blow-up lemma can be generalised to embed any graph with bounded maximum degree. If such a version can be proven, it would directly generalise the original blow-up lemma for graphs (since a collection of identical superregular pairs is a superregular collection).
Future applications to transversal embedding.
In a subsequent paper, we will utilise the transversal blow-up technique developed in this paper in combination with the absorption method to provide a new proof of the transversal version of the approximate Pรณsa-Seymour conjecture, which was recently established in <cit.>, and is a special case of the yet more recent main result ofย <cit.>. Additionally, we will prove a stability result for transversal Hamilton cycles that has not been demonstrated before.
However, using our method to obtain transversal versions of embedding results proved using the regularity blow-up method does not seem quite as straightforward as simply following the same proof. The regularity lemma produces some exceptional vertices which need to be incorporated into the structure built between vertex clusters: they are `absorbed' by clusters.
For transversal embedding of a spanning graph H, one needs to insert the exceptional vertices as well as some exceptional colours into the structure built between vertex clusters and colour clusters. Such an insertion may make some new colours exceptional.
Right at the end of the process, inserting vertices and colours simultaneously becomes difficult, when there are no more `spare colours'.
Thus, we need to construct an absorption set for the remaining vertices and colours prior to embedding H.
Applications to hypergraph embedding.
We state the full 3-graph version of our transversal blow-up lemma.
[weak 3-graph blow-up lemma]
Let 0 < 1/m โช,ฮผ,โชฮฝ,d,,1/,1/r โค 1.
* Let R be a 2-graph with vertex set [r].
* Let G be a 3-graph with parts V_1,โฆ,V_r and V_ij for ij โ E(R) where m โค |V_i|โค m/ for all i โ [r],
and |V_ij|โฅ m for all ij โ [r].
Suppose that G[V_i,V_j,V_ij] is weakly (,d)-(half-)superregular for all ij โ E(R).
* Let H be a ฮผ-separable 2-graph with (H) โค for which there is a graph homomorphism ฯ: V(H) โ V(R) with |ฯ^-1(i)|=|V_i| for all i โ [r] and e(H[ฯ^-1(i),ฯ^-1(j)]) = |V_ij| for all ij โ E(R).
* Suppose that for each i โ [r] there is a set U_i โฯ^-1(i) with |U_i| โค m and T_x โ V_ฯ(x) with |T_x| โฅฮฝ m for all x โ U_i.
Then G contains a copy of the 1-expansion of H, where each vertex x โ V(H) is mapped to V_ฯ(x) and the new vertex for xy โ E(H) is mapped to V_ฯ(x)ฯ(y); and moreover, for every i โ [r] every x โ U_i is mapped to T_x.
This is a reformulation of Theoremย <ref> since for each ij โ E(R), the 3-graph G^(3)_ij of (G_c: c โ_ij) in Theoremย <ref> is weakly half-superregular, so we simply take V_ij := _ij.
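To spell the correspondence out in the other direction, the 1-expansion itself is easy to describe; a schematic sketch (with an arbitrary naming convention for the new vertices) is:

def one_expansion(H_edges):
    # each 2-edge {x, y} of H gains one brand-new vertex, distinct for distinct edges,
    # and the resulting triples are the edges of the 1-expansion of H
    return [frozenset((x, y, ('new', x, y))) for (x, y) in H_edges]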
It would be interesting to extend Theoremย <ref> on embeddings in uniformly dense 3-graphs beyond 1-expansions of separable 2-graphs.
It is not clear which 3-graphs one should expect to be able to embed in a weakly superregular triple (or in a uniformly dense 3-graph).
Even the case of 3-partite 3-graphs seems difficult; that is, to extend Theoremย <ref> (simplified weak hypergraph blow-up lemma).
Given 0<1/n โชโช d โชโค 1, which J are subhypergraphs of any weakly (,d)-superregular triple G[V_1,V_2,V_3], where n โค |V_1|,|V_2|,|V_3| โค n/?
A 3-partite 3-graph J is equivalent to an edge-coloured bipartite 2-graph J (where each edge can receive multiple colours) using our usual identification of one part with a set of colours.
If J has (J) โค, then (J) โค and the colouring is -bounded.
(If J is linear, then the edge-colouring is proper.)
Thus we would like to extend Theoremย <ref> (simplified weak transversal blow-up lemma) to find an embedding of J with a given edge-colouring up to permutation of colours.
If every colour only appears on a single edge, that is, J is an expansion of some 2-graph H where the 2-edge e is replaced by some bounded number t_e โฅ 1 of 3-edges (which is not linear if some t_e>1), it seems plausible that our techniques would work.
However, the colour absorption we use fromย <cit.> does not seem to be able to deal with colours playing multiple roles.
Let F be a 3-partite 3-graph of fixed size.
Recall that an old result of Erdลsย <cit.> implies that any large 3-graph of positive density contains a copy of F.
However, the results ofย <cit.> imply that any large uniformly dense 3-graph G (whose number of vertices is a multiple of v(F) and where every vertex sees a positive fraction of pairs) contains an F-factor if and only if there is a vertex v^* โ V(F) such that for any two edges e,e' where e contains v^* and e' does not,
e and e' share at most one vertex.
Probably, the characterisation for weakly superregular triples with balanced classes is the same.
Another instructive case is the tight Hamilton cycle (whose number of vertices is divisible by 6). As a graph collection problem, this corresponds to finding a (2-graph) Hamilton cycle on 4n vertices with 2n colours, where, cyclically labelling the edges e_1,โฆ,e_4n, the consecutive edges e_2c-1,e_2c,e_2c+1 are all in G_c, for each c โ [2n] where indices are taken modulo 4n. A construction inย <cit.> (see alsoย <cit.>) shows that a weakly superregular triple need not contain a tight Hamilton cycle, even if its density is close to 1/8.
Let V be a vertex set of size 6n, and let V=V_1 โช V_2 โช V_3 be a partition, where V_1,V_2,V_3 have equal size 2n, and let X โ V where |X โฉ V_1| is odd.
Independently for each distinct i,j โ [3] add uniform random edges between parts V_i,V_j with probability 1/2, to obtain a 3-partite 2-graph J.
Now form a 3-partite 3-graph G by, for each triple abc โ A ร B ร C, as follows:
* add abc to G if |{a,b,c}โฉ X| is even and {a,b,c} spans a triangle in J.
* add abc to G if |{a,b,c}โฉ X| is odd and {a,b,c} spans an independent set in J.
Suppose v^1_1v^1_2v^1_3โฆ v^2n_1v^2n_2v^2n_3 is a tight cycle H in G, where without loss of generality, v^j_i โ V_i for all j. For any 1 โค j < 2n, since v^j_1v^j_2v^j_3 and v^j_2v^j_3v^j+1_1 are edges of H and both contain the pair v^j_2v^j_3 (so they either both span triangles or both span independent sets in J), we have that
|{v^j_1,v^j_2,v^j_3}โฉ X|, |{v^j+1_1,v^j_2,v^j_3}โฉ X| are both odd or both even. This implies that v^j_1,v^j+1_1 are either both in X or neither in X.
By considering the disjoint pairs v_1^1v_1^2,v_1^3v_1^4,โฆ,v_1^2n-1v_1^2n, we see that |X โฉ V_1| must be even, contradicting the assumption that |X โฉ V_1| is odd.
It can easily be checked using Chernoff bounds (seeย <cit.>) that, if every |X โฉ V_i| has size about n, then with high probability G is weakly (1/8,o(1))-superregular.
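For readers who wish to experiment, a schematic Python sketch of the construction above (with hypothetical small parameters; it is only an illustration and plays no role in the argument) is:

import itertools, random

def parity_construction(n, seed=0):
    rng = random.Random(seed)
    V = [list(range(i * 2 * n, (i + 1) * 2 * n)) for i in range(3)]   # V_1, V_2, V_3, each of size 2n
    X = {v for part in V for v in part if rng.random() < 0.5}
    if len(X & set(V[0])) % 2 == 0:                                   # make the intersection of X with V_1 odd
        X ^= {V[0][0]}
    J = set()                                                         # random 3-partite 2-graph J
    for i, j in itertools.combinations(range(3), 2):
        for a in V[i]:
            for b in V[j]:
                if rng.random() < 0.5:
                    J.add(frozenset((a, b)))
    G = []                                                            # the 3-partite 3-graph G
    for a, b, c in itertools.product(*V):
        pairs = [frozenset(p) for p in ((a, b), (a, c), (b, c))]
        even = len({a, b, c} & X) % 2 == 0
        triangle = all(p in J for p in pairs)
        independent = not any(p in J for p in pairs)
        if (even and triangle) or (not even and independent):
            G.append((a, b, c))
    return V, X, J, G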
ยง APPENDIX
ยง.ยง Weak hypergraph regularity lemmas
We first introduce the following special form of the weak hypergraph regularity lemma for k-partite k-graphs. Its proof follows easily from the original lemma of Chungย <cit.>, which is very similar to the 2-graph regularity lemma of Szemerรฉdiย <cit.>.
For every L_0,kโฅ 1 and every ,>0, there is an n_0>0 such that for every k-partite k-graph G on n โฅ n_0 vertices with parts V_1,โฆ,V_k where nโค |V_i|โค n/, there exists a refined partition V_i=โ_j=1^t_iV_i,j for each iโ [k] such that the following properties hold:
(i) L_0 โค t:=t_1+โฆ+t_k โค n_0;
(ii) ||V_i,j|-|V_i',j'|| โค 1 for any i,i'โ[k], jโ [t_i] and j'โ [t_i'];
(iii) all but at most t^k k-tuples (V_1,j_1,โฆ,V_k,j_k) are -regular.
The following weak hypergraph regularity lemmaย <cit.> is proved in the same way as the original
degree form of the regularity lemma (seeย <cit.>), which in turn can be derived from the standard regularity lemma via some cleaning.
[Degree form of the weak hypergraph regularity lemma]
For all integers L_0 โฅ 1 and every >0, there is an n_0=n_0(,L_0) such that
for every d โ [0,1) and for every 3-graph G=(V,E) on n โฅ n_0 vertices there
exists a partition of V into V_0,V_1,โฆ,V_L and a spanning subhypergraph G' of G such
that the following properties hold:
(i) L_0 โค L โค n_0 and |V_0| โค n;
(ii) |V_1|=โฆ=|V_L|=: m;
(iii) d_G'(v) > d_G(v)-(d+)n^2 for all v โ V;
(iv) every edge of G' with more than one vertex in a single cluster V_i for some i โ[L]
has at least one vertex in V_0;
(v) for all triples {h,i,j}โ[L]3, we have that
(V_h,V_i,V_j)_G' is either empty or (,d)-regular.
The proof of Lemmaย <ref> (the regularity lemma for graph collections) is a routine but tedious consequence of this theorem applied to the 3-graph G^(3) of a graph collection G.
By increasing L_0 and decreasing , as necessary, we may assume without loss of generality that 0 < 1/L_0 โชโชโช 1.
Let L_0' := 2L_0/.
Choose new constants ',, which we may assume satisfy 0 < 1/L_0' โช' โชโช.
Let n_0' be obtained from Theoremย <ref> applied with parameters ',L_0'.
We may assume that 1/n_0' โช 1/L_0' by increasing n_0'.
Altogether,
0 < 1/n_0' โช 1/L_0' โช' โชโชโชโช 1.
Let n_0 := 2n_0'/.
Let n โฅ n_0 be an integer and suppose that G is a graph collection on a vertex set V of size n and with colour set , where n โค || โค n/.
Let d โ [0,1) and let G^(3) be the 3-graph of G with vertex set U := V โช.
Theoremย <ref> implies that there is a partition U_0,U_1,โฆ,U_K of U
and a spanning subhypergraph G^(3)' of G^(3) such that
* L_0' โค K โค n_0' and |U_0| โค'|U|;
* |U_1|=โฆ=|U_K|=: m';
* d_G^(3)'(u) > d_G^(3)(u)-(2d+')|U|^2 for all u โ U;
* every edge of G^(3)' with more than one vertex in a single cluster U_i has at least one vertex in U_0;
* for all triples {h,i,j}โ[K]3, we have that (U_h,U_i,U_j)_G^(3)' is either empty or (',2d)-regular.
Partition each cluster U_i, i โ [K], into 1/ subclusters of size at most m so that all but at most two subclusters of U_i have size exactly m and the property that they lie entirely within V, or within .
If a subcluster does not have this property, add it to U_0.
The new exceptional set has size at most |U_0|+2 m'K, and we let V_0 be its intersection with V, and _0 its intersection with .
Relabel the subclusters so that those which are subsets of V are V_1,โฆ,V_L and those which are subsets of are _1,โฆ,_M.
Let G'_c be the graph with vertex set V and edge set {xy: xyc โ G^(3)'} for all c โ.
We claim that the properties of the lemma are satisfied.
Forย (i), we have m'K โค |U| so
|V_0|+|_0| โค |U_0|+2 m'K <ref>โค ('+2)|U| โค ('+2)(n+n/) โค n.
Also,
L_0 โค L_0' โค L_0'/โค K/โค L+M โค 2K/โค 2n_0'/=n_0,
proving the required upper bound for both L and M.
Furthermore, m(L+M) โค n+||
so
L โฅn-|V_0|/m โฅ(n-|V_0|)(L+M)/n+||โฅ(n-|V_0|)L_0'/n+n/โฅ L_0'/2 = L_0
and similarly for M.
Partย (ii) follows by construction.
Forย (iii), we have โ_c โd_G_c'(v)=d_G^(3)'(v) for all v โ V, and e(G_c')=d_G^(3)'(c) for all c โ,
and similarly for G_c. Further, (2d+')(n+||)^2<(2d+')(1+1/)^2n^2โค(3d/^2+2'/^2)n^2 โค (3d/^2+)n^2. Thusย <ref> implies the required.
Forย (iv), if G'_c has an edge xy with x,y โ V_i for some i โ [L], then x,y โ U_i' for the cluster U_i' containing V_i. Since xyc โ E(G^(3)'),ย <ref> implies that c โ U_0. Thus c โ_0, as required.
Forย (v), suppose that ({h,i},j) โ[L]2ร M and G'_hi,j is non-empty.
Let h',i',j' โ [K] be such that V_h โ U_h', V_i โ U_i' and _j โ U_j'.
Since G'_hi,j is non-empty, we have that (U_h',U_i',U_j')_G^(3)' is non-empty.
So h',i',j' are distinct byย <ref>.
Partย <ref> implies that (U_h',U_i',U_j')_G^(3)' is (',2d)-regular.
Therefore, for each c โ U_j', letting J_c be the bipartite graph with partition (U_h',U_i') and edge set {xy: x โ U_h', y โ U_i', xyc โ G^(3)'}, we have that J := (J_c: c โ U_j') is (',2d)-regular.
Lemmaย <ref>(i) (the slicing lemma) implies that G'_hi,j is ('/,d)-regular and thus (,d)-regular.
This completes the proof.
We conclude the appendix with a proof of Lemmaย <ref>, which states that a half-superregular (k-graph) k-tuple contains a spanning superregular subhypergraph. The proof is the same as the one inย <cit.> for k=2.
Choose a new parameter โ such that 0<โชโโช'.
Apply Lemma <ref> to G with parameters L_0 := 1, โ, to obtain n_0>0. Increasing n_0,n and decreasing if necessary, we may assume that 1/n โชโช 1/n_0 โชโ.
Now let G=(V_1,โฆ,V_k)_G be an (,d)-half-superregular k-graph with n โค |V_i| โค n/ for all i โ [k].
Lemmaย <ref> implies that there is a refinement V_i=โ_j=1^t_iV_i,j for each iโ [k] such that
t := t_1+โฆ+t_k โค n_0, every pair of subparts differ in size by at most one, and
all but at most โ t^3k-tuples (V_1,j_1,โฆ,V_k,j_k) are โ-regular.
Obtain a spanning subhypergraph G' of G by, for each โ-regular triple (X_1,โฆ,X_k) of subparts,
removing every k-edge in (X_1,โฆ,X_k) independently at random with probability 1-d/d', where d' is the density of (X_1,โฆ,X_k).
(Note that by |X_i|โฅ |V_i|/n_0 โฅ |V_i| for each iโ [k], we have d'โฅ d due to half-superregularity.)
We claim that, with high probability,
* d_G'(v)โฅ d^2(|V_1|โฆ|V_k|)/(2|V_i|) for any iโ[k] and vโ V_i,
* any โ-regular k-tuple (of subparts) in G is 2โ-regular in G',
* any โ-regular k-tuple (of subparts) in G has density d ยฑ 2โ in G'.
This follows from Chernoff bounds.
Now we claim that, given that the above hold, G' is (',d^2/2)-superregular. We only need to check that G' is (',d^2/2)-regular. Let (A_1,โฆ,A_k) be any k-tuple where A_iโ V_i and |A_i|โฅ'|V_i| for all iโ [k].
For each iโ [k], we have partition A_i=โ_j=1^t_i(A_iโฉ V_i,j). We say a part A_i,j := A_iโฉ V_i,j is big if |A_iโฉ V_i,j|โฅโ |V_i,j| and otherwise it is small. Note that in total we have at most โ (|V_1|+โฆ+|V_k|)โคโ kn/ vertices in small parts.
Also note that we have at most โ t^k non-regular k-tuples that contain in total at most โ (n/)^k edges.
Thus
e_G'(A_1,โฆ,A_k) = โ_(A_1,j_1,โฆ,A_k,j_k) all big and (V_1,j_1,โฆ,V_k,j_k) โ-regular e_G'(A_1,j_1,โฆ,A_k,j_k) ยฑ(2โ kn/ ยท (n/)^k-1 + โ (n/)^k).
For every summand index (j_1,โฆ,j_k), we have e_G'(A_1,j_1,โฆ,A_k,j_k) = (dยฑ 2โ)|A_1,j_1|โฆ|A_k,j_k| by 2โ-regularity.
Thus
e_G'(A_1,โฆ,A_k) = (d ยฑ 2โ)|A_1|โฆ|A_k| ยฑ 3โ kn^k/^k
= (d ยฑ')|A_1|โฆ|A_k|.
This implies that G' is (',d^2/2)-superregular, which completes the proof.
|
http://arxiv.org/abs/2306.02159v1
|
20230603170513
|
Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm
|
[
"Arya Akhavan",
"Evgenii Chzhen",
"Massimiliano Pontil",
"Alexandre B. Tsybakov"
] |
math.ST
|
[
"math.ST",
"stat.ML",
"stat.TH"
] |
Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm
Arya Akhavan [email protected]
CSML, Istituto Italiano di Tecnologia
CMAP, รcole Polytechnique, IP Paris
Evgenii Chzhen [email protected]
CNRS, LMO,
Universitรฉ Paris-Saclay
Massimiliano Pontil [email protected]
CSML, Istituto Italiano di Tecnologia
University College London
Alexandre B. Tsybakov [email protected]
CREST, ENSAE, IP Paris
July 31, 2023
This work studies minimization problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth and possibly satisfies additional properties. We consider two kinds of zero-order projected gradient descent algorithms, which differ in the form of the gradient estimator. The first algorithm uses a gradient estimator based on randomization over the โ_2 sphere due to <cit.>. We present an improved analysis of this algorithm on the class of highly smooth and strongly convex functions studied in the prior work, and we derive rates of convergence for two more general classes of non-convex functions. Namely, we consider highly smooth functions satisfying the Polyak-ลojasiewicz condition and the class of highly smooth functions with no additional property. The second algorithm is based on randomization over the โ_1 sphere, and it extends to the highly smooth setting the algorithm that was recently proposed for Lipschitz convex functions in <cit.>. We show that, in the case of noiseless oracle, this novel algorithm enjoys better bounds on bias and variance than the โ_2 randomization and the commonly used Gaussian randomization algorithms, while in the noisy case both โ_1 and โ_2 algorithms benefit from similar improved theoretical guarantees.
The improvements are achieved thanks to new proof techniques based on Poincarรฉ type inequalities for uniform distributions on the โ_1 or โ_2 spheres. The results are established under weak (almost adversarial) assumptions on the noise. Moreover, we provide minimax lower bounds proving optimality or near optimality of the obtained upper bounds in several cases.
ยง INTRODUCTION
In this work, we study the problem of gradient-free optimization for certain types of smooth functions. Let f: ^d โ and โ^d. We are interested in solving the following optimization problem
f^โโinf_โ f(),
and we assume that f^โ is finite. One main theme of this paper is to exploit higher order smoothness properties of the underlying function f in order to improve the performance of the optimization algorithm. We consider that the algorithm has access to a zero-order stochastic oracle, which, given a point โ^d returns a noisy value of f(), under a general noise model.
We study two kinds of zero-order projected gradient descent algorithms, which differ in the form of the gradient estimator.
Both algorithms can be written as an iterative update of the form
_1 โ and _t + 1 = _(_t - ฮท_t _t) t=1, 2,โฆ,
where _t is a gradient estimator at the point _t, ฮท_t>0 is a step size, and _(ยท) is the Euclidean projection operator onto the set .
In either case, the gradient estimator is built from two noisy function values, that are queried at two random perturbations of the current guess for the solution, and it involves an additional randomization step. Also, both algorithms invoke smoothing kernels in order to take advantage of higher order smoothness, following the approach initially suggested in <cit.>. The first algorithm uses a form of โ_2 randomization introduced by <cit.> and it has been studied in the context of gradient-free optimization of strongly convex functions in <cit.>. The second algorithm is an extension, by incorporating smoothing kernels, of the approach proposed and analysed inย <cit.> for online minimization of Lipschitz convex functions. It is based on an alternative randomization scheme, which uses โ_1-geometry in place of the โ_2 one.
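To fix ideas, a minimal sketch of one iteration of the update above with an โ_2-randomized two-point estimator might look as follows (the precise kernel, perturbation parameter h and step size are those specified later in the paper and in the cited works; the function and variable names below are illustrative only):

import numpy as np

def l2_two_point_gradient(f_noisy, x, h, kernel, rng):
    # two noisy oracle queries at symmetric perturbations of x along a random direction,
    # combined with a smoothing kernel evaluated at a scalar randomization
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)            # uniform direction on the l2 unit sphere
    r = rng.uniform(-1.0, 1.0)
    y_plus = f_noisy(x + h * r * zeta)
    y_minus = f_noisy(x - h * r * zeta)
    return (d / (2.0 * h)) * (y_plus - y_minus) * kernel(r) * zeta

def projected_step(x, grad_estimate, eta, project):
    # one zero-order projected gradient step; project is the Euclidean projection onto the constraint set
    return project(x - eta * grad_estimate)

Here rng is, say, np.random.default_rng(), and for unconstrained problems project can simply be the identity map.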
A principal goal of this paper is to derive upper bounds on the expected optimization error of both algorithms under different assumptions on the underlying function f. These assumptions are used to set the step size in the algorithms and the perturbation parameter used in the gradient estimator. Previous works on gradient-free optimization of highly smooth functions considered mostly the strongly convex caseย <cit.>.
In this paper, we provide a refined analysis of the strongly convex case, improving the dependence on the dimension d and the strong convexity parameter ฮฑ for the algorithm with โ_2 randomization, and showing analogous results for the new method with โ_1 randomization.
For the special case of strongly convex functions with Lipschitz gradient, we find the minimax optimal dependence of the bounds on all the three parameters of the problem
(namely, the horizon T, the dimension d and ฮฑ) and we show that both algorithms attain the minimax rate, which equals ฮฑ^-1d/โ(T). This finalizes the line of work starting from <cit.>, where it was proved that optimal dependence on T is of the order 1/โ(T), and papers proving that optimal dependence on d and T scales as d/โ(T) (<cit.> establishing a lower bound and <cit.> giving a complete proof, see the discussion in Section <ref>). Furthermore, we complement these results by considering highly smooth but not necessary convex functions f, and highly smooth functions f, which additionally satisfy the gradient dominance (Polyak-ลojasiewicz) condition. To this end, we develop unified tools that can cover a variety of gradient estimators, and then apply them to the algorithm with โ_2 randomization and to our new algorithm based on โ_1 randomization. We show that, in the case of noiseless oracle, this novel algorithm enjoys better bounds on bias and variance than its โ_2 counterpart, while in the noisy case, both algorithms benefit from similar theoretical guarantees.
The improvements in the analysis are achieved thanks to a new method of evaluating the bias and variance of both algorithms based on Poincarรฉ type inequalities for uniform distributions on the โ_1 or โ_2 spheres. Moreover, we establish all our upper bounds under very weak (almost adversarial) assumptions on the noise.
ยง.ยง Summary of the upper bounds
In this subsection, we give a high-level overview of the main contributions of this work. Apart from the improved guarantees for the previously studied function classes,
one of the main novelties of our work is the analysis in the case of a non-convex smooth objective function f, for which we provide a convergence rate to a stationary point.
Furthermore, we study the case of ฮฑ-gradient dominant f, a popular relaxation of strong convexity, which includes non-convex functions. To the best of our knowledge, the analysis of noisy zero-order optimization in these two cases is novel.
In Sectionย <ref> we derive minimax lower bounds and discuss the optimality or near optimality of our convergence rates.
In the following we highlight the guarantees that we derive for the two analysed algorithms.
Each of the guarantees differs in the dependency on the main parameters of the problem, which is a consequence of the different types of available properties of the objective function. Let us also mention that we mainly deal with the unconstrained optimization case, = ^d. This is largely due to the fact that the Polyak-ลojasiewicz inequality is mainly used in the unconstrained case and the exertion of this condition to the constrained case is still an active area of researchย <cit.>. Meanwhile, for the strongly convex case, as in previous works <cit.>, we additionally treat the constrained optimization. In this section, we only sketch our results in the case = ^d.
We would like to emphasize that we do not assume the measurement noise to have zero mean. Moreover, the noise can be non-random and no independence between noises on different time steps is required, so that the setting can be considered as almost adversarial.
Rate of convergence under only smoothness assumption. Assume that f is a β-Hölder function with Lipschitz continuous gradient, where β ≥ 2. Then, after 2T oracle queries, both considered algorithms provide a point x_S satisfying
E[‖∇f(x_S)‖²] ≲ (d²/T)^{(β−1)/(2β−1)} under the assumption that T ≥ d^{1/β},
where S is a random variable with values in {1,…,T}, ‖·‖ denotes the Euclidean norm, and the sign ≲ conceals a multiplicative constant that does not depend on T and d.
To the best of our knowledge, this result is the first convergence guarantee for the zero-order stochastic optimization under the considered noise model. In a related development,
<cit.> study zero-order optimization of non-convex objective function with Lipschitz gradient, which corresponds to ฮฒ=2. They assume querying two function values with identical noises, in which case the analysis and the convergence rates are essentially analogous to the particular case of our setting with no noise (see the discussion in Section <ref> below). The work of <cit.> studies noiseless optimization of highly smooth functions assuming that the derivatives up to higher order are observed, and <cit.> consider stochastic optimization with first order oracle. These papers cannot be directly compared with our work as the settings are different.
Rate of convergence under smoothness and Polyak-Łojasiewicz assumptions. Assume that f is a β-Hölder function with Lipschitz continuous gradient, with Lipschitz constant L̄. Additionally, let β ≥ 2, and suppose that f satisfies the Polyak-Łojasiewicz inequality with a constant α. Then, after 2T oracle queries, both considered algorithms provide a point x_T, for which the expected optimization error satisfies
E[f(x_T) − f^⋆] ≲ (1/α)(μd²/T)^{(β−1)/β} under the assumption that T ≳ d^{2−β/2},
where μ = L̄/α, and the signs ≲ and ≳ conceal multiplicative constants that do not depend on T, d and α. The Polyak-Łojasiewicz assumption was introduced in the context of first order optimization by <cit.>, who showed that it implies linear convergence of the gradient descent algorithm. Years later, this condition received attention in the machine learning and optimization community following the work of <cit.>. To the best of our knowledge, zero-order optimization under the considered noise model with the Polyak-Łojasiewicz assumption was not previously studied. Very recently, <cit.> studied a related problem under the Polyak-Łojasiewicz assumption when querying two function values with identical noises, which can be compared with the analysis in the particular case of our setting with no noise (see the discussion in Section <ref>). Unlike our work, <cit.>
do not deal with higher order smoothness and do not derive the dependency of the bounds on d, ฮผ and ฮฑ.
Rate of convergence under smoothness and strong convexity.
Assume that f is a β-Hölder function with Lipschitz continuous gradient, where β ≥ 2, and satisfies the α-strong convexity condition. Then, after 2T oracle queries, both considered algorithms provide a point x_T such that
E[f(x_T) − f^⋆] ≲ (1/α)(d²/T)^{(β−1)/β} under the assumption that T ≥ d^{2−β/2},
where ≲ conceals a multiplicative constant that does not depend on T, d and α.
The closest result to ours is obtained in <cit.> and it splits into two cases: β = 2 (Lipschitz continuous gradient) and β > 2 (higher order smoothness). For β = 2, <cit.> deal with a compact Θ and prove a bound with optimal dependence on the dimension (linear in d) but sub-optimal in α, while for β > 2 they derive (for Θ = ℝ^d and for compact Θ) the rate with sub-optimal dimension factor d². Later, <cit.> and <cit.> improved the dimension factor to d^{2−1/β} for β > 2, which still does not match the linear dependence as β → 2.
In contrast, by considering a slightly different definition of smoothness, we provide below a unified analysis leading to the dimension factor d^{2−2/β} for any β ≥ 2, under constrained and unconstrained Θ; the improvement is both in the rate and in the proof technique.
ยง.ยง Notation
Throughout the paper, we use the following notation. For any k ∈ ℕ we denote by [k] the set of the first k positive integers. For any x ∈ ℝ^d we denote by sign(x) the component-wise sign function (defined at 0 as 1). We let ⟨·,·⟩ and ‖·‖ be the standard inner product and Euclidean norm on ℝ^d, respectively.
For every closed convex set Θ ⊂ ℝ^d and x ∈ ℝ^d we denote by Proj_Θ(x) = argmin{‖z − x‖ : z ∈ Θ} the Euclidean projection of x onto Θ.
For any p ∈ [1, +∞] we let ‖·‖_p be the ℓ_p-norm in ℝ^d and we introduce the open ℓ_p-ball
and the ℓ_p-sphere, respectively, as
B^d_p ≜ {x ∈ ℝ^d : ‖x‖_p < 1} and ∂B^d_p ≜ {x ∈ ℝ^d : ‖x‖_p = 1}.
For any β ≥ 2 we let ⌊β⌋ be the largest integer strictly less than β. Given a multi-index m = (m_1,…,m_d) ∈ ℕ^d, we set m! ≜ m_1!⋯m_d! and |m| ≜ m_1 + ⋯ + m_d.
ยง.ยง Structure of the paper
The paper is organized as follows.
In Sectionย <ref>, we recall some preliminaries and introduce the classes of functions considered throughout.ย In Sectionย <ref>, we present the two algorithms that are studied in the paper.
In Section <ref>, we present the upper bounds for both algorithms, and in each of the considered function classes.
In Sectionย <ref>, we establish minimax lower bounds for the zero-order optimization problem. The proofs of most of the results are presented in the appendix.
ยง PRELIMINARIES
For any multi-index m ∈ ℕ^d, any |m| times continuously differentiable function f : ℝ^d → ℝ, and every h = (h_1,…,h_d)^⊤ ∈ ℝ^d we define
D^m f(x) ≜ ∂^{|m|} f(x) / (∂^{m_1}x_1 ⋯ ∂^{m_d}x_d),
h^m ≜ h_1^{m_1}⋯h_d^{m_d}.
For any k-linear form A : (ℝ^d)^k → ℝ, we define its norm as
‖A‖ ≜ sup{ A[u_1,…,u_k] : ‖u_j‖ ≤ 1, j ∈ [k] }.
Whenever u_1 = … = u_k = u we write A[u]^k to denote A[u,…,u].
Given a k times continuously differentiable function f :
ℝ^d → ℝ and x ∈ ℝ^d we denote by f^(k)(x) : (ℝ^d)^k → ℝ the following k-linear form:
f^(k)(x)[u_1,…,u_k]
=
Σ_{|m_1| = ⋯ = |m_k| = 1} D^{m_1 + ⋯ + m_k}f(x) u_1^{m_1}⋯u_k^{m_k}, u_1,…,u_k ∈ ℝ^d,
where m_1,…,m_k ∈ ℕ^d. We note that since f is k times continuously differentiable in ℝ^d, the form f^(k)(x) is symmetric for all x ∈ ℝ^d.
ยง.ยง Classes of functions
We start this section by stating all the relevant definitions and assumptions related to the target function f.
Following <cit.>, we recall the definition of higher order Hölder smoothness.
Fix some β ≥ 2 and L > 0.
We denote by ℱ_β(L) the set of all functions f : ℝ^d → ℝ that are ℓ = ⌊β⌋ times continuously differentiable and satisfy, for all x, z ∈ ℝ^d, the Hölder-type condition
‖f^(ℓ)(x) − f^(ℓ)(z)‖ ≤ L‖x − z‖^{β − ℓ}.
Definition <ref> of higher order smoothness was used byย <cit.> who considered only integer ฮฒ, while
<cit.> use a slightly different definition. Namely, they consider a class ℱ'_β(L) defined as the set of all ℓ times continuously differentiable functions f satisfying, for all x, z ∈ ℝ^d, the condition
|f(x) − T^ℓ_z(x)| ≤ L‖x − z‖^β,
where T^ℓ_z(·) is the Taylor polynomial of order ℓ = ⌊β⌋ of f around z. If f ∈ ℱ_β(L), then f ∈ ℱ'_β(L/ℓ!) (cf. Appendix <ref>). Thus, the results obtained for the classes ℱ'_β hold true for f ∈ ℱ_β modulo a change of constant. Moreover, if f is convex and
β = 2 (Lipschitz continuous gradient), the properties defining the two classes are equivalent to within constants, cf. <cit.>.
Since we study the minimization of highly smooth functions, in what follows we will always assume that f belongs to ℱ_β(L) for some β ≥ 2 and L > 0. We additionally require that f ∈ ℱ_2(L̄) for some L̄ > 0, that is, the gradient of f is Lipschitz continuous.
The function f ∈ ℱ_β(L) ∩ ℱ_2(L̄) for some β ≥ 2 and L, L̄ > 0.
We will start our analysis
by providing rates of convergence to a stationary point of f under Assumption <ref>. The first additional assumption that we consider is the Polyak-ลojasiewicz condition, which is also referred to as ฮฑ-gradient dominance. This condition became rather popular since it leads to linear convergence of the gradient descent algorithm without convexity as shown byย <cit.> and further discussed byย <cit.>.
Let α > 0. A function f : ℝ^d → ℝ is called α-gradient dominant on ℝ^d if f is differentiable on ℝ^d and satisfies the Polyak-Łojasiewicz inequality
2α(f(x) − f^⋆) ≤ ‖∇f(x)‖², ∀x ∈ ℝ^d.
Finally, we consider the second additional condition, which is the ฮฑ-strong convexity.
Let α > 0. A function f : ℝ^d → ℝ is called α-strongly convex on ℝ^d if it is differentiable on ℝ^d and satisfies
f(x) ≥ f(x′) + ⟨∇f(x′), x − x′⟩ + (α/2)‖x − x′‖², ∀x, x′ ∈ ℝ^d.
We recall that ฮฑ-strong convexity implies (<ref>) ย <cit.>, and thus it is a more restrictive property than ฮฑ-gradient dominance.
An important example of a family of functions satisfying the α-gradient dominance condition is given by composing strongly convex functions with a linear transformation. Let n ∈ ℕ, A ∈ ℝ^{n×d}, and define
ℱ(A) = {f : f(x) = g(Ax), g is α-strongly convex}.
Note that if A^⊤A is not invertible then the functions in ℱ(A) are not necessarily strongly convex.
However, it can be shown that
any f ∈ ℱ(A) is an αγ-gradient dominant function, where γ is the smallest non-zero singular value of A <cit.>. Alternatively, we can consider the following family of functions
ℱ′(A) = {f : f(x) = g(Ax), g ∈ C²(ℝ^d), g is strictly convex},
which is a set of α-gradient dominant functions on any compact subset of ℝ^d, for some α > 0. A popular example of such a function appearing in machine learning applications is the logistic loss defined as
g(x) = Σ_{i=1}^{n} log(1 + exp(a_i^⊤x)),
where, for 1 ≤ i ≤ n, a_i is the i-th row of A, and x ∈ ℝ^d. For this and more examples, see e.g. <cit.> and references therein.
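As a concrete illustration, the following Python sketch builds the logistic loss above as a callable objective together with a noisy zero-order oracle of the additive form f(x) + ξ used throughout the paper. The data matrix A, the noise level sigma, and the use of Gaussian noise are hypothetical placeholders chosen for illustration only; the analysis below merely requires the noise to satisfy a second-moment bound and allows almost adversarial noise.

import numpy as np

def make_logistic_objective(A):
    # logistic loss g(x) = sum_i log(1 + exp(a_i^T x)); gradient dominant on compact sets
    def f(x):
        return float(np.sum(np.logaddexp(0.0, A @ x)))
    return f

def make_noisy_oracle(f, sigma, rng):
    # zero-order oracle returning f(x) + xi; Gaussian noise is used here only for
    # illustration (the paper's assumptions on the noise are much weaker)
    def oracle(x):
        return f(x) + sigma * rng.standard_normal()
    return oracle

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))      # hypothetical data matrix (n = 50, d = 10)
f = make_logistic_objective(A)
f_noisy = make_noisy_oracle(f, sigma=0.1, rng=rng)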
In what follows, we consider three different scenarios: (i) the case of only smoothness assumption on f, (ii) smoothness and ฮฑ-gradient dominance assumptions, (iii) smoothness and ฮฑ-strong convexity assumptions.
Let x̂ be the output of any algorithm. For the first scenario, we obtain a stationary point guarantee, that is, a bound on E[‖∇f(x̂)‖²]. For the second and the third scenarios, we provide bounds for the optimization error E[f(x̂) − f^⋆].
Note that under α-strong convexity, using the fact that ∇f(x^⋆) = 0, as well as under α-gradient dominance <cit.>, for any x ∈ ℝ^d we have
f(x) − f^⋆ ≥ (α/2)‖x − x^*‖²,
where x^* is the Euclidean projection of x onto the solution set argmin_{x ∈ ℝ^d} f(x) of the considered optimization problem, which is a singleton in the case of strong convexity.
Thus, our upper bounds on E[f(x̂) − f^⋆] obtained under α-strong convexity or α-gradient dominance immediately imply upper bounds for E[‖x̂ − x^*‖²] with an extra factor 2/α.
ยง.ยง Classical stochastic optimization versus our setting
The classical zero-order stochastic optimization (CZSO) setting considered by <cit.>, among others, assumes that there is a function F(·,·) such that our target function is its expectation over the second argument,
f(x) = E[F(x, ξ)],
where ξ is a random variable. In order to find a minimizer of f, at each step t of the algorithm one makes two queries and gets outputs of the form (F(x_t, ξ_t), F(x_t + h_tζ_t, ξ_t)), t = 1, 2, …, where the ξ_t's are independent identically distributed (iid) realizations of the random variable ξ, and x_t, x_t + h_tζ_t are the query points at step t. Here, h_t > 0 is a perturbation parameter and ζ_t is a random or deterministic perturbation.
We emphasize two features of this setting:
(a) the two queries are obtained with the same random variable ฮพ_t;
(b) the random variables ฮพ_t are iid over t[Some papers, for example, <cit.> relax this assumption.].
Both (a) and (b) are not assumed in our setting. On the other hand, we assume an additive noise structure. That is, at step t, we can only observe the values F(x_t, ξ_t) = f(x_t) + ξ_t for any choice of query points x_t depending only on the observations at the previous steps, but we do not assume that two queries are obtained with the same noise ξ_t or that the noises are iid. We deal with almost adversarial noise, see Assumption <ref> below. In particular, we do not assume that the noise is zero-mean. Thus, in general, we can have E[F(x, ξ_t)] ≠ f(x).
In the CZSO setting, the values F(x_t, ξ_t) and F(x_t + h_tζ_t, ξ_t) are used to obtain gradient approximations involving the divided differences (F(x_t + h_tζ_t, ξ_t) − F(x_t, ξ_t))/h_t. A popular choice is the gradient estimator with Gaussian randomization suggested by <cit.>:
ĝ_t^G ≜ (1/h_t)(F(x_t + h_tζ_t, ξ_t) − F(x_t, ξ_t))ζ_t, with ζ_t ∼ N(0, I_d),
where N(0, I_d) denotes the standard Gaussian distribution in ℝ^d.
In the case of additive noise, the divided differences are equal to (f(_t+ h_t_t) - f(_t))/h_t, that is, the analysis of these algorithms reduces to that of noiseless (deterministic) optimization setting. When ฮพ_t's are not additive, the assumptions that are often made in the literature on CZSO are such that the rates of convergence are the same as in the additive noise case, which is equivalent to noiseless case due to the above remark.
We can summarize this discussion as follows:
* the results obtained in the literature on CZSO, as well as some toolsย <cit.>, do not apply in our framework;
* our upper bounds on the expected optimization error in the case of no noise (ฯ=0) imply identical bounds in the CZSO setting when the noise is additive. Under some assumptions made in the CZSO literatureย <cit.>, these bounds for ฯ=0 also extend to the general CZSO setting, with possible changes only in constants and not in the rates. In other cases the rates in CZSO setting can only be slower than oursย <cit.>.
ยง ALGORITHMS
Given a closed convex set Θ ⊆ ℝ^d, we consider the following optimization scheme:
x_1 ∈ Θ and x_{t+1} = Proj_Θ(x_t − η_t ĝ_t), t ≥ 1,
where ĝ_t is an update direction
approximating the gradient direction ∇f(x_t), and η_t > 0 is a step size.
Allowing one to perform two function evaluations per step, we consider two
gradient estimators _t which are based on different randomization schemes. They both
employ a smoothing kernel K : [−1, 1] → ℝ which we assume to satisfy, for β ≥ 2 and ℓ = ⌊β⌋, the conditions
∫K(r)dr = 0, ∫rK(r)dr = 1, ∫r^jK(r)dr = 0 for j = 2,…,ℓ, and κ_β ≜ ∫|r|^β|K(r)|dr < ∞.
In <cit.> it was suggested to construct such kernels employing Legendre polynomials, in which case κ_β ≤ 2√2 β, cf. .
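As an illustration, one possible way to realize such a kernel is to assemble it from orthonormal Legendre polynomials p_0,…,p_ℓ as K(r) = Σ_m p_m′(0)p_m(r), so that ∫q(r)K(r)dr = q′(0) for every polynomial q of degree at most ℓ, which yields the moment conditions above. The Python sketch below follows this idea; it is a hedged sketch rather than the exact construction of the cited works.

import numpy as np
from numpy.polynomial import legendre as leg

def smoothing_kernel(beta):
    # l = largest integer strictly smaller than beta (notation of the paper)
    l = int(np.ceil(beta)) - 1
    def K(r):
        r = np.asarray(r, dtype=float)
        out = np.zeros_like(r)
        for m in range(l + 1):
            c = np.zeros(m + 1)
            c[m] = np.sqrt((2 * m + 1) / 2.0)   # orthonormal Legendre p_m on [-1, 1]
            out = out + leg.legval(0.0, leg.legder(c)) * leg.legval(r, c)
        return out
    return K

# quick numerical check of the moment conditions for beta = 4 (so l = 3)
K = smoothing_kernel(4)
r = np.linspace(-1.0, 1.0, 200001)
moments = [np.trapz(r ** j * K(r), r) for j in range(4)]   # approximately [0, 1, 0, 0]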
We are now in a position to introduce the two estimators.
Similarly to earlier works dealing with โ_2-randomized methodsย <cit.> we use gradient estimators based on a result, which is sometimes referred to as Stokes' theorem. A general form of this result, not restricted to the โ_2 geometry, can be found inย .
Gradient estimator based on ℓ_2 randomization.
At time t ≥ 1, let ζ_t^∘ be distributed uniformly on the ℓ_2-sphere ∂B^d_2, let r_t be uniformly distributed on [−1, 1], and let h_t > 0. Query two points:
y_t = f(x_t + h_tr_tζ_t^∘) + ξ_t and y_t′ = f(x_t − h_tr_tζ_t^∘) + ξ_t′,
where ξ_t, ξ_t′ are noises.
Using the above feedback, define the gradient estimator as
ĝ_t^∘ ≜ (d/(2h_t))(y_t − y_t′)ζ_t^∘K(r_t).
We use the superscript ∘ to emphasize the fact that ĝ_t^∘ is based on the ℓ_2 randomization.
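In code, one draw of this estimator can be sketched as follows; here f_noisy stands for a noisy zero-order oracle returning f(x) plus noise, K is a smoothing kernel as above, and all names are illustrative.

import numpy as np

def l2_gradient_estimate(f_noisy, x, h, K, rng):
    # (d / 2h) * (y - y') * zeta * K(r), with zeta uniform on the unit l2-sphere
    d = x.size
    zeta = rng.standard_normal(d)
    zeta = zeta / np.linalg.norm(zeta)        # uniform direction on the l2-sphere
    r = rng.uniform(-1.0, 1.0)
    y_plus = f_noisy(x + h * r * zeta)
    y_minus = f_noisy(x - h * r * zeta)
    return (d / (2.0 * h)) * (y_plus - y_minus) * K(r) * zeta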
Gradient estimator based on ℓ_1 randomization.
At time t ≥ 1, let ζ_t^∙ be distributed uniformly on the ℓ_1-sphere ∂B^d_1, let r_t be uniformly distributed on [−1, 1], and let h_t > 0. Query two points:
y_t = f(x_t + h_tr_tζ_t^∙) + ξ_t and y_t′ = f(x_t − h_tr_tζ_t^∙) + ξ_t′.
Using the above feedback, define the gradient estimator as
ĝ_t^∙ ≜ (d/(2h_t))(y_t − y_t′)sign(ζ_t^∙)K(r_t).
We use the superscript ∙, reminiscent of the form of the ℓ_1-sphere, to emphasize the fact that ĝ_t^∙ is based on the ℓ_1 randomization. The idea of using an ℓ_1 randomization (different from (<ref>)) was probably first invoked by <cit.>. We refer to <cit.>, who highlighted the potential computational and memory gains of another ℓ_1 randomization gradient estimator compared to its ℓ_2 counterpart, as well as its advantages in theoretical guarantees. The estimator of <cit.> differs from (<ref>) as it does not involve the kernel K, but the same computational and memory advantages remain true for the estimator (<ref>).
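A matching sketch for the ℓ_1-randomized estimator is given below. One standard way to sample uniformly from the ℓ_1-sphere, used here, is to normalize a vector of i.i.d. Laplace random variables by its ℓ_1-norm; as before, the function and variable names are illustrative.

import numpy as np

def l1_gradient_estimate(f_noisy, x, h, K, rng):
    # (d / 2h) * (y - y') * sign(zeta) * K(r), with zeta uniform on the unit l1-sphere
    d = x.size
    w = rng.laplace(size=d)
    zeta = w / np.sum(np.abs(w))              # uniform on the l1-sphere
    r = rng.uniform(-1.0, 1.0)
    y_plus = f_noisy(x + h * r * zeta)
    y_minus = f_noisy(x - h * r * zeta)
    return (d / (2.0 * h)) * (y_plus - y_minus) * K(r) * np.sign(zeta)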
Throughout the paper, we impose the following assumption on the noises ฮพ_t,ฮพ_t' and on the random variables that we generate in the estimatorsย (<ref>) andย (<ref>).
For all t ∈ {1,…,T}, it holds that:
(i) the random variables ξ_t and ξ_t′ are independent from ζ_t^∘ (respectively, ζ_t^∙) and from r_t conditionally on x_t, and the random variables ζ_t^∘ (respectively, ζ_t^∙) and r_t are independent;
(ii) E[ξ_t²] ≤ σ² and E[(ξ_t′)²] ≤ σ², where σ ≥ 0.
Let us emphasize that we do not assume ฮพ_t and ฮพ_t' to have zero mean. Moreover, they can be non-random and no independence between noises on different time steps is required, so that the setting can be considered as almost adversarial.
Nevertheless, the first part of Assumptionย <ref> does not permit a completely adversarial setup. Indeed, the oracle is not allowed to choose the noise variable depending on the current query points (i.e., the two perturbations of _t). However, Assumptionย <ref> encompasses the following protocol: before running the algorithm,
the oracle fixes an arbitrary bounded (by ฯ) sequence (ฮพ_t, ฮพ_t')_t = 1^T of noise pairs, possibly with full knowledge of the algorithm employed by the learner, and reveals this sequence query by query.
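Putting the pieces together, a minimal sketch of the projected iteration with either estimator plugged in reads as follows; the projection defaults to the unconstrained case Θ = ℝ^d, the schedules eta and h are passed as callables so that the parameter choices of the later sections can be substituted, and the commented usage line with its step sizes is purely illustrative.

import numpy as np

def zero_order_pgd(f_noisy, x1, T, eta, h, grad_estimate, K, proj=lambda x: x, seed=0):
    # x_{t+1} = Proj(x_t - eta_t * g_t), where g_t is a two-point zero-order estimate
    rng = np.random.default_rng(seed)
    x = np.array(x1, dtype=float)
    iterates = [x.copy()]
    for t in range(1, T + 1):
        g = grad_estimate(f_noisy, x, h(t), K, rng)
        x = proj(x - eta(t) * g)
        iterates.append(x.copy())
    return x, iterates

# illustrative call with the logistic objective and l1 estimator sketched earlier:
# x_last, path = zero_order_pgd(f_noisy, np.zeros(10), T=1000,
#                               eta=lambda t: 0.5 / (t + 1), h=lambda t: t ** -0.25,
#                               grad_estimate=l1_gradient_estimate, K=smoothing_kernel(2))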
In the next two subsections, we study the bias and variance of the two estimators.
As we shall see, the โ_1 randomization can be more advantageous in the noiseless
case than its โ_2 counterpart (cf. Remarkย <ref>).
ยง.ยง Bias and variance of the ℓ_2 randomization
The next results allow us to control the bias and the second moment of gradient estimators _1^โ,โฆ,_T^โ, and play a crucial role in our analysis.
Let Assumption <ref> be fulfilled.
Suppose that f ∈ ℱ_β(L) for some β ≥ 2 and L > 0. Let ĝ_t^∘ be defined in (<ref>) at time t ≥ 1. Let ℓ = ⌊β⌋. Then,
‖E[ĝ_t^∘ | x_t] − ∇f(x_t)‖ ≤ (κ_βL/(ℓ−1)!) · (d/(d+β−1)) · h_t^{β−1}.
Intuitively, the smaller h_t is, the more accurately _t estimates the gradient. A result analogous to Lemmaย <ref> with a bigger constant was claimed inย <cit.> but the proof was not provided. The proof of Lemmaย <ref> is presented in the appendix. It relies on the fact that _t^โ is an unbiased estimator of some surrogate function, which is strongly related to the original f. The factor in front of h_t^ฮฒ-1 in Lemmaย <ref> is O(1) as function of d. It should be noted that for ฮฒ > 2 the bounds on the bias obtained inย <cit.> andย <cit.>, where the factors scale as O(d) and O(โ(d)), respectively, cannot be directly compared to Lemmaย <ref>. This is due to the fact that those bounds are proved under a different notion of smoothness, cf.
Remarkย <ref>. Nevertheless, if f is convex and ฮฒ = 2 both notions of smoothness coincide, and
Lemmaย <ref> improves upon the bounds in <cit.> and <cit.> by factors of order d and โ(d), respectively.
Let Assumption <ref> hold and f ∈ ℱ_2(L̄) for some L̄ > 0. Then, for any d ≥ 2,
E[‖ĝ_t^∘‖²] ≤ (d²κ/(d−1)) E[(‖∇f(x_t)‖ + L̄h_t)²] + d²σ²κ/h_t²,
where κ = ∫_{−1}^{1}K²(r)dr.
The result of Lemma <ref> can be simplified as
E[‖ĝ_t^∘‖²] ≤ 4dκE[‖∇f(x_t)‖²] + 4dκL̄²h_t² + d²σ²κ/h_t², d ≥ 2.
Let us provide some remarks about this inequality. First, the leading term of order d^2h_t^-2 in (<ref>) is the same as inย <cit.> and inย <cit.>, but we obtain a better constant.
The main improvement to both works lies in the lower order term. Indeed, unlike in those papers, the term h_t^2 is multiplied by d instead of d^2.
This improvement is crucial for the condition T โฅ d^2-ฮฒ/2, under which we obtain our main results below. In particular, there is no condition on the horizon T whenever ฮฒโฅ4. On the contrary, we would need T โฅ d^3 if we would used the previously known versions of the variance bounds <cit.>. The proof of Lemmaย <ref> presented below relies on the Poincarรฉ inequality for the uniform distribution on ^d_2.
For simplicity we drop the subscript t from all the quantities.
Using Assumption <ref> and the fact that
E[f(x + hrζ^∘) − f(x − hrζ^∘) | x, r] = 0,
we obtain
E[‖ĝ^∘‖²]
= (d²/(4h²)) E[(f(x + hrζ^∘) − f(x − hrζ^∘) + (ξ − ξ′))²K²(r)]
≤ (d²/(4h²)) ( E[(f(x + hrζ^∘) − f(x − hrζ^∘))²K²(r)] + 4κσ² ).
Since f ∈ ℱ_2(L̄) and (<ref>) holds,
using the Wirtinger–Poincaré inequality <cit.>
we get
E[(f(x + hrζ^∘) − f(x − hrζ^∘))² | x, r]
≤ (h²/(d−1)) E[‖∇f(x + hrζ^∘) + ∇f(x − hrζ^∘)‖² | x, r].
The fact that f ∈ ℱ_2(L̄) and the triangle inequality imply that
E[‖∇f(x + hrζ^∘) + ∇f(x − hrζ^∘)‖² | x, r]
≤
4(‖∇f(x)‖ + L̄h)².
Combiningย (<ref>) โ (<ref>) proves the lemma.
Note that
Eqs.ย (<ref>)โ(<ref>) imply, in particular, Lemmaย 9 ofย <cit.>. Our method based on Poincarรฉ's inequality yields explicitly the constants in the bound. In this aspect, we improve uponย <cit.>, where a concentration argument leads only to non-specified constants.
ยง.ยง Bias and variance of the ℓ_1 randomization
This section analyses the gradient estimate based on the โ_1 randomization. The results below display a very different bias and variance behavior compared to the โ_2 randomization.
Let Assumption <ref> be fulfilled and
f ∈ ℱ_β(L) for some β ≥ 2 and L > 0. Let ĝ_t^∙ be defined in (<ref>) at time t ≥ 1. Let ℓ = ⌊β⌋. Then,
‖E[ĝ_t^∙ | x_t] − ∇f(x_t)‖ ≤ Lc_βκ_βℓ^{β−ℓ}d^{(1−β)/2}h_t^{β−1}.
When 2 ≤ β < 3, we have c_β = 2^{(β−1)/2}, and if β ≥ 3 we have c_β = 1.
Notice that Lemmaย <ref> gives the same dependence on the discretization parameter h_t as Lemmaย <ref>.
However, unlike the bias bound in Lemmaย <ref>, which is dimension independent, the result of Lemmaย <ref> depends on the dimension in a favorable way. In particular, the bias is controlled by a decreasing function of the dimension and this dependence becomes more and more favorable for smoother functions.
Yet, the price for such a favorable control of the bias is an inflated bound on the variance, which is established below.
Let Assumption <ref> be fulfilled and
f ∈ ℱ_2(L̄) for some L̄ > 0. Then, for any d ≥ 3,
E[‖ĝ_t^∙‖²] ≤ (8d³κ/((d+1)(d−2))) E[(‖∇f(x_t)‖ + L̄h_t√(2/d))²] + d³σ²κ/h_t²,
where κ = ∫_{−1}^{1}K²(r)dr.
Combined with the facts that a²/((a−2)(a+1)) ≤ 2.25 for all a ≥ 3 and that (a+b)² ≤ 2a² + 2b², the inequality of Lemma <ref> can be further simplified as
E[‖ĝ_t^∙‖²]
≤ (16d³κ/((d+1)(d−2))) E[‖∇f(x_t)‖²] + 32d²κL̄²h_t²/((d+1)(d−2)) + d³σ²κ/h_t²
≤
36dκE[‖∇f(x_t)‖²] +
72κL̄²h_t² + d³σ²κ/h_t², d ≥ 3.
The proof of Lemmaย <ref> is based on the following lemma, which gives a version of Poincarรฉ's inequality improving upon the previously derived inย <cit.>. We provide sharper constants and an easier to use expression.
Let d ≥ 3. Assume that G : ℝ^d → ℝ is a continuously differentiable function, and that ζ^∙ is distributed uniformly on ∂B^d_1. Then
Var(G(ζ^∙)) ≤ (4/(d−2)) E[‖∇G(ζ^∙)‖²‖ζ^∙‖²].
Furthermore, if G : ℝ^d → ℝ is an L-Lipschitz function w.r.t. the ℓ_2-norm, then
Var(G(ζ^∙)) ≤ 8L²/((d+1)(d−2)) ≤ 18L²/d².
The proof of Lemmaย <ref> is given in the Appendix.
For simplicity we drop the subscript t from all the quantities.
Similarly to the proof of Lemma <ref>, using Assumption <ref> we deduce that
E[‖ĝ^∙‖²]
≤ (d³/(4h²)) ( E[(f(x + hrζ^∙) − f(x − hrζ^∙))²K²(r)] + 4σ²κ ).
Consider G : ℝ^d → ℝ defined for all u ∈ ℝ^d as G(u) = f(x + hru) − f(x − hru). Using the fact that f ∈ ℱ_2(L̄) we obtain, for all u ∈ ℝ^d,
‖∇G(u)‖² ≤ 4h²(‖∇f(x)‖ + L̄h‖u‖)².
Applying Lemma <ref> to the function G defined above we deduce that
E[(G(ζ^∙))² | x, r]
≤ (16h²/(d−2)) E[(‖∇f(x)‖ + L̄h‖ζ^∙‖)²‖ζ^∙‖²].
Lemma <ref> provided in the Appendix gives upper bounds on the expectations appearing in the above inequality for all d ≥ 3. Its application yields:
E[(f(x + hrζ^∙) − f(x − hrζ^∙))² | x, r]
≤ (32h²/((d+1)(d−2))) (‖∇f(x)‖ + L̄h√(2/d))².
We conclude the proof by combining this bound withย (<ref>).
Modulo absolute constants, the leading term σ²h_t^{−2} in Lemma <ref> plays the same role as for the ℓ_2 randomization in Lemma <ref>. However, for the ℓ_2 randomization this term involves only a quadratic dependence on the dimension d, while in the case of ℓ_1 randomization the dependence is cubic. Looking at the h_t² term, which essentially matters only when σ = 0, we note that
the factor in front of this term in Lemmaย <ref> is bounded as the dimension grows. In contrast, the corresponding term in Lemmaย <ref> depends linearly on the dimension. We summarize these observations in the following remark focusing on the noiseless case.
In the noiseless case (σ = 0), both the bias and the variance under the ℓ_1 randomization are strictly smaller than under the ℓ_2 randomization. Indeed, if σ = 0,
‖E[ĝ_t^∘ | x_t] − ∇f(x_t)‖ ≲ h_t^{β−1}
and E[‖ĝ_t^∘‖²]
≲
dE[‖∇f(x_t)‖²] + (√d h_t)²,
while ‖E[ĝ_t^∙ | x_t] − ∇f(x_t)‖ ≲ (h_t/√d)^{β−1}
and E[‖ĝ_t^∙‖²]
≲ dE[‖∇f(x_t)‖²] + h_t²,
where the signs ≲ hide multiplicative constants that do not depend on h_t and d.
Notice that given an h_t for โ_2 randomization, โ(d)h_t used with โ_1 randomization gives the same order of magnitude for both bias and variance. This is especially useful in the floating-point arithmetic, where very small values of h_t (on the level of machine precision) can result in high rounding errors. Thus, in the absence of noise, โ_1 randomization can be seen as a more numerically stable alternative to the โ_2 randomization.
For comparison, the corresponding bounds for gradient estimators with Gaussian randomization defined in (<ref>) are proved for β = 2 and have the form (cf. <cit.> or <cit.>):
‖E[ĝ_t^G | x_t] − ∇f(x_t)‖ ≲ d^{3/2}h_t
and E[‖ĝ_t^G‖²]
≲
dE[‖∇f(x_t)‖²] + d³h_t²,
where the signs ≲ hide multiplicative constants that do not depend on h_t and d.
Setting ฮฒ=2 in Remark <ref> we see that the dependence on h_t in (<ref>) is of the same order as for โ_1 and โ_2 randomizations with ฮฒ=2, but the dimension factors are substantially bigger. Also, the bounds in (<ref>) for the Gaussian randomization are tight. Thus, the Gaussian randomization is less efficient than its โ_1 and โ_2 counterparts in the noiseless setting.
ยง UPPER BOUNDS
In this section, we present convergence guarantees for the two considered gradient estimators and for three classes of objective functions f. Each of the following subsections is structured similarly: first, we define the choice of ฮท_t and h_t involved in both algorithms and then, for each class of the objective functions, we state the corresponding convergence guarantees.
Throughout this section, we assume that f ∈ ℱ_2(L̄) ∩ ℱ_β(L) for some β ≥ 2. Under this assumption, in Section <ref> we establish a guarantee for the stationary point. In Section <ref> we additionally assume that f is α-gradient dominant and provide upper bounds on the optimization error.
In Sectionย <ref> we additionally assume that f is ฮฑ-strongly convex and provide upper bounds on the optimization error for both constrained and unconstrained cases.
Unless stated otherwise, the convergence guarantees presented in this section hold under the assumption that the number of queries T is known before running the algorithms.
ยง.ยง Only smoothness assumptions
In this subsection, we only assume that the objective function f : ^d โ satisfies Assumptionย <ref>. In particular, since there is no guarantee of the existence of the minimizer, our goal is modest โ we only want to obtain a nearly stationary point.
The plan of our study is as follows. We first obtain guarantees for algorithmย (<ref>)
with gradient estimator _t satisfying some general assumption, and then concretize the results for the
gradient estimatorsย (<ref>) andย (<ref>). We use the following assumption.
Assume that there exist two positive sequences b_t, v_t : ℕ → [0, ∞) and V_1 ≥ 0 such that for all t ≥ 1 it holds almost surely that
‖E[ĝ_t | x_t] − ∇f(x_t)‖ ≤ b_t and E[‖ĝ_t‖²] ≤ V_1E[‖∇f(x_t)‖²] + v_t.
Note that Assumption <ref> holds for the
gradient estimatorsย (<ref>) andย (<ref>) with b_t, v_t and V_1 specified in Lemmasย <ref>โ<ref> (see also Assumption <ref> and Table <ref> below).
The results of this subsection will be stated on a randomly sampled point along the trajectory of the algorithm. The distribution over the trajectory is chosen carefully, in order to guarantee the desired convergence.
The distribution that we are going to use is defined in the following lemma.
Let f ∈ ℱ_2(L̄) for some L̄ > 0, Θ = ℝ^d, and f^⋆ > −∞.
Let x_t be defined by algorithm (<ref>) with ĝ_t satisfying Assumption <ref>.
Assume that η_t in (<ref>) is chosen such that L̄η_tV_1 < 1.
Let S be a random variable with values in [T], which is independent from x_1,…,x_T, ĝ_1,…,ĝ_T and distributed with the law
P(S = t) = η_t(1 − L̄η_tV_1) / Σ_{t=1}^{T}η_t(1 − L̄η_tV_1), t ∈ [T].
Then,
E[‖∇f(x_S)‖²] ≤ ( 2(E[f(x_1)] − f^⋆) + Σ_{t=1}^{T}η_t(b_t² + L̄η_tv_t) ) / Σ_{t=1}^{T}η_t(1 − L̄η_tV_1).
Lemma <ref> is obtained by techniques similar toย <cit.>. However, the paper <cit.> considers only a particular choice of _t defined via a Gaussian randomization, and a different setting (cf. the discussion in Section <ref>), under which v_t does not increase as the discretization parameter h_t (ฮผ in the notation ofย <cit.>) decreases. In our setting, this situation happens only when there is no noise (ฯ=0), while in the noisy case v_t increases as h_t tends to 0.
Note that the distribution of S in Lemmaย <ref> depends on the choice of ฮท_t and V_1.
In the following results, we are going to specify the exact values of ฮท_t. We also provide the values of V_1 for the gradient estimatorsย (<ref>) andย (<ref>). Regarding these two estimators, it will be convenient to use the following instance of Assumption <ref>.
There exist positive numbers b, V_1, V_2, V_3 such that for all t ≥ 1 the gradient estimators ĝ_t satisfy almost surely the inequalities
‖E[ĝ_t | x_t] − ∇f(x_t)‖ ≤ bLh_t^{β−1} and E[‖ĝ_t‖²] ≤ V_1E[‖∇f(x_t)‖²] + V_2L̄²h_t² + V_3σ²h_t^{−2}.
It follows from Lemmasย <ref>โ<ref> that Assumptionย <ref> holds for gradient estimatorsย (<ref>) andย (<ref>) with the values that are indicated in Tableย <ref>.
Note that the bounds for the variance in those lemmas do not cover the case d=1 for the โ_2 randomization
and d=1,2 for the โ_1 randomization. Nevertheless, it is straightforward to check that in these cases Assumptionย <ref> remains valid with V_j's given in Table 1.
The next theorem requires the definition of algorithm-dependent parameters, which are needed as an input to our algorithms. We set
(c_η, c_h)
=
((8κL̄)^{−1}, d^{1/(2β−1)})
for the ℓ_2 randomization,
((72κL̄)^{−1}, d^{(2β+1)/(4β−2)})
for the ℓ_1 randomization.
Let Assumptions <ref> and <ref> hold, and let Θ = ℝ^d. Let x_t be defined by algorithm (<ref>) with gradient estimator (<ref>) or (<ref>), where the parameters η_t and h_t are set, for t = 1,…,T, as
η_t = min( c_η/d, d^{−2(β−1)/(2β−1)}T^{−β/(2β−1)} ) and
h_t = c_hT^{−1/(2(2β−1))},
and the constants c_η and c_h are given in (<ref>).
Assume that x_1 is deterministic and T ≥ d^{1/β}. Then, for the random variable S defined in Lemma <ref>, we have
E[‖∇f(x_S)‖²] ≤ ( A_1(f(x_1) − f^⋆) + A_2 )(d²/T)^{(β−1)/(2β−1)},
where the constants A_1, A_2 > 0 depend only on σ, L, L̄, β, and on the choice of the gradient estimator.
In the case σ = 0, the result of this theorem can be improved. As explained in Section <ref>, this case is analogous to the CZSO setting, and it is enough to assume that β = 2, since higher order smoothness does not lead to an improvement in the main term of the rates. Due to Remark <ref> (or Assumption <ref>) one can set h_t for both methods as small as one
wishes, and thus sufficiently small to make the sum over t in the numerator of the inequality of Lemma <ref>
less than an absolute constant. Then, choosing η_t = (2L̄V_1)^{−1} and recalling that, for both algorithms, V_1
scales as d up to a multiplicative constant (cf. Table <ref>), we get the following result.
Let f be a function belonging to ℱ_2(L̄) for some L̄ > 0, let Θ = ℝ^d, and let Assumptions <ref> and <ref> hold with σ = 0.
Let x_t be defined by algorithm (<ref>) with deterministic x_1 and gradient estimator (<ref>) or (<ref>) for β = 2, where η_t = (2L̄V_1)^{−1} and h_t is chosen sufficiently small.
Then we have
E[‖∇f(x_S)‖²] ≤ A(f(x_1) − f^⋆ + 1)d/T,
where A > 0 is an absolute constant depending only on the choice of the gradient estimator.
The rate O(d/T) in Theorem <ref> coincides with the rate derived inย <cit.> for ฮฒ=2 under the classical zero-order stochastic optimization setting, where the authors were using Gaussian rather than โ_1 or โ_2 randomization.
In a setting with non-additive noise,ย <cit.> exhibit a slower rate of O(โ(d/T)).
ยง.ยง Smoothness and α-gradient dominance
We now provide the analysis of our algorithms under the smoothness and α-gradient dominance (Polyak-Łojasiewicz) conditions.
Let f be an α-gradient dominant function, Θ = ℝ^d, and let Assumptions <ref> and <ref> hold, with σ > 0.
Let x_t be defined by algorithm (<ref>) with ĝ_t satisfying Assumption <ref>,
deterministic x_1, and
η_t = min( (2L̄V_1)^{−1}, 4/(αt) ),
h_t = ( 4L̄σ²V_3/(b²L²α) )^{1/(2β)} ·
t^{−1/(2β)} if η_t = 4/(αt),
T^{−1/(2β)} if η_t = (2L̄V_1)^{−1}.
Then
[f(_T)-f^โ]
โค_1 V_1/ฮฑ T(f(_1)-f^โ)
+_2/ฮฑ( V_3( V_3/b^2L^2)^-1/ฮฒ+ V_2^2( V_3/b^2L^2)^1/ฮฒ(ฮฑ T/ฯ^2)^-2/ฮฒฯ^-2)(ฮฑ T/ฯ^2)^-ฮฒ-1/ฮฒ,
where _1, _2>0 depend only on ฮฒ.
Theoremย <ref> provides a general result for any gradient estimator that satisfies Assumptionย <ref>. By taking the values V_j from Tableย <ref> we immediately obtain the following corollary for our โ_1- and โ_2-randomized gradient estimators.
Let f be an α-gradient dominant function, Θ = ℝ^d, and let Assumptions <ref> and <ref> hold, with σ > 0.
Let x_t be defined by algorithm (<ref>) with deterministic x_1 and gradient estimator (<ref>) or (<ref>). Set the parameters η_t and h_t as in Theorem <ref>, where b, V_1, V_2, V_3 are given in Table <ref> for each gradient estimator, respectively. Then for any T ≥ d^{2−β/2}σ²/(αL²) we have
E[f(x_T) − f^⋆] ≤ c_1 d/(αT) · (f(x_1) − f^⋆) +
(c_2 + c_3L̄²/σ²)
(σ²d²/(αT))^{(β−1)/β} · L^{2/β}/α,
where c_1, c_2, c_3 > 0 depend only on β and on the choice of the gradient estimator.
Note that here we consider σ and L as numerical constants. The condition T ≳ d^{2−β/2}/α mentioned in Corollary <ref> is satisfied in all reasonable cases, since it is weaker than the condition T ≳ d²/α guaranteeing non-triviality of the bounds.
Recall that, in the context of deterministic optimization with a first order oracle, α-gradient dominance allows one to obtain rates of convergence of the gradient descent algorithm similar to the case of a strongly convex objective function with Lipschitz gradient (<cit.>). A natural question is whether the same property holds in our setting of stochastic optimization with a zero-order oracle and higher order smoothness. Theorem <ref> shows that the rates are only inflated by a multiplicative factor μ^{(β−1)/β}, where μ = L̄/α, compared to the α-strongly convex case that will be considered in Section <ref>.
Consider now the case ฯ=0, which is analogous to the CZSO setting as explained in Section <ref>. In this case, we assume that ฮฒ=2 since higher order smoothness does not lead to improvement in the main term of the rates. We set the parameters ฮท_t,h_t as follows:
η_t = min( (2L̄V_1)^{−1}, 4/(αt) ),
h_t โค(โจ 1/ฮฑโง 1T(2b^2 + 8 ^2 V_2/ฮฑ))^-1/2.
Let f be an α-gradient dominant function belonging to ℱ_2(L̄) for some L̄ > 0, let Θ = ℝ^d, and let Assumptions <ref> and <ref> hold with σ = 0.
Let x_t be defined by algorithm (<ref>) with deterministic x_1 and gradient estimator (<ref>) or (<ref>) for β = 2. Set the parameters η_t and h_t as in (<ref>).
Then we have
E[f(x_T) − f^⋆] ≤ c_1 d/(αT) · ( (f(x_1) − f^⋆) + c_2 ),
where c_1, c_2 > 0 are absolute constants depending only on the choice of the gradient estimator.
Note that, in the CZSO setting, <cit.> proved the rate O(T^-1) for the optimization error under ฮฑ-gradient dominance by using an โ_2 randomization gradient estimator. However, unlike Theorem <ref> the bound obtained in that paper does not provide the dependence on the dimension d and on variables , ฮฑ.
ยง.ยง Smoothness and strong convexity
In this subsection, we additionally assume that f is a strongly convex function and denote by x^⋆ its unique minimizer.
We provide a guarantee on the weighted average point x̄_T along the trajectory of the algorithm, defined as
x̄_T = (2/(T(T+1))) Σ_{t=1}^{T} t x_t.
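For completeness, computing this weighted average from a stored trajectory is immediate; the sketch below assumes the input list contains the iterates x_1,…,x_T in order (for instance, as returned by the loop sketched earlier).

import numpy as np

def weighted_average(iterates):
    # bar{x}_T = 2 / (T (T + 1)) * sum_{t=1}^T t * x_t
    xs = np.asarray(iterates, dtype=float)
    T = xs.shape[0]
    weights = np.arange(1, T + 1, dtype=float)
    return 2.0 * (weights[:, None] * xs).sum(axis=0) / (T * (T + 1))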
We consider separately the cases of unconstrained and constrained optimization.
ยง.ยง.ยง Unconstrained optimization
In this part we assume that Θ = ℝ^d and that the horizon T is known to the learner.
Similar to Section <ref>, we first state a general result that can be applied to any gradient estimator satisfying Assumptionย <ref>.
Let f be an α-strongly convex function, Θ = ℝ^d, and let Assumptions <ref> and <ref> hold.
Let x_t be defined by algorithm (<ref>) with ĝ_t satisfying Assumption <ref>,
deterministic x_1, and
η_t = min( α/(8L̄²V_1), 4/(α(t+1)) ),
h_t = ( 4σ²V_3/(b²L²) )^{1/(2β)} ·
t^{−1/(2β)} if η_t = 8/(α(t+1)),
T^{−1/(2β)} if η_t = α/(4L̄²V_1).
Then
[f(_T)-f^โ] โค_1^2 V_1/ฮฑ T_1-^โ^2+{_2 (bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒ
+
_3 V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-2/ฮฒ}T^-ฮฒ-1/ฮฒ/ฮฑ ,
where the constants _1, _2, _3>0 depend only on ฮฒ.
Subsequently, in Corollary <ref>, we customize the above theorem for gradient estimatorsย (<ref>) andย (<ref>), with assignments of ฮท_t, h_t that are again selected based on Table <ref>. We also include a bound for [_T-^โ^2], which comes as an immediate consequence due to (<ref>).
Let f be an ฮฑ-strongly convex function, = ^d, and let Assumptions <ref> and <ref> hold.
Let _t be defined by algorithmย (<ref>) with gradient estimatorย (<ref>) orย (<ref>), and parameters ฮท_t, h_t as in Theorem <ref>, where b, V_1, V_2, V_3 are given in Table <ref> for each gradient estimator, respectively. Let _1 be deterministic.
Then for any T ≥ d^{2−β/2}σ²/L² we have
E[f(x̄_T) − f^⋆] ≤ c_1L̄²d/(αT) · ‖x_1 − x^⋆‖² + (c_2 + c_3L̄²/σ²)(d²σ²/T)^{(β−1)/β}L^{2/β}α^{−1},
E[‖x̄_T − x^⋆‖²] ≤ 2c_1L̄²d/(α²T) · ‖x_1 − x^⋆‖² + 2(c_2 + c_3L̄²/σ²)(d²σ²/T)^{(β−1)/β}L^{2/β}α^{−2},
where c_1, c_2, c_3 > 0 depend only on β and on the choice of the gradient estimator.
With a slightly different definition of smoothness class (which coincides with ours for ฮฒ=2, cf. Remark <ref>), a result comparable to Corollaryย <ref>
is derived inย <cit.>. However, that result imposes an additional condition on ฮฑ (i.e., ฮฑโณโ(d/T)) and provides a bound with the dimension factor d^2 rather than d^2-2/ฮฒ in Corollaryย <ref>. We also note that earlier <cit.> analyzed the case โ_2-randomized gradient estimator with integer ฮฒ>2 and proved a bound with a slower (suboptimal) rate T^-ฮฒ-1/ฮฒ+1.
η_t = min( α/(2L̄²V_1), 4/(αt) ),
h_t โค(T/d(b^2 +4^2 V_2))^-1/2.
ยง.ยง.ยง Constrained optimization
We now assume that Θ ⊂ ℝ^d is a compact convex set.
In the present part, we do not need the knowledge of the horizon T to define the updates x_t.
We first state the following general theorem valid when _t is any gradient estimator satisfying Assumptionย <ref>.
Let Θ ⊂ ℝ^d be a compact convex set. Assume that f is an α-strongly convex function, that Assumptions <ref> and <ref> hold, and that max_{x∈Θ}‖∇f(x)‖ ≤ G.
Let x_t be defined by algorithm (<ref>) with gradient estimator ĝ_t satisfying Assumption <ref> and
η_t = 4/(α(t+1)),
h_t = ( σ²V_3/(b²L²t) )^{1/(2β)}.
Then
[f(_t)-f^โ] โค4^2 V_1 G^2/ฮฑ T + _1/ฮฑ( V_3ฯ^2( V_3ฯ^2/b^2L^2)^-1/ฮฒ + V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-2/ฮฒ)T^-ฮฒ-1/ฮฒ,
where the constant _1>0 depends only on ฮฒ.
Using the bounds on the variance and bias of gradient estimatorsย (<ref>) andย (<ref>) from Sectionย <ref>, Remarkย <ref> and the trivial bounds [f(_T)-f^โ]โค GB, [_T-^โ^2]โค B^2, where B is the Euclidean diameter of , we immediately obtain the following corollary.
Let โโ^d be a compact convex set. Assume that f is an ฮฑ-strongly convex function, Assumptions <ref> and <ref> hold, and max_โโ f()โค G.
Let _t be defined by algorithmย (<ref>) with gradient estimatorย (<ref>) orย (<ref>), and parameters ฮท_t, h_t as in Theorem <ref>, where b, V_1, V_2, V_3 are given in Table <ref> for each gradient estimator, respectively.
Then for any T โฅ d^2-ฮฒ/2ฯ^2/L^2 we have
[f(_T)-f^โ] โคmin(GB, 4^2 V_1G^2/ฮฑ T+(_1+_2^2/ฯ^2)(d^2ฯ^2/T)^ฮฒ-1/ฮฒL^2/ฮฒฮฑ^-1),
[_T-^โ^2] โคmin(B^2, 2GB/ฮฑ, 8^2 V_1 G^2/ฮฑ^2 T+2(_1+_2^2/ฯ^2)(d^2ฯ^2/T)^ฮฒ-1/ฮฒL^2/ฮฒฮฑ^-2),
where B is the Euclidean diameter of , and _1,_2> 0 depends only on ฮฒ and on the choice of the gradient estimator.
In a similar setting, but assuming independent zero-mean ฮพ_t's, <cit.> considered the case of โ_2 randomization and proved, for integer ฮฒ>2, a bound with suboptimal rate T^-ฮฒ-1/ฮฒ+1.
Corollaryย <ref> can be also compared toย <cit.> (โ_2 randomization and coordinatewise radomization) and, for ฮฒ>2, toย <cit.> (โ_2 randomization). However, those papers use a slightly different definition of ฮฒ-smoothness class (both definitions coincide if ฮฒ = 2, see Remarkย <ref>). Their bounds guarantee the rate O(d^2-1/ฮฒฮฑ T) for ฮฒ>2ย <cit.>, <cit.> and O(dโ(ฮฑ T)) for ฮฒ=2ย <cit.> by using two different approaches for the two cases. In contrast, Corollaryย <ref> yields O(d^2-2/ฮฒฮฑ T) and O(dฮฑโ( T)), respectively, and obtains these rates by a unified approach for all ฮฒโฅ2, and simultaneously under โ_1 and โ_2 randomizations. Note that, under the condition Tโฅ d^2 guaranteeing non-triviality of the bound, and ฮฑโฅ 1 the rate O(dฮฑโ( T)) that we obtain in Corollaryย <ref> for ฮฒ=2 matches the minimax lower bound (cf. Theorem <ref> below)
as a function of all the three parameters T, d, and ฮฑ.
ยง LOWER BOUNDS
In this section we prove minimax lower bounds on the optimization error over all sequential
strategies with two-point feedback that allow the query points depend on the past.
For t=1,โฆ, T, we assume that the values y_t=f(_t)+ฮพ_t and y'_t=f('_t)+ฮพ'_t are observed, where (ฮพ,ฮพ'_t) are random noises, and
(_t, '_t) are query points.
We consider all strategies of choosing the query points as
_t= ฮฆ_t((_i,y_i)_i=1^t-1,('_i,y'_i)_i=1^t-1,_t) and '_t= ฮฆ'_t((_i,y_i)_i=1^t-1,('_i,y'_i)_i=1^t-1,_t) for tโฅ 2, where ฮฆ_t's and ฮฆ'_t's are measurable functions, _1, '_1 โโ^d
are any random variables, and {_t} is a sequence of random variables with values in a measurable space (๐ต, ๐ฐ), such that _t is independent of ((_i,y_i)_i=1^t-1,('_i,y'_i)_i=1^t-1). We denote by ฮ _T the set of all such strategies of choosing query points up to t=T.
The class ฮ _T includes the sequential strategy of Algorithmย (<ref>) with either of the two
considered gradient estimatorsย (<ref>) andย (<ref>). In this case, _t=(_t,r_t), _t=_t+h_t_tr_t and '_t=_t-h_t_tr_t, where _t=_t^โ or _t=_t^โข.
To state our assumption on the noises (ξ_t, ξ_t′), we introduce the squared Hellinger distance H²(·,·) defined for two probability measures P, P′ on a measurable space (Ω, 𝒜) as
H²(P, P′) ≜ ∫(√(dP) − √(dP′))².
For every t ≥ 1, the following holds:
* The cumulative distribution function F_t : ℝ² → [0, 1] of the random variable (ξ_t, ξ_t′)
is such that
H²( P_{F_t(·,·)}, P_{F_t(·+v,·+v)} ) ≤ I_0v², |v| ≤ v_0,
for some 0 < I_0 < ∞ and 0 < v_0 ≤ ∞. Here, P_{F(·,·)} denotes the probability measure corresponding to the c.d.f. F(·,·).
* The random variable (ฮพ_t,ฮพ'_t) is independent of ((_i,y_i)_i=1^t-1,('_i,y'_i)_i=1^t-1,_t).
Conditionย (<ref>) is not restrictive and encompasses a large family of distributions. It is satisfied with small enough v_0 for distributions that correspond to regular statistical experiments, <cit.>. If F_t is a Gaussian c.d.f. conditionย (<ref>)
is satisfied with v_0=โ.
To state the lower bounds, we consider a subset of the classes of functions for which we obtained the upper bounds in Section <ref>. Let Θ = {x ∈ ℝ^d : ‖x‖ ≤ 1}. For α, L, L̄ > 0, β ≥ 2, let ℱ_{α,β} denote the set of all α-strongly convex functions f that satisfy Assumption <ref>, attain their minimum over ℝ^d in Θ, are such that max_{x∈Θ}‖∇f(x)‖ ≤ G, and satisfy the condition G > α
[The condition G ≥ α is necessary for the class ℱ_{α,β} to be non-empty. Indeed, due to (<ref>) and (<ref>), for all f ∈ ℱ_{α,β} and x ∈ Θ we have ‖x − x^*‖ ≤ G/α, and thus 2G/α ≥ diam(Θ) = 2.].
Let Θ = {x ∈ ℝ^d : ‖x‖ ≤ 1} and let Assumption <ref> hold.
Then, for any estimator x̂_T based on the observations ((x_t, y_t), (x_t′, y_t′), t = 1,…,T), where ((x_t, x_t′), t = 1,…,T) are obtained by any strategy in the class Π_T, we have
sup_{f ∈ ℱ_{α,β}} E[f(x̂_T) − f^⋆] ≥ C min( max(α, T^{−1/2+1/β}), d/√T, (d/α)T^{−(β−1)/β} ),
and
sup_{f ∈ ℱ_{α,β}} E[‖x̂_T − x^*(f)‖²] ≥ C min( 1, d/T^{1/β}, (d/α²)T^{−(β−1)/β} ),
where C > 0 is a constant that does not depend on T, d, and α, and x^*(f) is the minimizer of f on Θ.
Some remarks are in order here. First, note that the threshold T^-1/2+1/ฮฒ on the strong convexity parameter ฮฑ plays an important role in bounds (<ref>) and (<ref>). Indeed, for ฮฑ below this threshold, the bounds start to be independent of ฮฑ. Intuitively, it seems reasonable that ฮฑ-strong convexity should be of no added value for small ฮฑ. Theorem <ref> allows us to quantify exactly how small such ฮฑ should be, namely, ฮฑโฒ T^-1/2+1/ฮฒ. In particular, for ฮฒ=2 the threshold occurs at ฮฑโ 1. Also, quite naturally, the threshold becomes smaller when the smoothness ฮฒ increases.
In the regime below the T^-1/2+1/ฮฒ threshold, the rate of (<ref>) becomes min(T^1/ฮฒ,d)/โ(T), which is asymptotically d/โ(T) independently of the smoothness index ฮฒ and on ฮฑ. Thus, we obtain that d/โ(T)
is a lower bound over the class of simply convex functions. On the other hand, the achievable rate for convex functions is shown to be d^16/โ(T) in <cit.> and improved to d^4.5/โ(T)
in <cit.>
(both results are up to poly-logarithmic factors, and under sub-Gaussian noise ฮพ_t).
The gap between our lower bound d/โ(T) and these upper bounds is only in the dependence on the dimension, but this gap is substantial.
In the regime where α is above the T^{−1/2+1/β} threshold, our results imply that the gap between the upper and lower bounds is much smaller. Thus, our upper bounds in this regime scale as (d^{2−2/β}/α)T^{−(β−1)/β}, while the lower bound of
Theorem <ref> is of the order (d/α)T^{−(β−1)/β}.
Consider now the case ฮฒ=2. Then the lower bounds (<ref>) and (<ref>) are of order d/(max(ฮฑ, 1)โ(T)) and d/(max(ฮฑ^2, 1)โ(T)), respectively, under the condition Tโฅ d^2 guaranteeing non-triviality of the rates. If, in addition, ฮฑโณ 1 (meaning that ฮฑ is above the threshold ฮฑโ 1) we obtain
the lower rates d/(ฮฑโ(T)) and d/(ฮฑ^2โ(T)), respectively. Comparing this remark to Corollaryย <ref> we obtain the following result.
Let β = 2 and let the assumptions of Theorem <ref> and Corollary <ref> hold. If α ≥ 1 and T ≥ max(d², L̄²d, L̄⁴G⁴), then there exist positive constants c, C that do not depend on T, d, and α such that we have the following bounds on the minimax risks:
c d/(α√T) ≤ inf_{x̂_T} sup_{f ∈ ℱ_{α,β}} E[f(x̂_T) − f^⋆] ≤ C d/(α√T),
and
c d/(α²√T) ≤ inf_{x̂_T} sup_{f ∈ ℱ_{α,β}} E[‖x̂_T − x^*(f)‖²] ≤ C d/(α²√T),
where x^*(f) is the minimizer of f on Θ, and
the infimum is over all estimators x̂_T based on query points obtained via strategies in the class Π_T. The minimax rates in (<ref>) and (<ref>) are attained by the
estimator x̂_T = x̄_T with parameters as in Corollary <ref>.
Thus, the weighted average estimator _T as in Corollaryย <ref> is minimax optimal with respect to all the three parameters T, d, and ฮฑ, both in the optimization error and in the estimation risk. Note that we introduced the condition Tโฅmax(^2d,^4G^4) in Corollaryย <ref> to guarantee that the upper bounds are of the required order, cf. Corollaryย <ref>. Thus, G is allowed to be a function of T that grows not too fast. Since G>ฮฑ this condition also prevents ฮฑ from being too large, that is, Corollaryย <ref> does not hold if ฮฑโณ T^1/4.
The issue of finding the minimax optimal rates in gradient-free stochastic optimization under strong convexity and smoothness assumptions has a long history. It was initiated in <cit.> and more recently developed in <cit.>.
It was shown in <cit.> that the minimax optimal rate on the class of ฮฑ-strong convex and ฮฒ-Hรถlder functions scales as c(ฮฑ,d)T^-(ฮฒ-1)/ฮฒ for ฮฒโฅ 2, where c(ฮฑ,d) is an unspecified function of ฮฑ and d (for d=1 and integer ฮฒโฅ 2 an upper bound of the same order was earlier derived in
<cit.>). The issue of establishing non-asymptotic fundamental limits as function of the main parameters of the problem (ฮฑ, d and T) was first addressed in <cit.> giving a lower bound ฮฉ(โ(d/T)) for ฮฒ=2, without specifying the dependency on ฮฑ. This was improved to ฮฉ(d/โ(T)) when ฮฑโ 1 by <cit.> who also claimed that the rate d/โ(T) is optimal for ฮฒ=2
referring to an upper bound in <cit.>. However, invoking <cit.> in the setting with random noise ฮพ_t (for which the lower bound of <cit.> was proved) is not legitimate because in <cit.> the observations are considered as a Lipschitz function of t. The complete proof of minimax optimality of the rate d/โ(T) for ฮฒ=2 under random noise was later provided in <cit.>. However, the upper and the lower bounds in <cit.> still differ in their dependence on ฮฑ. Corollaryย <ref> completes this line of work by establishing the minimax optimality as a function of the whole triplet (T, d, ฮฑ) for ฮฒ=2.
The main lines of the proof of Theoremย <ref> followย <cit.>. However, inย <cit.> the assumptions on the noise are much more restrictive โ the random variables ฮพ_t are assumed iid and instead ofย (<ref>) a much stronger condition is imposed, namely, a bound on the KullbackโLeibler divergence.
In particular, in order to use the KullbackโLeibler divergence between two distributions we need one of them to be absolutely continuous with respect to the other. Using the Hellinger distance allows us to drop this restriction. For example, if F_t is a distribution with bounded support then the KullbackโLeibler divergence between F_t(ยท) and F_t(ยท+v) is +โ while the Hellinger distance is finite.
Appendix
In this appendix we first provide some auxiliary results and then prove the results stated in the main body of the paper.
Additional notation
Let X_1, X_2 be two random variables; we write X_1 =^d X_2 to denote their equality in distribution. We also denote by Γ : ℝ_+ → ℝ_+ the gamma function defined, for every z > 0, as
Γ(z) = ∫_0^∞ x^{z−1}exp(−x)dx.
ยง CONSEQUENCES OF THE SMOOTHNESS ASSUMPTION
Let us first provide some immediate consequences of the smoothness assumption that we consider.
For all k ∈ ℕ∖{0} and all u ∈ ℝ^d it holds that
f^(k)(x)[u]^k
=
Σ_{|m_1| = ⋯ = |m_k| = 1} D^{m_1 + ⋯ + m_k}f(x) u^{m_1 + ⋯ + m_k}
=
Σ_{|m| = k} (k!/m!) D^mf(x) u^m.
ย
The first equality of the remark follows from the definition. For the second one it is sufficient to show that for each = (m_1, โฆ, m_d)^โคโ^d with || = k there exist exactly k! / ! distinct choices of (_1, โฆ, _k) โ (^d)^k with |_1| = โฆ = |_k| = 1 and _1 + โฆ + _k =.
To see this, we map โ^d into a word containing letters from {a_1, a_2, โฆ, a_d} as
โฆ W() โa_1โฆ a_1_m_1-timesa_2โฆ a_2_m_2-timesโฆa_dโฆ a_d_m_d-times.
By construction, each letter a_j is repeated exactly m_j-times in W().
Furthermore, if || = k, then W() contains exactly k letters.
From now on, fix an arbitrary โ^d with || = k.
Given (_1, โฆ, _k) โ (^d)^k such that |_1| = โฆ = |_k| = 1 and _1 + โฆ + _k =, define[The summation of words is defined as concatenation.]
(_1, โฆ, _k) โฆ W(_1) + W(_2) + โฆ + W(_k).
We observe that the condition _1 + โฆ + _k =, implies that the word W(_1) + W(_2) + โฆ + W(_k) is a permutation of W(). A standard combinatorial fact states that the number of distinct permutations of W() is given by the multinomial coefficient, i.e., by k! / !. Since the mapping (_1, โฆ, _k) โฆ W(_1) + W(_2) + โฆ + W(_k) is invertible, we conclude.
Assume that f ∈ ℱ_β(L) for some β ≥ 2 and L > 0. Let z ∈ ℝ^d with ‖z‖ = 1 and define the function g_z : ℝ^d → ℝ as g_z(x) ≜ ⟨∇f(x), z⟩, x ∈ ℝ^d. Then g_z ∈ ℱ_{β−1}(L).
Set โโฮฒ.
Note that since f is โ times continuously differentiable, then g_ is โ - 1 times continuously differentiable.
Furthermore, for any ^1, โฆ, ^โ - 1โ^d
g^(โ - 1)_()[^1, โฆ, ^โ - 1]
=
โ_|_1| = โฆ = |_โ - 1| = 1 D^_1 + โฆ + _โ-1 g_() h_1^_1ยทโฆยท_โ - 1^m_โ - 1
=
โ_|_1| = โฆ = |_โ| = 1 D^_1 + โฆ + _โ f() h_1^_1ยทโฆยท_โ - 1^_โ - 1^_โ
=
f^(โ)()[^1, โฆ, ^โ - 1, ].
Hence, for any , โ^d we can write by definition of the norm of a โ-1-linear form
g^(โ - 1)_() - g^(โ - 1)_()
โค= supg^(โ - 1)_()[^1, โฆ, ^โ-1] - g^(โ - 1)_()[^1, โฆ, ^โ-1]^j = 1 j โ [โ-1]
โค=
supf^(โ)()[^1, โฆ, ^โ - 1, ] - f^(โ)()[^1, โฆ, ^โ - 1, ]^j = 1 j โ [โ-1]
โคโคf^(โ)() - f^(โ)()โค L - ^ฮฒ - โ.
Fix some real β ≥ 2 and assume that f ∈ ℱ_β(L).
Then, for all x, z ∈ ℝ^d,
|f(x) − Σ_{0 ≤ |m| ≤ ℓ} (1/m!) D^mf(z)(x − z)^m| ≤ (L/ℓ!)‖x − z‖^β.
Fix some , โ^d. Taylor expansion yields that, for some c โ (0, 1),
f() = โ_0โค ||โคโ -11/!D^f()(-)^ + โ_|| = โ1/!D^f( + c( - ))(-)^.
Thus, using Remarkย <ref> and the fact that f โF_ฮฒ(L), we get
|f()-โ_||โคโ1/!D^f()(-)^m|
=
โ_ || = โ1/!(D^f( + c( - )) - D^f())(-)^
=
1/โ!f^(โ)( + c( - ))[ - ]^โ - f^(โ)()[ - ]^โ
โคL/โ! - ^โc( -
)^ฮฒ - โโคL/โ! - ^ฮฒ.
ยง BIAS AND VARIANCE OF THE GRADIENT ESTIMATORS
ยง.ยง Gradient estimator with ℓ_2 randomization
In this section we prove Lemmaย <ref> that provides a bound on the
bias of the gradient estimator with โ_2 randomization. The variance of this estimator is evaluated in Lemmaย <ref> in the main body of the paper.
We will need the following auxiliary lemma.
Let f : โ^d โโ be a continuously differentiable function.
Let r, ^โ, ^โ be uniformly distributed on [-1,1], ^d_2, and ^d_2, respectively. Then, for any h >0, we have
[โ f(+hr^โ)rK(r)]=d/h[f( + hr^โ)^โ K(r)].
Fix r โ [-1,1] โ{0}. Define ฯ:โ^d โโ as ฯ () = f( + hr) K(r) and note that โฯ() = hrโ f( + hr)K(r). Hence, we have
[โ f(+hr^โ)K(r) | r] = 1/hr[โฯ(^โ) | r]
=
d/hr[ฯ(^โ)^โ| r]
=
d/hrK(r)[f(+hr^โ)^โ| r],
where the second equality is obtained from a version of Stokes' theoremย <cit.>. Multiplying by r from both sides, using the fact that r follows a continuous distribution, and taking the total expectation concludes the proof.
Using Lemmaย <ref>, the fact that โซ_-1^1 rK(r)แน = 1, and the variational representation of the Euclidiean norm, we can write
[_t^โ|_t]-โ f(_t) = sup_โ^d_2[(โ_ f(+h_tr_t^โ) - โ_f())r_tK(r_t)],
where we recall that ^โ is uniformly distributed on ^d_2. Lemmaย <ref> asserts that for any โ^d_2 the directional gradient โ_f(ยท) is (ฮฒ-1, L)-Hรถlder.
Thus, due to Lemmaย <ref> we have the following Taylor expansion:
โ_f(_t+h_tr_t^โ) = โ_ f(_t) +โ_1โค||โคโ -1(r_th_t)^||/!D^โ_f(_t)(^โ)^+R(h_tr_t^โ),
where the residual term R(ยท) satisfies |R()|โคL(โ - 1)!^ฮฒ - 1.
Substitutingย (<ref>) inย (<ref>) and using the โzeroing-outโ properties of the kernel K, we deduce that
[_t^โ|_t]- โ f(_t)โคฮบ_ฮฒh_t^ฮฒ-1L/(โ - 1)!^โ^ฮฒ-1 = ฮบ_ฮฒh_t^ฮฒ-1L/(โ - 1)!d/d+ฮฒ-1,
where the last equality is obtained from the fact that ^โ^q = d/d+q, for any q โฅ0.
ยง.ยง Gradient estimator with ℓ_1 randomization
In this section, we prove Lemmaย <ref> that gives a bound
on the bias of our gradient estimator with โ_1 randomization, and Lemmaย <ref>,
that provides a Poincarรฉ type inequality crucial for the control of its variance.
Let ฮถ be a real valued random variable with [ฮถ^2] โค 4ฯ^2 and let ^โข be distributed uniformly on ^d_1. Assume that ฮถ and ^โข are independent from each other and from the random variable r, which is uniformly distributed on [-1,1]. In order to control the bias and variance of the gradient estimator (<ref>) for any fixed t, it is sufficient to do it for the random vector
_, h^โข = d/2hf( + hr^โข) - f( - hr^โข) + ฮถ(^โข) K(r),
where ฮถ = ฮพ_t - ฮพ'_t.
ยง.ยง.ยง Control of the bias
Let ^โข be uniformly distributed on ^d_1 and ^โข be uniformly distributed on ^d_1.
Fix some โ^d and h > 0, and let Assumptionย <ref> be fulfilled, then the estimator inย (<ref>) satisfies
[_, h^โข] = [โ f( + hr^โข)rK(r)].
The proof is analogous to that of Lemmaย <ref> usingย <cit.>.
In order to obtain a bound on the bias of the estimator inย (<ref>) we need the following result, which controls the moments of the Euclidean norm of ^โข.
Let ζ̃^• ∈ ℝ^d be distributed uniformly on B^d_1. Then for any β ≥ 1 it holds that
E[‖ζ̃^•‖^β] ≤ c_{β+1} d^{β/2} Γ(β+1) Γ(d+1) / Γ(d+β+1),
where c_{β+1} = 2^{β/2} for 1 ≤ β < 2 and c_{β+1} = 1 for β ≥ 2.
Let W = (W_1, …, W_d) and W_{d+1} be random variables following the Laplace distribution with mean 0 and scale parameter 1.
Then, following <cit.>, we have
ζ̃^• =_d W / (‖W‖_1 + |W_{d+1}|),
where the sign =_d stands for equality in distribution. Furthermore, it follows from <cit.> (see also <cit.>) that
(W, |W_{d+1}|) / (‖W‖_1 + |W_{d+1}|)   and   ‖W‖_1 + |W_{d+1}|
are independent.
Hence,
E[‖ζ̃^•‖^β] = E[(Σ_{j=1}^{d} W_j² / (‖W‖_1 + |W_{d+1}|)²)^{β/2}] = E[‖W‖^β] / E[‖(W, W_{d+1})‖_1^β],
where the equality follows from the independence recalled above. Note that |W_j| is an exp(1) random variable for any j = 1, …, d.
Thus, if 1 ≤ β < 2, by Jensen's inequality we can write
E[‖W‖^β] = E[(Σ_{j=1}^{d} W_j²)^{β/2}] ≤ (Σ_{j=1}^{d} E[W_j²])^{β/2} = d^{β/2} E[W_1²]^{β/2} = d^{β/2} Γ(3)^{β/2}.
If β ≥ 2, again by Jensen's inequality we have
E[‖W‖^β] = d^{β/2} E[((1/d) Σ_{j=1}^{d} W_j²)^{β/2}] ≤ d^{β/2−1} Σ_{j=1}^{d} E[|W_j|^β] = d^{β/2} E[|W_1|^β] = d^{β/2} Γ(β+1).
It remains to provide a suitable expression for E[‖(W, W_{d+1})‖_1^β]. We observe that ‖(W, W_{d+1})‖_1 follows the Erlang distribution with parameters (d+1, 1) (as a sum of d+1 exp(1) random variables).
Hence, using the expression for the density of the Erlang distribution we get
E[‖(W, W_{d+1})‖_1^β] = (1/Γ(d+1)) ∫_0^∞ x^{d+β} exp(−x) dx = Γ(d+β+1) / Γ(d+1).
Combining (<ref>)–(<ref>) proves the lemma.
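The distributional identity used in this proof is easy to exercise numerically. The following sketch (an illustration under the Laplace representation above; the right-hand side reconstructs the bound of the lemma) samples ζ̃^• and compares a Monte Carlo estimate of E‖ζ̃^•‖^β with that bound.

import math
import numpy as np

rng = np.random.default_rng(1)
d, beta, n = 10, 2.5, 200_000

W = rng.laplace(scale=1.0, size=(n, d + 1))                   # d+1 i.i.d. Laplace(0,1) per sample
zeta_ball = W[:, :d] / np.abs(W).sum(axis=1, keepdims=True)   # uniform on the l1-ball B_1^d
mc = np.mean(np.linalg.norm(zeta_ball, axis=1) ** beta)

c = 2 ** (beta / 2) if beta < 2 else 1.0
bound = c * d ** (beta / 2) * math.exp(
    math.lgamma(beta + 1) + math.lgamma(d + 1) - math.lgamma(d + beta + 1))
print(mc, bound)   # the Monte Carlo value should not exceed the bound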
Using Lemma <ref> and following the same lines as in the proof of Lemma <ref> we deduce that
‖E[ĝ^•_t | x_t] − ∇f(x_t)‖ ≤ κ_β h_t^{β−1} (L/(ℓ−1)!) E[‖ζ̃^•‖^{β−1}] ≤ κ_β h_t^{β−1} (L/(ℓ−1)!) c_β d^{(β−1)/2} Γ(β) Γ(d+1) / Γ(d+β),
where the last inequality is due to Lemma <ref>.
Next, recall that the Gamma function satisfies Γ(z+1) = zΓ(z) for any z > 0. Applying this relation iteratively and using the fact that ℓ = ⌊β⌋ we get
Γ(d+1)/Γ(d+β) = Γ(d+1) / (Γ(d + (β−ℓ)) ∏_{i=1}^{ℓ} (d+β−i)) ≤ (d+β−ℓ)^{1−(β−ℓ)} / ∏_{i=1}^{ℓ} (d+β−i) ≤ 1/d^{β−1},
where β−ℓ ∈ (0,1] and the first inequality is obtained from <cit.>. Proceeding analogously we obtain that Γ(β)/(ℓ−1)! ≤ ℓ^{β−ℓ}. Combining this bound with the two preceding displays yields the lemma.
ยง.ยง.ยง Poincarรฉ inequality for the control of the variance
We now prove the Poincarรฉ inequality of Lemmaย <ref> used to control the variance of the โ_1-randomized estimator.
The beginning of the proof is the same as inย <cit.>.
In particular, without loss of generality we assume that [G()] = 0, and
consider first the case of continuously differentiable G.
Let W = (W_1, …, W_d) be a vector such that the components W_j are i.i.d. Laplace random variables with mean 0 and scale parameter 1. Set η(W) = W / ‖W‖_1.
Lemma 1 in <cit.> asserts that, for ζ uniformly distributed on ∂B_1^d,
η(W) =_d ζ and η(W) is independent of ‖W‖_1.
Furthermore, in the proof of Lemma 3 in <cit.>, it is shown that
Var(G(ζ)) ≤ (4/(d(d−2))) E[‖I − η(W)(sign(η(W)))^⊤‖² ‖∇G(η(W))‖²],
where I is the identity matrix, ‖·‖ applied to matrices denotes the spectral norm, and sign(·) applied to vectors denotes the vector of signs of the coordinates.
From this point, the proof diverges from that of Lemma 3 in <cit.>.
Instead of merely upper bounding the spectral norm of I − η(W)(sign(η(W)))^⊤ as was done in that paper, we compute it exactly, which leads to the main improvement. Namely, Lemma <ref> proved below gives
‖I − η(W)(sign(η(W)))^⊤‖² = d‖η(W)‖².
Combining this equality withย (<ref>) we obtain the first bound of the lemma.
The second bound of the lemma (regarding Lipschitz functions G) is deduced from the first one by the same argument as inย <cit.>.
Let a ∈ ℝ^d be such that ‖a‖_1 = 1.
Then,
‖I − a (sign(a))^⊤‖ = √(d) ‖a‖.
Let = /, = () / โ(d), and ฮณ = โ(d). Then, since 1 = _1 = () we have = 1/ฮณ.
Consider the matrix = [, _2, โฆ, _d], such that ^โค = ^โค =. Let _1 = (1, 0,
โฆ, 0)^โค.
For any matrix Bโ^dร d we have B = ^โค B and B^2 = B B^โค.
Using these remarks and the fact that ^2 = 1, ^โค = _1, we deduce that
- (())^โค^2
=
( - ฮณ_1 (^โค)^โค)( - ฮณ_1 (^โค)^โค)^โค
=
- ฮณ_1 (^โค)^โค - ฮณ(^โค)_1 + ฮณ^2_1_1^โค = ,
where
= [[ ฮณ^2 - 1 -ฮณ^โค; -ฮณ ]],
with ()_j = _j+1 for j = 1, โฆ, d-1.
Let us find the eigenvalues of . For any ฮปโ, using the expression for the determinant of a block matrix we get
( - ฮป) = (1 - ฮป)^d - 1ฮณ^2 - 1 - ฮป -ฮณ^2^2/1-ฮป.
Note that 1 = ^2 = 1/ฮณ^2 + ^2.
Hence,
( - ฮป)
=
(1 - ฮป)^d - 2(1 - ฮป)(ฮณ^2 - 1 - ฮป) - (ฮณ^2 - 1)
= (1 - ฮป)^d - 2(ฮป - ฮณ^2)ฮป.
Thus, - (())^โค = max{ฮณ, 1} = max{โ(d), 1}. We conclude the proof by observing that โ(d)โฅ_1 = 1.
Finally, we provide the following auxiliary lemma used in the proof of Lemmaย <ref>.
For all d ≥ 3 and all x ∈ ℝ^d, h > 0 it holds that
E[‖x + hζ^•‖² ‖ζ^•‖²] ≤ (2/(d+1)) (‖x‖ + h√(2/d))².
Observe that the vector |^โข| โ (|ฮถ^โข_1|, โฆ, |ฮถ^โข_d|)^โค follows the Dirichlet distribution (i.e., the uniform distribution of the probability simplex on d atoms).
In what follows we will make use of the following expression for the moments of the Dirichlet distribution:
[(^โข)^] = ฮ(d)/ฮ(d + ||)โ_i = 1^dฮ(m_i + 1) = (d-1)!!/(d - 1 + ||)!,
for any multi-index = (m_1, โฆ, m_d) โ^d with even coordinates.
Usingย (<ref>) we get
^โข^2 = 2/d+1.
Furthermore, using the multinomial identity and the expression for the moments inย (<ref>) we find
^โข^4
=
โ_||=22/![(^โข)^2]
=
โ_||=22/!ยท(d-1)!(2)!/(d+3)!
=
2(d-1)!/(d+3)!โ_|| = 2(2)!/!.
Direct calculations show that
โ_|| = 2(2)!/! = 2d(d + 5).
Hence, we deduce that, for all d โฅ 1,
^โข^4 = 4d!(d+5)/(d+3)!.
Note that d(d+5)/(d+2)(d+3)โค 1 for all d โฅ 1. Thus,
^โข^4 = 4d!(d+5)/(d+3)! = 4(d+5)/(d+1)(d+2)(d+3)โค4/d(d+1).
Finally, observe that by the Cauchy-Schwarz inequality,
( + h^โข)^2^โข^2
โค
h^2^โข^4 + 2hโ(^โข^2^โข^4) + ^2^โข^2
= โ(^โข^2) + hโ(^โข^4)^2.
Combining this bound withย (<ref>) andย (<ref>) concludes the proof.
ยง A TECHNICAL LEMMA
In this section, we provide a lemma, which will be useful to handle recursive relations in the main proofs. It is a direct extension ofย <cit.>.
Let {δ_t}_{t≥1} be a sequence of real numbers such that for all integers t > t_0 ≥ 1,
δ_{t+1} ≤ (1 − c/t) δ_t + Σ_{i=1}^{N} a_i / t^{p_i+1},
where c ≥ 1, p_i ∈ (0, c) and a_i ≥ 0 for i ∈ [N]. Then for t ≥ t_0 ≥ c+1, we have
δ_t ≤ 2(t_0−1)δ_{t_0}/t + Σ_{i=1}^{N} a_i / ((c−p_i) t^{p_i}).
For any fixed t > 0 the convexity of the mapping u ↦ g(u) = (t+u)^{−p} implies that g(1) − g(0) ≥ g'(0), i.e., 1/t^p − 1/(t+1)^p ≤ p/t^{p+1}.
Thus, using the fact that 1/t^p − p/t^{p+1} = ((c−p) + (t−c))/t^{p+1} ≤ 1/(t+1)^p,
a_i/t^{p+1} ≤ (a_i/(c−p)) { 1/(t+1)^p − (1 − c/t) (1/t^p) }.
Using (<ref>), (<ref>) and rearranging terms, for any t ≥ t_0 we get
δ_{t+1} − Σ_{i=1}^{N} a_i/((c−p_i)(t+1)^{p_i}) ≤ (1 − c/t) { δ_t − Σ_{i=1}^{N} a_i/((c−p_i) t^{p_i}) }.
Letting ω_t = δ_t − Σ_{i=1}^{N} a_i/((c−p_i) t^{p_i}) we have ω_{t+1} ≤ (1 − c/t) ω_t. Now, if ω_{t_0} ≤ 0 then ω_t ≤ 0 for any t ≥ t_0 and thus
(<ref>) holds. Otherwise, if ω_{t_0} > 0 then for t ≥ t_0 + 1 we have
ω_t ≤ ω_{t_0} ∏_{i=t_0}^{t−1} (1 − c/i) ≤ ω_{t_0} ∏_{i=t_0}^{t−1} (1 − 1/i) = (t_0−1)ω_{t_0}/(t−1) ≤ 2(t_0−1)δ_{t_0}/t.
Thus, (<ref>) holds in this case as well.
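A quick numerical sanity check of this recursion lemma (a hedged illustration with arbitrarily chosen constants, not part of the proof) can be run as follows.

import numpy as np

c, t0 = 2.0, 5
a, p = np.array([1.0, 0.3]), np.array([0.5, 1.5])   # a_i >= 0, p_i in (0, c), t0 >= c + 1

delta = np.zeros(2001)
delta[t0] = 1.0
for t in range(t0, 2000):                            # unroll the recursion with equality
    delta[t + 1] = (1 - c / t) * delta[t] + np.sum(a / t ** (p + 1))

for t in [10, 100, 1000, 2000]:
    bound = 2 * (t0 - 1) * delta[t0] / t + np.sum(a / ((c - p) * t ** p))
    assert delta[t] <= bound + 1e-12                 # the bound of the lemma holds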
ยง UPPER BOUNDS
ยง.ยง Upper bounds: Only smoothness assumption
For brevity we write _t[ยท] in place of [ยท|_t].
Using Lipschitz continuity of โ f and the definition of the algorithm inย (<ref>) we can write
_t[f(_t+1)]
โค
f(_t) - ฮท_t โ f(_t)_t[_t] + ฮท_t^2/2_t[_t^2]
โค
f(_t) - ฮท_t โ f(_t)^2 + ฮท_tโ f(_t)_t[_t] - โ f(_t)
+
ฮท_t^2/2_t[_t^2 ].
Furthermore, invoking the assumption on the bias and the variance of _t and using the fact that 2ab โค a^2 + b^2 we deduce
_t[f(_t+1)] - f(_t)
โค
- ฮท_t โ f(_t)^2 + ฮท_tb_tโ f(_t)
+
ฮท_t^2/2v_t + V_1 โ f(_t)^2
โค
- ฮท_t โ f(_t)^2 + ฮท_t/2b_t^2 + โ f(_t)^2
+
ฮท_t^2/2v_t + V_1 โ f(_t)^2
=
- ฮท_t/21 - ฮท_t V_1โ f(_t)^2 + ฮท_t/2b_t^2 + ฮท_t v_t.
Let S be a random variable with values in {1, โฆ, T}, which is independent from _1, โฆ, _T, _1, โฆ, _T and such that
(S = t) = ฮท_t1 - ฮท_t V_1/โ_t = 1^Tฮท_t1 - ฮท_t V_1.
Assume that ฮท_t inย (<ref>) is chosen to satisfy ฮท_t m < 1 and that f^โ > -โ.
Taking total expectation in (<ref>) and summing up these inequalities for t โค T, combined with the fact that f(_T + 1) โฅ f^โ, we deduce that
[โ f(_S)^2] โค2([f(_1)] - f^โ) + โ_t = 1^Tฮท_tb_t^2 + ฮท_t v_t/โ_t = 1^Tฮท_t1 - ฮท_t V_1.
The proof will be split into two parts: for gradient estimators (<ref>) and (<ref>), respectively.
Both of these proofs follow from Lemma <ref>, which states that
[โ f(_S)^2] โค2ฮด_1+ โ_t = 1^Tฮท_tb_t^2 + ฮท_t v_t/โ_t = 1^Tฮท_t1 - ฮท_t V_1,
where ฮด_1 = [f(_1)] - f^โ.
Using the corresponding bounds on the bias b_t and variance v_t, we substitute these values in the above inequality with ฮท_t and h_t obtained by optimizing the obtained expressions.
We start with the part of the proof that is common for both gradient estimators.
Introduce the notation
Δ_T := d^{−2(β−1)/(2β−1)} T^{−β/(2β−1)}.
Using this notation, we consider algorithm (<ref>) with gradient estimators (<ref>) or (<ref>) such that
η_t = min(𝖢/d, Δ_T) and
h_t = 𝗁 T^{−1/(2(2β−1))},
where
(𝖢, 𝗁) = ((8κ)^{−1}, d^{1/(2β−1)}) for estimator (<ref>),
(𝖢, 𝗁) = ((72κL̄)^{−1}, d^{(2β+1)/(4β−2)}) for estimator (<ref>).
Given the values of V_1 in Table <ref>,
the choice of ฮท_t for both algorithms ensures that
1/2โค 1 - ฮท_t V_1.
Thus we get fromย (<ref>) that both algorithms satisfy
[โ f(_S)^2] โคโ_t = 1^Tฮท_t^-1(4ฮด_1 + 2โ_t = 1^Tฮท_tb_t^2 + 2โ_t = 1^Tฮท_t^2v_t).
Furthermore, since ฮท_t = min (๐ถ / d, ฮ_T), then in both cases we have
โ_t = 1^Tฮท_t^-1 = maxd/T๐ถ, 1/Tฮ_Tโคd/T๐ถ + 1/Tฮ_T.
Using this bound inย (<ref>) we deduce that
[โ f(_S)^2] โคd/T๐ถ + 1/Tฮ_T(4ฮด_1 + 2โ_t = 1^Tฮท_tb_t^2 + 2โ_t = 1^Tฮท_t^2v_t).
Finally, by the definition of ฮท_t we have ฮท_t โคฮ_T for all t = 1, โฆ, T, which yields
[โ f(_S)^2] โคd/๐ถ + 1/ฮ_T4ฮด_1/T +
2dฮ_T/T๐ถ + 1/Tโ_t = 1^T{b_t^2 + ฮ_Tv_t}.
In the rest of the proof, we use the algorithm specific bounds on b_t and v_t as well as the particular choice of ๐ถ and ๐ฅ in order to get the final results.
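For illustration, a schematic implementation of the procedure analyzed in this section is given below (an assumption-laden sketch: the constants 𝖢 and 𝗁 are replaced by user-supplied placeholders C0 and H0, and the ℓ₂-randomized two-point evaluation is written inline).

import numpy as np

def zo_gd(f, x0, T, beta, kernel, rng, C0=0.1, H0=1.0):
    # Gradient-free method with eta_t = min(C0/d, Delta_T) and
    # h_t = H0 * T^{-1/(2(2*beta-1))}, mirroring the parameter choice above.
    d = x0.size
    Delta_T = d ** (-2 * (beta - 1) / (2 * beta - 1)) * T ** (-beta / (2 * beta - 1))
    eta = min(C0 / d, Delta_T)
    h = H0 * T ** (-1.0 / (2 * (2 * beta - 1)))
    x, iterates = x0.copy(), []
    for _ in range(T):
        r = rng.uniform(-1.0, 1.0)
        g = rng.standard_normal(d)
        zeta = g / np.linalg.norm(g)
        grad_est = (d / (2 * h)) * (f(x + h * r * zeta) - f(x - h * r * zeta)) * kernel(r) * zeta
        x = x - eta * grad_est
        iterates.append(x.copy())
    return iterates   # the guarantee is stated for x_S with S drawn among the iterates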
ยง.ยง Bounds for the gradient estimator (<ref>) - โ_2 randomization
Lemma <ref> for the bias and Lemma <ref> for the variance
imply that
b_t^2 โค(ฮบ_ฮฒ L/(โ-1)!)^2h_t^2(ฮฒ-1) and
v_t = 4dฮบ^2 h_t^2+d^2ฯ^2ฮบ/2h_t^2 ,
and V_1 = 4dฮบ.
Using these bounds inย (<ref>) we get
[โ f(_S)^2] โคd/๐ถ + ฮ_T^-14ฮด_1/T
โค+
dฮ_T/T๐ถ + 1/Tโ_t = 1^T{_3h_t^2(ฮฒ-1)
+
ฮ_Td^2(_4d^-1h_t^2+_5h_t^-2) }
โค( d/๐ถ+ฮ_T^-1)4ฮด_1/T + dฮ_T+1/Tโ_t=1^T{_6h_t^2(ฮฒ-1)+_7d^2ฮ_T(d^-1h_t^2+h_t^-2)}
where _3 = (ฮบ_ฮฒ L(โ-1)!)^2, _4 = 4ฮบ^3, _5 = ฮบฯ^2/2, and _6 = 2_3(๐ถ^-1+1), _7 = 2(๐ถ^-1+1)(_4 + _5).
Since h_t = h_T for t = 1,โฆ,T, inequalityย (<ref>) has the form
[โ f(_S)^2] โค(d/๐ถ+ฮ_T^-1)4ฮด_1/T + (dฮ_T+1)(_6h_T^2(ฮฒ-1)+_7d^2ฮ_T(d^-1h_T^2+h_T^-2)).
After substituting the expressions for ฮ_T and h_T into the above bound, the right hand side ofย (<ref>) reduces to
4d/T๐ถฮด_1 +{4ฮด_1 + ((d/T^ฮฒ)^1/2ฮฒ-1+1)(_6+_7(1+d^5-2ฮฒ/2ฮฒ-1T^-2/2ฮฒ-1))}(d^2/T)^ฮฒ-1/2ฮฒ-1.
To conclude, we note that the assumption T โฅ d^1/ฮฒ, implies that for all ฮฒโฅ 2 we have d^5-2ฮฒ2ฮฒ-1T^-2/2ฮฒ-1โค 1 and (d/T^ฮฒ)^1/2ฮฒ-1โค 1. Therefore, the final bound takes the form
[โ f(_S)^2]
โค4d/T๐ถฮด_1 +(4ฮด_1 + 2(_6+2_7))(d^2/T)^ฮฒ-1/2ฮฒ-1โค(_1ฮด_1 + _2)(d^2/T)^ฮฒ-1/2ฮฒ-1,
where _1 = 4(๐ถ^-1+1) and _2 = 2(_6+2_7).
ยง.ยง Bounds for the gradient estimatorย (<ref>) - โ_1 randomization
Lemma <ref> for the bias and the bound (<ref>) for the variance imply that
b_t² ≤ (c_β κ_β ℓ L)² h_t^{2(β−1)} d^{1−β},
v_t = 72κL̄² h_t² + d³σ²κ / h_t², and V_1 = 36dκ,
with ℓ = ⌊β⌋. Using these bounds in (<ref>) we get
[โ f(_S)^2]โค(dฮ_T+1) (_6d^1-ฮฒh_T^2(ฮฒ-1)+ฮ_T(_7h_T^2+_8d^3h_T^-2))
+(d/๐ถ+ฮ_T^-1)4ฮด_1/T,
where the constants are defined as
_6 = 2(c_ฮฒฮบ_ฮฒโ L)^2(๐ถ^-1+1), _7 = 144ฮบLฬ
^3(๐ถ^-1+1), _8 = 2ฯ^2ฮบ(๐ถ^-1+1).
Substituting the expressions for ฮ_T and h_T inย (<ref>), we deduce that
[โ f(_S)^2]โค4 d/T ๐ถฮด_1+{4ฮด_1+((d/T^ฮฒ)^1/2ฮฒ-1 +1)(_6+_8+_7(d^5-2ฮฒ/T^2)^1/2ฮฒ-1)}(d^2/T)^ฮฒ-1/2ฮฒ-1.
Finally, we assumed that Tโฅ d^1/ฮฒ, which implies that both d/T^ฮฒ and d^5-2ฮฒ/T^2 are less than or equal to one. Thus, we have
[โ f(_S)^2]โค(_1ฮด_1+_2)(d^2/T)^ฮฒ-1/2ฮฒ-1,
where _1 = 4(๐ถ^-1+1), and _2=2(_6+_7+_8).
As in the proof of Theorem <ref> we use Lemma <ref>, cf. (<ref>):
[โ f(_S)^2] โค2ฮด_1 + โ_t = 1^Tฮท_t(b_t^2 + ฮท_t v_t)/โ_t = 1^Tฮท_t(1 - ฮท_t V_1).
From this inequality and the fact that, by assumption, ฮท_t = (2 V_1)^-1 we obtain:
[โ f(_S)^2] โค8 V_1ฮด_1 + 2โ_t = 1^T(b_t^2 + (2 V_1)^-1 v_t)/T.
Since the gradient estimatorsย (<ref>) andย (<ref>) satisfy Assumptionย <ref> and we
consider the case ฯ = 0, ฮฒ=2, the values b_t=bLh_t and v_t= V_2^2h_t^2 can be made as small as possible by choosing h_t small enough. Thus, we can take h_t sufficiently small to have โ_t = 1^T(b_t^2 + (2 V_1)^-1 v_t) โค V_1. Under this choice of h_t,
[โ f(_S)^2] โค (8ฮด_1 + 2) V_1/T .
Using the values of V_1 for the gradient estimatorsย (<ref>) andย (<ref>) (see Tableย <ref>) we obtain the result.
ยง.ยง Upper bounds: Smoothness and ฮฑ-gradient dominance
For brevity, we write _t[ยท] in place of [ยท|_t].
Using Lipschitz continuity of โ f <cit.> and the definition of the algorithm inย (<ref>) with ฮ=โ^d we have
_t[f(_t+1)]
โค
f(_t) - ฮท_t โ f(_t)_t[_t] + ฮท_t^2/2_t[_t^2]
โค
f(_t) - ฮท_t โ f(_t)^2 + ฮท_tโ f(_t)_t[_t] - โ f(_t)
+
ฮท_t^2/2_t[_t^2 ].
Next, invoking Assumptionย <ref> on the bias and variance of _t and using the elementary inequality 2ab โค a^2 + b^2 we get that, for the iterative procedureย (<ref>) with ฮ=โ^d,
δ_{t+1} ≤ δ_t − (η_t/2)(1 − L̄η_t V_1) E[‖∇f(x_t)‖²] + (η_t/2)(b²L²h_t^{2(β−1)} + L̄η_t(V_2 L̄² h_t² + V_3 σ² h_t^{−2})),
where δ_t = E[f(x_t) − f^⋆]. Furthermore, our choice of the step size η_t ensures that 1 − L̄η_t V_1 ≥ 1/2. Using this inequality and the fact that f is α-gradient dominant we deduce that
δ_{t+1} ≤ δ_t (1 − η_t α/2) + (η_t/2)(b²L²h_t^{2(β−1)} + L̄η_t(V_2 L̄² h_t² + V_3 σ² h_t^{−2})).
We now analyze this recursion according to the cases T > T_0 and T ≤ T_0, where T_0 := 8L̄V_1/α is the value of t at which η_t switches its regime.
First case: T > T_0.
In this case, the recursion (<ref>) has two different regimes, depending on the value of ฮท_t.
In the first regime, for any t = T_0+1,โฆ,T, we have ฮท_t = 4/ฮฑ t and (<ref>) takes the form
ฮด_t+1 โคฮด_t(1-2/t) + 2b^2L^2ยทh_t^2(ฮฒ-1)/ฮฑ t +8Lฬ
/ฮฑ^2t^2( V_2^2h_t^2+ V_3ฯ^2h_t^-2).
Additionally in this regime of t, we have h_t = (4Lฬ
ฯ^2 V_3/b^2L^2ฮฑ t)^1/2ฮฒ. Using this expression for h_t inย (<ref>) we obtain that
ฮด_t+1โคฮด_t(1-2/t) + _3ยท1/ฮฑ(ฯ^2/ฮฑ)^1-1/ฮฒ V_3( V_3/b^2L^2)^-1/ฮฒt^-2ฮฒ-1/ฮฒ
โค+_4ยท1/ฮฑ(/ฮฑ)^1+1/ฮฒ V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒ t^-2ฮฒ+1/ฮฒ,
where _3 = 2^4-2/ฮฒ, and _4 = 2^3 +2/ฮฒ. Applying Lemma <ref> to the above recursion we get
ฮด_T โค2T_0/Tฮด_T_0+1 + ฮฒ_3/(ฮฒ+1)ฮฑยท V_3( V_3/b^2L^2)^-1/ฮฒ(ฮฑ T/ฯ^2)^-ฮฒ-1/ฮฒ
+ฮฒ_4/(3ฮฒ+1)ฮฑยท V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒ(ฮฑ T/)^-ฮฒ+1/ฮฒ.
If T_0 = 0, we conclude the proof for the case T>T_0. Otherwise, we consider the second regime that corresponds to t โ [1, T_0]. In this regime, we have h_t = (4Lฬ
ฯ^2 V_3/b^2L^2ฮฑ T)^1/2ฮฒ, ฮท_t = 1/2Lฬ
V_1, and 4/(T_0+1)ฮฑโคฮท_t โค4/T_0ฮฑ. Using these expressions for h_t and ฮท_t inย (<ref>) we get that, for 1โค tโค T_0,
ฮด_t+1โคฮด_t(1 - 2/T_0+1) +2^4-2/ฮฒ/T_0ยท1/ฮฑ(ฯ^2/ฮฑ)^1-1/ฮฒ V_3( V_3/b^2L^2)^-1/ฮฒ(T^-ฮฒ-1/ฮฒ + T^1/ฮฒ/T_0)
+2^3+2/ฮฒ/T_0^2ยท1/ฮฑ(/ฮฑ)^1+1/ฮฒ V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-1/ฮฒ.
Using the rough bound 1 - 2T_0+1โค 1 and unfolding the above recursion we obtain:
ฮด_T_0+1 โคฮด_1+2^4-2/ฮฒยท1/ฮฑ(ฯ^2/ฮฑ)^1-1/ฮฒ V_3( V_3/b^2L^2)^-1/ฮฒ(T^-ฮฒ-1/ฮฒ + T^1/ฮฒ/T_0)
โค+2^3+2/ฮฒ/T_0ยท1/ฮฑ(/ฮฑ)^1+1/ฮฒ V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-1/ฮฒ.
Taking into account the definition of T_0, and the fact that T_0 โค T we further obtain:
2T_0/Tฮด_T_0+1 โค16Lฬ
V_1/ฮฑ Tฮด_1 + 2_3/ฮฑยท V_3( V_3/b^2L^2)^-1/ฮฒ(ฮฑ T/ฯ^2)^-ฮฒ-1/ฮฒ
+ 2_4/ฮฑยท(/ฮฑ)^1+1/ฮฒ V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒ(ฮฑ T/)^-ฮฒ+1/ฮฒ.
Finally, combiningย (<ref>) andย (<ref>) yields:
ฮด_T โค_1ยท V_1/ฮฑ Tฮด_1 + _2/ฮฑยท( V_3( V_3/b^2L^2)^-1/ฮฒ+ V_2^2( V_3/b^2L^2)^1/ฮฒ(ฮฑ T/ฯ^2)^-2/ฮฒฯ^-2)(ฮฑ T/ฯ^2)^-ฮฒ-1/ฮฒ,
where _1=16 and _2 = (2 + ฮฒ/ฮฒ+1)_3+(2 + ฮฒ/3ฮฒ+1)_4.
Second case: T โค T_0. In this case, we have h_t = (4Lฬ
ฯ^2 V_3/b^2L^2ฮฑ T)^1/2ฮฒ and thusย (<ref>) takes the form
ฮด_T+1 โคฮด_1 (1- 2/T_0+1)^T+2^4-2/ฮฒ/T_0ยท1/ฮฑ(ฯ^2/ฮฑ)^1-1/ฮฒ V_3( V_3/b^2L^2)^-1/ฮฒโ_t=1^T(T^-ฮฒ-1/ฮฒ + T^1/ฮฒ/T_0)
+2^3+2/ฮฒ/T_0^2ยท1/ฮฑ(/ฮฑ)^1+1/ฮฒ V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒโ_t=1^TT^-1/ฮฒ
โคฮด_1 (1- 2/T_0+1)^T +_3/ฮฑยท V_3( V_3/b^2L^2)^-1/ฮฒ(ฮฑ T/ฯ^2)^-ฮฒ-1/ฮฒ +_4/ฮฑยท V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒ(ฮฑ T/)^-ฮฒ+1/ฮฒ.
Note that, for any ฯ, T> 0, we have (1 - ฯ)^Tโคexp(-ฯ T) โค1ฯ T. Using this inequality for ฯ = 2T_0 + 1, the definition of T_0 and the fact that T+1โค 2T we obtain:
ฮด_T+1 โค_1 V_1/ฮฑ (T+1)ฮด_1
+ _2/ฮฑ( V_3( V_3/b^2L^2)^-1/ฮฒ+ V_2^2( V_3/b^2L^2)^1/ฮฒ(ฮฑ(T+1)/ฯ^2)^-2/ฮฒฯ^-2)(ฮฑ(T+1)/ฯ^2)^-ฮฒ-1/ฮฒ.
As in the proof of Theorem <ref>, we consider separately the cases T>T_0 and Tโค T_0, where T_0:=8 V_1/ฮฑ.
The case T > T_0. First, consider the algorithm at steps t= T_0,โฆ, T, where we have ฮท_t = 4/ฮฑ t. Since ฯ = 0 and ฮฒ=2, fromย (<ref>) we have
ฮด_t+1 โคฮด_t(1-2/t) + (2b^2L^2/ฮฑ t+8^3/ฮฑ^2 t^2 V_2)h_t^2.
Since ฮฒ=2, we have L =. Thus, using the assumption that h_t โค(โจ 1/ฮฑโง 1T(2b^2 + 8 ^2 V_2/ฮฑ))^-1/2 we deduce fromย (<ref>) that
ฮด_t+1โคฮด_t(1-2/t) + /ฮฑ t^2.
Applying Lemma <ref> to the above recursion gives
ฮด_Tโค2T_0/Tฮด_T_0+1 + /ฮฑ T.
If T_0 = 0, we conclude the proof for the case T>T_0. Otherwise, we consider the algorithm at steps t = 1,โฆ,T_0, where ฮท_t = 1/2 V_1 and 4/(T_0+1)ฮฑโคฮท_tโค4/T_0 ฮฑ. Fromย (<ref>) with ฯ =0 and ฮฒ=2 we obtain
ฮด_t+1โคฮด_t(1-2/T_0+1) + /ฮฑ T_0(2b^2 + 8/ฮฑ^2 V_2)h_t^2.
Using here the assumption that h_tโค(โจ 1/ฮฑโง 1T(2b^2 + 8 ^2 V_2/ฮฑ))^-1/2 and a rough bound 1 - 2T_0+1โค 1 and summing up both sides of the resulting inequality from t=1 to t=T_0 we get
ฮด_T_0+1โคฮด_1 + (ฮฑโง 1)/ฮฑ(โจ 1)T.
Combining this inequality withย (<ref>) and using the definition of T_0 and the fact that T_0 โค T we obtain the bound
ฮด_T โค16 V_1/ฮฑ Tฮด_1 + 16 V_1/ฮฑ T^2 + /ฮฑ T.
It remains to note that V_1 ≤ 36dκ, cf. Table <ref>. This implies the theorem for the case T > T_0 with C_1 = 576κ and C_2 = 1/T + 1/(C_1 d).
Second case: T โค T_0. Using the fact that h_tโค(T/ฮฑโง 1(2b^2 + 8 ^2 V_2/ฮฑ))^-1/2 and unfolding the recursion inย (<ref>) gives
ฮด_T โคฮด_1(1- 2/T_0+1)^T + /ฮฑ T.
From the elementary inequality (1 - ฯ)^Tโค1ฯ T, which is valid for all ฯ, T> 0, we obtain with ฯ = 2T_0 + 1 that
ฮด_T โคT_0+1/2Tฮด_1 + /ฮฑ TโคT_0/Tฮด_1 + /ฮฑ Tโค_1 d/ฮฑT(ฮด_1+_2),
where _1 = 288ฮบ, _2 = 1/_1d, and the last inequality follows from the facts that T_0 โค8 V_1/ฮฑ and V_1โค 36dฮบ, cf. Table <ref>.
ยง.ยง Smoothness and ฮฑ-strong convexity: unconstrained minimization
We will use the following basic lemma.
Consider the iterative algorithm defined inย (<ref>).
Let f be ฮฑ-strongly convex on โ^d, let Assumptionย <ref> be satisfied. Let the minimizer ^โ of f on be such that โ f(^โ)=0.
Then we have
[f(_t)-f^โ]โคr_t-r_t+1/2ฮท_t-r_t(ฮฑ/4-ฮท_t/2^2 V_1)+(bLh_t^ฮฒ-1)^2/ฮฑ+ฮท_t/2( V_2Lฬ
^2h_t^2 + V_3ฯ^2h_t^-2),
where r_t = [_t - ^โ^2].
Recall the notation _t[ยท] = [ยท|_t].
For any โ, by the definition of projection,
_t+1-^2 = _(_t - ฮท_t_t) - ^2 โค_t - ฮท_t_t - ^2.
Expanding the squares and rearranging the above inequality, we deduce thatย (<ref>) is equivalent to
โจ_t, _t - โฉโค_t - ^2-_t+1-^2/2ฮท_t+ฮท_t/2_t^2.
On the other hand, since f is a ฮฑ-strongly convex function on , we have
f(_t) - f() โคโจโ f(_t), _t -โฉ -ฮฑ/2_t -^2.
Combiningย (<ref>) withย (<ref>) and introducing the notation a_t = _t - ^โ^2 we deduce that
_t[f(_t) - f(^โ)] โค_t[_t] -โ f(_t)_t-^โ+1/2ฮท_t_t[a_t-a_t+1]+ฮท_t/2_t_t^2-ฮฑ/2_t[a_t]
โค
b L h_t^ฮฒ-1_t-^โ+1/2ฮท_t_t[a_t - a_t+1]
โค
+ ฮท_t/2( V_1 [โ f(_t)^2] + V_2^2h_t^2 + V_3ฯ^2h_t^-2) - ฮฑ/2_t[a_t].
Using the elementary inequality 2abโค a^2+b^2 we have
bLh_t^ฮฒ-1_t - ^โโค(b L h_t^ฮฒ-1)^2/ฮฑ+ฮฑ/4_t-^โ^2.
Substitutingย (<ref>) inย (<ref>), setting r_t = [a_t], using the fact that
โ f(_t)^2โค^2_t - ^โ^2= ^2 a_t^2
and taking the total expectation from both sides of the resulting inequality
yields the lemma.
By definition, ฮท_t โคฮฑ/4^2 V_1, so that ฮฑ/4 - ฮท_t/2^2 V_1โฅฮฑ/8 and (<ref>) implies that
[f(_t) - f^โ] โคr_t - r_t+1/2ฮท_t -ฮฑ/8r_t+(bLh_t^ฮฒ-1)^2/ฮฑ+ฮท_t/2( V_2Lฬ
^2h_t^2 + V_3ฯ^2h_t^-2).
Set T_0: = max(32^2 V_1/ฮฑ^2-1,0). This is the value of t, where ฮท_t switches its regime.
We analyze the recursion (<ref>) separately for the cases T > T_0 and T โค T_0. We will use the fact that, by the convexity of f and Jensen's inequality,
f(_T) - f^โโค2/T(T+1)โ_t = 1^Tt(f(_t) - f^โ).
First case: T > T_0.
In this case, we decompose the sum in (<ref>) into the sum over tโ [T_0 +1, T] and the sum over tโ [1, T_0].
We first evaluate the sum โ_t=T_0+1^Tt[f(_t) - f^โ].
For any tโ [T_0 +1, T], we have ฮท_t = 8/ฮฑ (t+1) and h_t = (4ฯ^2 V_3/b^2L^2t)^1/2ฮฒ. Using inย (<ref>) these values of ฮท_t and h_t, multiplying by t, and summing both sides of the resulting inequality from T_0+1 to T we deduce that
โ_t=T_0+1^Tt[f(_t) - f^โ] โคฮฑ/16โ_t=T_0+1^Tt( (r_t -r_t+1)(t+1) -2r_t)_=: I +
_4/ฮฑ(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒโ_t=T_0+1^Tt^1/ฮฒ_=: II
+
_5/ฮฑ^2 V_2( V_3ฯ^2/b^2L^2)^1/ฮฒโ_t=T_0+1^Tt^-1/ฮฒ_=: III,
where _4 = 2^3ฮฒ-2/ฮฒ, _5 = 2^2ฮฒ+2/ฮฒ and we defined the terms I, II, and III that will be evaluated separately.
It is not hard to check that I ≤ (α/16) T T_0 r_{T_0+1} since the summation in term I is telescoping. Next, we have
IIโค_4/ฮฑ(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒโ_t=1^Tt^1/ฮฒโค_6/ฮฑ(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒT^1/ฮฒ+1.
Finally,
IIIโค_5/ฮฑ^2 V_2( V_3ฯ^2/b^2L^2)^1/ฮฒโ_t=1^Tt^-1/ฮฒโค_7/ฮฑ^2 V_2( V_3ฯ^2/b^2L^2)^1/ฮฒT^1-1/ฮฒ,
where _6 = ฮฒ+1/ฮฒ_4 and _7 = ฮฒ-1/ฮฒ _5. Combining these bounds on I, II, and III we obtain
โ_t=T_0+1^Tt[f(_t)-f^โ] โคฮฑ/16TT_0r_T_0+1
+ (_6(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒ+_7 V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-2/ฮฒ)T^1/ฮฒ+1/ฮฑ .
If T_0=0 then combining (<ref>) and (<ref>) proves the theorem. If T_0โฅ 1 we need additionally to control the value r_T_0+1 on the right hand side of (<ref>). It follows fromย (<ref>) that, for 1 โค t โค T_0,
r_t+1โค r_t + 2ฮท_t(bLh_t^ฮฒ-1)^2/ฮฑ+ฮท_t^2( V_2Lฬ
^2h_t^2 + V_3ฯ^2h_t^-2).
Moreover, for 1โค tโค T_0 we have ฮท_t = ฮฑ/4^2 V_1 and ฮท_tโค8/ฮฑ (T_0+1). Therefore, unfolding the above recursion we get
r_T_0+1โค r_1 + โ_t=1^T_0(16/ฮฑ^2T_0(bLh_t^ฮฒ-1)^2 + 64/ฮฑ^2 T_0^2( V_2Lฬ
^2h_t^2 + V_3ฯ^2h_t^-2)) .
For 1โค tโค T_0 we have h_t = (4ฯ^2 V_3/b^2L^2T)^1/2ฮฒ, which yields
r_T_0+1โค r_1 + 16(_4(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒ+_5 V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-2/ฮฒ)T^1/ฮฒ/ฮฑ^2 T_0 ,
so that
ฮฑ/16T T_0 r_T_0+1โค2^2T V_1/ฮฑr_1 + (_4(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒ+_5 V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-2/ฮฒ)T^1/ฮฒ+1/ฮฑ .
It follows fromย (<ref>) andย (<ref>) that
โ_t=T_0+1^Tt[f(_t)-f^โ]
โค2^2T V_1/ฮฑr_1
+
{ (_4+_6)(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒ
+
(_5+_7) V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-2/ฮฒ}T^1/ฮฒ+1/ฮฑ ,
We now evaluate the sum โ_t=1^T_0 t[f(_t) -f^โ]. Recall that for t โ [1, T_0] the parameters h_t,ฮท_t take constant values: h_t = (4ฯ^2 V_3/b^2L^2T)^1/2ฮฒ and ฮท_t = ฮฑ/4^2 V_1. Omitting in (<ref>) the term -ฮฑ r_t/8, summing the resulting recursion from 1 to T_0 and using the inequality ฮท_tโค8/ฮฑ (T_0+1) we obtain
โ_t=1^T_0t[f(_t) -f^โ]
โค T โ_t=1^T_0[f(_t) - f^โ]
โคT r_1/2ฮท_1
+T โ_t=1^T_0((bLh_t^ฮฒ-1)^2/ฮฑ+ฮท_t/2( V_2Lฬ
^2h_t^2 + V_3ฯ^2h_t^-2))
โคT r_1/2ฮท_1
+T^2(bLh_1^ฮฒ-1)^2/ฮฑ+2T/ฮฑ( V_2Lฬ
^2h_1^2 + V_3ฯ^2h_1^-2)
=
2^2T V_1/ฮฑr_1 + (_4(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒ+ _5 V_2^2( V_3ฯ^2/ b^2L^2)^1/ฮฒT^-2/ฮฒ)T^1/ฮฒ+1/ฮฑ .
Summing upย (<ref>) andย (<ref>) and usingย (<ref>) we obtain the bound of the theorem:
[f(_T)-f^โ] โค_1^2 V_1/ฮฑ Tr_1+(_2(bL)^2/ฮฒ( V_3ฯ^2)^ฮฒ-1/ฮฒ+_3 V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-2/ฮฒ)T^-ฮฒ-1/ฮฒ/ฮฑ ,
where _1 = 8, _2 = 4_4+2_6, and _3 =4_5+2_7.
Second case: T โค T_0.
In this scenario, the summation โ_t=1^Tt[f(_t) - f^โ] is treated as in (<ref>), with the only difference that T_0 is replaced by T. As a result, we obtain the same bound as in (<ref>).
ยง.ยง Smoothness and ฮฑ-strong convexity: constrained minimization
Since sup_โโ f()โค G we get fromย (<ref>) that, for any t = 1, โฆ, T,
[f(_t)- f^โ]โคr_t-r_t+1/2ฮท_t-ฮฑ/4r_t+(bLh_t^ฮฒ-1)^2/ฮฑ+ฮท_t/2( V_1G^2+ V_2^2h_t^2+ V_3ฯ^2h_t^-2).
Multiplying both sides of (<ref>) by t, summing up from t=1 to T and using the fact that
โ_t=1^T(t(r_t-r_t+1)/2ฮท_t - ฮฑ/4t r_t) โค 0 if ฮท_t = 4/ฮฑ (t+1)
we find that
โ_t=1^Tt[f(_t)- f^โ]
โค1/ฮฑโ_t=1^T[t(bLh_t^ฮฒ-1)^2+2t/t+1( V_1G^2+ V_2^2h_t^2+ V_3ฯ^2h_t^-2)].
Since h_t = (ฯ^2 V_3/b^2L^2 t)^1/2ฮฒ we obtain
โ_t=1^Tt[f(_t)- f^โ]
โค2 V_1 G^2 T/ฮฑ +
_3/ฮฑ( V_3ฯ^2( V_3ฯ^2/b^2L^2)^-1/ฮฒ + V_2^2( V_3ฯ^2/b^2L^2)^1/ฮฒT^-2/ฮฒ)T^1+1/ฮฒ,
where _3 = 2. To complete the proof, we multiply both sides of (<ref>) by 2/T(T+1) and use (<ref>).
The proof is divided in two parts,
one for estimatorย (<ref>) and the other one for estimatorย (<ref>).
In this result we set ฮท_t =2/ฮฑ t and h_t = ๐ฅ t^-1/2ฮฒ, where ๐ฅ equals to d^1/ฮฒ for estimatorย (<ref>) and to d^2 + ฮฒ/2ฮฒ for estimatorย (<ref>).
Analysis of both algorithms starts with Lemma <ref>, which states that
[f(_T)-f^โ] โค V_1G^2 (log(T)+1)/ฮฑ T+ 1/ฮฑ Tโ_t =1^T(v_t/t+b_t^2),
for any algorithm encompassed byย (<ref>). Recall that for the application of the above inequality it is assumed that f is ฮฑ-strongly convex, is a convex compact in โ^d and sup_โโ f()โค G.
Part I: for estimatorย <ref>
By the bias and variance bounds in Lemmas <ref> and <ref> respectively, we have
[f(_T)-f^โ] โค 4ฮบ G^2log(eT)d/ฮฑ T+1/ฮฑ Tโ_t =1^T(_4h_t^2(ฮฒ-1)+1/td(_5h_t^2+_6dh_t^-2)),
where _4 = (ฮบ L)^2, _5 = 4ฮบ^2, and _6 = ฯ^2ฮบ/2. Substituting h_t = (d^2/t)^1/2ฮฒ, we deduce that
[f(_T)-f^โ] โค4dฮบ G^2log(eT)/ฮฑ T+1/ฮฑ Tโ_t=1^T((_4+_6)(d^2/t)^ฮฒ-1/ฮฒ+_5d^1+2/ฮฒt^-ฮฒ+1/ฮฒ).
It remains to bound the partial sum appearing in the above inequality.
It holds that โ_t=1^Tt^-ฮฒ-1/ฮฒโคฮฒ T^1/ฮฒ and โ_t=1^Tt^-1/ฮฒ-1โค 1+ฮฒ.
Therefore, Eq.ย (<ref>) can be further bounded as
[f(_T)-f^โ] โค_1(log(T)+1)d/ฮฑ T +_2/ฮฑ Td^2(ฮฒ-1)/ฮฒT^1/ฮฒ+_3/ฮฑ Td^1+2/ฮฒ,
where _1 = 4ฮบ G^2, _2 = ฮฒ(_3+_5), and _3 = (ฮฒ+1)_5.
Part II: for estimatorย <ref>
Using Lemma <ref> (bound on the bias) and Lemma <ref> (bound on the variance), we get
[f(_T)-f^โ] โค_1(log(T)+1)d/ฮฑ T+1/ฮฑ Tโ_t = 1^T(_4h_t^2(ฮฒ-1)d^1-ฮฒ+1/t(_5h_t^2+_6d^3h_t^-2)),
where _1 = _d,1dฮบ/d-2, _4 = (ฮบ_ฮฒโ L)^2, _5 = _2d^2ฮบ^2 /(d-2)(d+ 1), and _6 = ฯ^2ฮบ/2. Plugging in h_t = d^2+ฮฒ/2ฮฒt^-1/2ฮฒ, implies
[f(_T)-f^โ] โค_1(log(T)+1)d/ฮฑ T+1/ฮฑ Tโ_t=1^T((_4+_6)(d^2/t)^ฮฒ-1/ฮฒ+_5d^1+2/ฮฒt^-ฮฒ+1/ฮฒ)
.
With a similar argument as in the previous paragraph, we deduce that
[f(_T)-f^โ] โค_1(log(T)+1)d/ฮฑ T+_2/ฮฑ Td^2(ฮฒ-1)/ฮฒT^1/ฮฒ+ _3/ฮฑ Td^1+2/ฮฒ.
where we assigned _2 =ฮฒ(_4+_6), and _3=(ฮฒ+1)_5.
ยง PROOF OF THE LOWER BOUNDS
Set for brevity A_t = {(_i,y_i)_i=1^t,('_i,y'_i)_i=1^t} for tโฅ 1.
Without loss of generality, we assume that _1 and '_1 are fixed and we prove the result for any fixed
(_1,'_1).
For any f : ^d โ and any sequential strategy in the class ฮ _T, such that
_t = ฮฆ_t(A_t-1,_t) with y_t = f(_t) + ฮพ_t and '_t = ฮฆ'_t(A_t-1,_t) for tโฅ 2, with y'_t = f('_t) + ฮพ'_t for t = 1, โฆ, T,
we will denote by ๐_f the joint distribution of (A_T, (_i)_i=2^T).
We start with the following lemma that will be used in the proof of Theorem <ref>.
Let Assumption <ref> be satisfied. Then, for any functions f, f' : ^d โ such that f-f'_โ =max_โ^d|f()-f'()| โค v_0 it holds that
1/2H^2(๐_f , ๐_f') โค 1-(1-I_0/2f-f'_โ^2)^T.
Since, for each tโฅ 2,the noise (ฮพ_t,ฮพ'_t) is independent of (A_t-1,(_i)_i=1^t) and _t is independent of
A_t-1 we have
P_f = Fฬฃ_1(y_1 - f(_1),y'_1 - f('_1))โ_t = 2^T Fฬฃ_t(y_t - f(ฮฆ_t(A_t-1,_t)),y'_t - f(ฮฆ'_t(A_t-1,_t)) )P_t(_t)
where โ_t is the probability measure corresponding to the distribution of _t. Set for brevity Fฬฃ_f, 1โFฬฃ_1(y_1 - f(_1),y'_1 - f('_1)) and Fฬฃ_f, tโFฬฃ_t(y_t - f(_t),y'_t - f('_t)) P_t(_t), tโฅ 2.
With this notation, we have P_f = โ_t = 1^T Fฬฃ_f, t.
Using the definition of Hellinger distance we obtain
1 -1/2H^2(๐_f , ๐_f')
=
โซโ(P_fP_f')
=โ_t=1^Tโซโ(Fฬฃ_f, t)โ(Fฬฃ_f', t)
=
โ_t = 1^T1 - H^2Fฬฃ_f, t, Fฬฃ_f', t/2.
Finally, invoking Assumption <ref>, we get
โ_t = 1^T1 - H^2Fฬฃ_f, t, Fฬฃ_f', t/2 โฅmin_1 โค t โค T1 - H^2Fฬฃ_f, t, Fฬฃ_f', t/2^T
โฅ1 - I_0f-f'_โ^2/2^T,
which implies the lemma.
The proof follows the general lines given inย <cit.>, so that we omit some details that can be found in that paper.
We first assume that ฮฑโฅ T^-1/2+1/ฮฒ.
Let η_0 : ℝ → ℝ be an infinitely many times differentiable function such that
η_0(x) = 1 if |x| ≤ 1/4,   η_0(x) ∈ (0,1) if 1/4 < |x| < 1,   η_0(x) = 0 if |x| ≥ 1.
Set ฮท(x) = โซ_-โ^xฮท_0(ฯ)dฯ. Let ฮฉ = {-1,1}^d be the set of binary sequences of length d.
Consider the finite set of functions f_ฯ: โ^dโโ, = (ฯ_1, โฆ, ฯ_d)โฮฉ, defined as follows:
f_() = ฮฑ(1+ฮด) ^2/2 + โ_i=1^dฯ_irh^ฮฒฮท(u_ih^-1),
=(u_1,โฆ,u_d),
where ฯ_iโ{-1,1},
h =min((ฮฑ^2/d)^1/2(ฮฒ-1), T^-1/2ฮฒ) and r>0, ฮด >0 are fixed numbers that will be chosen small enough.
It is shown in <cit.> that if ฮฑโฅ T^-1/2+1/ฮฒ then f_โโฑ'_ฮฑ,ฮฒ
for r>0 and ฮด >0 small enough, and the minimizers of functions f_ belong to and are of the form
_^* = (x^โ(ฯ_1), โฆ, x^โ(ฯ_d)),
where x^โ(ฯ_i)=-ฯ_iฮฑ^-1(1+ฮด)^-1r h^ฮฒ-1.
For any fixed โฮฉ, we denote by ๐_,T the probability measure corresponding to the joint distribution of (A_T,(_i)_i=2^T) where y_t=f_ (_t)+ฮพ_t and y'_t=f_ ('_t)+ฮพ'_t with
(ฮพ_t,ฮพ'_t)'s satisfying Assumption <ref>,
and (_t,'_t)'s chosen by a sequential strategy in ฮ _T.
Consider the statistic
โ_โฮฉ_T-^*_.
Classical triangle inequality based arguments yield
max_โฮฉ๐_,T[_T-^*_^2] โฅฮฑ^-2r^2 h^2ฮฒ-2inf_max_โฮฉ๐_,T[ฯ(,) ].
Note that for all , 'โฮฉ such that ฯ(, ')=1 we have
max_โโ^d|f_()-f_'()|โค 2rh^ฮฒฮท(1) โค 2rT^-1/2ฮท(1).
Thus, choosing r small enough to satisfy 2rฮท(1)< min(v_0, I_0^-1/2) we ensure 2rT^-1/2ฮท(1) โค v_0 to apply Lemmaย <ref> and deduce for the considered , ' โฮฉ that
H^2(๐_,T , ๐_',T) โค 2(1 - (1 - (2T)^-1)^T)
โค
1,
where we have used the fact that 1-xโฅ 4^-x for 0<xโค 1/2.
Applying <cit.> we deduce that
inf_max_โฮฉ๐_, T [ฯ(,)]โฅ 0.3 d.
Therefore, we have proved that if ฮฑโฅ T^-1/2+1/ฮฒ there exist r>0 and ฮด >0
such that
max_โฮฉ๐_,T[_T-^*_^2]โฅ 0.3 dฮฑ^-2r^2 h^2ฮฒ-2 = 0.3 r^2min(1,
d/ฮฑ^2T^-ฮฒ-1/ฮฒ).
This implies (<ref>) for ฮฑโฅ T^-1/2+1/ฮฒ.
In particular, if ฮฑ=ฮฑ_0:= T^-1/2+1/ฮฒ the bound (<ref>) is of the order min(1, dT^-1/ฮฒ). Then for 0<ฮฑ<ฮฑ_0 we also have the bound of this order since the classes โฑ_ฮฑ,ฮฒ are nested: โฑ_ฮฑ_0,ฮฒโโฑ_ฮฑ,ฮฒ. This completes the proof of (<ref>).
We now prove (<ref>). From (<ref>) and ฮฑ-strong convexity of f we get that, for ฮฑโฅ T^-ฮฒ+2/2ฮฒ,
max_โฮฉ๐_,T[f(_T)-f(x_^*)]โฅ0.15 r^2min(ฮฑ,
d/ฮฑT^-ฮฒ-1/ฮฒ).
This implies (<ref>) in the zone ฮฑโฅ T^-ฮฒ+2/2ฮฒ=ฮฑ_0 since for such ฮฑ we have
min(ฮฑ,
d/ฮฑT^-ฮฒ-1/ฮฒ)
=
min(max(ฮฑ, T^-ฮฒ+2/2ฮฒ), d/โ(T), d/ฮฑT^-ฮฒ-1/ฮฒ).
On the other hand,
min(ฮฑ_0,
d/ฮฑ_0T^-ฮฒ-1/ฮฒ)
=
min(T^-ฮฒ+2/2ฮฒ, d/โ(T)).
The same lower bound holds for 0<ฮฑ<ฮฑ_0 by the nestedness argument that we used to prove (<ref>) in the zone 0<ฮฑ<ฮฑ_0. Thus, (<ref>) follows.
|
http://arxiv.org/abs/2306.12526v1
|
20230621192117
|
Quantum Weight Enumerators for Real Codes with $X$ and $Z$ Exactly Transversal
|
[
"Eric Kubischta",
"Ian Teixeira",
"J. Maxwell Silvester"
] |
quant-ph
|
[
"quant-ph"
] |
[email protected]
[email protected]
[email protected]
In this note we show that the weight enumerators of a real quantum error correcting code with X and Z exactly transversal must satisfy certain identities. One consequence of these identities is that if the code is error detecting then it is automatically error correcting for free; implying a relationship between transversality and code distance.
Quantum Weight Enumerators for Real Codes
with X and Z Exactly Transversal
J. Maxwell Silvester
=============================================================================
ยง INTRODUCTION
Let ((n,K,d)) denote an n-qubit quantum error correcting code with a codespace of dimension K and with a distance of d.
In this paper we will study quantum weight enumerators <cit.> for K = 2 codes under the assumption that the codes are real and the Pauli gates X = ( 0 1 ; 1 0 ) and Z = ( 1 0 ; 0 −1 ) are exactly transversal. We say a code is real if the orthogonal projector onto the codespace is real, Π^* = Π. Equivalently a code is real if there exists a basis of codewords that has all real coefficients with respect to the computational basis.
For a given code we say that a gate U is exactly transversal if U^โ n on the physical space implements logical U on the code space.
These assumptions seem restrictive but in fact many familiar codes are real and have X and Z exactly transversal. Some examples include: the [[5,1,3]] code, the [[7,1,3]] Steane code, Shor's [[9,1,3]] code, the [[11,1,5]] code, the [[15,1,3]] code and indeed the whole family of [[2^r-1,1,3]] quantum Reed-Muller codes.
ยง PRELIMINARIES
ยง.ยง Weight Enumerators
Let Π be the code projector. The weight enumerator and the dual weight enumerator are defined in <cit.> as
A_i = (1/K²) Σ_{E ∈ ℰ_i} [Tr(EΠ)]²,
B_i = (1/K) Σ_{E ∈ ℰ_i} Tr(EΠEΠ),
where ℰ_i are the Pauli errors with weight i. Usually "weight enumerator" refers to a polynomial with these coefficients, but we will instead refer to the coefficient vectors A = (A_0, …, A_n) and B = (B_0, …, B_n) as the weight enumerators.
Two codes are said to be equivalent if their code spaces are related by a non-entangling gate, i.e., a gate from U(2)^โ nโ S_n, the local unitaries together with permutations. The weight enumerators form a code invariant.
<cit.>
Equivalent codes have the same weight enumerators A and B.
One of the most useful properties of weight enumerators is computing distance.
<cit.>
The distance of an ((n,K)) code is d iff A_i = B_i for all i < d.
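As a concrete illustration of these definitions (a brute-force sketch that is only feasible for small n; the projector Pi must be supplied by the user, e.g., built from a known stabilizer group), the coefficients A_i and B_i can be computed by enumerating all Pauli errors.

import itertools
import numpy as np

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def weight_enumerators(Pi, n, K):
    # A_i = (1/K^2) sum_{E of weight i} Tr(E Pi)^2,   B_i = (1/K) sum Tr(E Pi E Pi)
    A, B = np.zeros(n + 1), np.zeros(n + 1)
    for labels in itertools.product("IXYZ", repeat=n):
        E = PAULI[labels[0]]
        for s in labels[1:]:
            E = np.kron(E, PAULI[s])
        w = sum(s != "I" for s in labels)
        A[w] += np.real(np.trace(E @ Pi)) ** 2 / K ** 2
        B[w] += np.real(np.trace(E @ Pi @ E @ Pi)) / K
    return A, B
# By the theorem above, the distance d is the smallest i > 0 with A_i != B_i.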
ยง.ยง Exactly Transversal
Exactly transversal X and Z implies n is odd.
The commutator of X^โ n and Z^โ n is (-1)^n. The commutator of logical X and logical Z must be -1. Thus n must be odd.
ยง.ยง Weights
The Hamming weight of a bit string is the number of 1's it has. For example, the bit string 000 has Hamming weight 0 while 011,101,110 all have have Hamming weight 2. So
000,011,101,110
are all of the even weight bit strings of length 3. And
111,100,010,001
are all the odd weight bit strings of length 3.
The weight of a computational basis ket is the Hamming weight of the bit string it corresponds to. So
|000โฉ,|011โฉ,|101โฉ,|110โฉ
are all the even weight computational basis kets for 3 qubits. And
|111โฉ,|100โฉ,|010โฉ,|001โฉ
are all the odd weight kets.
Now we turn our attention to Pauli operators.
Up to a global phase, every n qubit Pauli operator E is a tensor product of the form E = g_1 โโฏโ g_n where each g_i is a 1-qubit Pauli gate g_i โ{ I ,X, Y ,Z }. Define the weight of E, denoted wt(E), to be the number of g_i โ{ X,Y,Z } in this tensor product.
Define the X weight, wt_X(E), to be the number of g_i โ{ X,Y } and define the Z weight, wt_Z(E), to be the number of g_i โ{ Z,Y }. Define n_Y(E) to be the number of g_i=Y. Similarly, define n_X(E) to be the number of g_i=X and define n_Z(E) to be the number of g_i=Z.
The following lemma shows that n_Y(E) is a function of the other three weights.
n_Y = wt_X + wt_Z - wt.
Suppose the total weight of a Pauli string is wt. There are n_X=wt_X-n_Y many g_i=X factors, n_Z=wt_Z-n_Y many g_i=Z, and n_Y many g_i=Y. These must add up to the total weight wt=n_X+n_Y+n_Z. Thus wt = (wt_X- n_Y) + n_Y + (wt_Z - n_Y). Rearranging we have the claim.
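These bookkeeping definitions are simple to compute; the sketch below (an illustration, with Pauli strings written as plain label strings and phases ignored) returns all four weights and checks the identity of the lemma.

def pauli_weights(p):
    # p is a Pauli string such as "XIYZZ"; signs/phases are ignored.
    wt = sum(g != "I" for g in p)
    wt_x = sum(g in "XY" for g in p)
    wt_z = sum(g in "ZY" for g in p)
    n_y = p.count("Y")
    assert n_y == wt_x + wt_z - wt      # the identity proved above
    return wt, wt_x, wt_z, n_y

print(pauli_weights("XIYZZ"))           # (4, 2, 3, 1)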
ยง.ยง Symmetric operators
A matrix M is said to be symmetric if M = M^T and is said to be anti-symmetric if M = - M^T. The Pauli matrices I, X, and Z are symmetric and Y is anti-symmetric.
Let E = g_1 โโฏโ g_n be a Pauli string. Then E is symmetric iff n_Y(E) is even and E is antisymmetric iff n_Y(E) is odd.
Let E be an anti-symmetric Pauli string, that is, n_Y(E) is odd. Suppose |Ψ⟩ is real. Then
⟨Ψ|E|Ψ⟩ = 0.
We have
⟨Ψ|E|Ψ⟩ =^{(1)} (⟨Ψ|E|Ψ⟩)^T
= |Ψ⟩^T E^T ⟨Ψ|^T
=^{(2)} |Ψ⟩^† E^T ⟨Ψ|^†
= ⟨Ψ|E^T|Ψ⟩
=^{(3)} −⟨Ψ|E|Ψ⟩.
Line (1) uses the fact that every number is its own transpose, line (2) uses the realness of |Ψ⟩, and line (3) uses that E is antisymmetric.
Thus we have ⟨Ψ|E|Ψ⟩ = −⟨Ψ|E|Ψ⟩ and the claim follows.
ยง MAIN RESULT
In this section we will assume we have a real code with X and Z exactly transversal. The code space has dimension K=2 so the code projector can be written as
Π = |0⟩⟨0| + |1⟩⟨1|.
wt_X(E) = odd ⟹ ⟨0|E|0⟩ = 0 = ⟨1|E|1⟩,
wt_X(E) = even ⟹ ⟨0|E|1⟩ = 0 = ⟨1|E|0⟩.
First notice that because Z is exactly transversal then in the expansion of |0โฉ with respect to the computational basis only even weight kets will have nonzero coefficients. Similarly, expanding |1โฉ in the computational basis only odd weight kets will have nonzero coefficients. When an error E has wt_X(E) odd, then E changes the parity of whatever basis ket it acts on, whereas if wt_X(E) is even, then E preserves parity. A linear combination of even weight kets is always orthogonal to a linear combination of odd weight kets. The claim follows.
In a similar vein we have the following.
wt_Z(E) = even ⟹ ⟨0|E|0⟩ = ⟨1|E|1⟩.
We can write
⟨1|E|1⟩ = ⟨0|XEX|0⟩
=^{(1)} (−1)^{wt_Z(E)} ⟨0|EXX|0⟩
= (−1)^{wt_Z(E)} ⟨0|E|0⟩
= ⟨0|E|0⟩.
Line (1) uses XE = (−1)^{wt_Z(E)} EX (equivalently, XEX = (−1)^{wt_Z(E)} E).
n_Y(E) even and wt_Z(E) odd ⟹ ⟨0|E|1⟩ = 0,
n_Y(E) odd and wt_Z(E) even ⟹ ⟨0|E|1⟩ = 0.
Recall that n_Y(E) even is equivalent to E symmetric and n_Y(E) odd is equivalent to E antisymmetric.
We have
⟨0|E|1⟩ = ⟨0|EX|0⟩
= (−1)^{wt_Z(E)} ⟨0|XE|0⟩
= (−1)^{wt_Z(E)} ⟨1|E|0⟩
=^{(1)} (−1)^{wt_Z(E)} (⟨1|E|0⟩)^T
= (−1)^{wt_Z(E)} |0⟩^T E^T ⟨1|^T
=^{(2)} (−1)^{wt_Z(E)} |0⟩^† E^T ⟨1|^†
= (−1)^{wt_Z(E)} ⟨0|E^T|1⟩.
In line (1), we use that each number is its own transpose, while line (2) uses the realness of the codewords.
If E is symmetric then E^T = E. It follows in this case that if wt_Z(E) is odd then ⟨0|E|1⟩ = 0.
If E is anti-symmetric then E^T = −E. In this case ⟨0|E|1⟩ = (−1)^{1+wt_Z(E)} ⟨0|E|1⟩. It follows that if wt_Z(E) is even then ⟨0|E|1⟩ = 0.
Using these lemmas, we can rewrite the A_i.
A_i = Σ_{E ∈ ℰ_i : wt_Z(E) even, wt_X(E) even} |⟨0|E|0⟩|².
Start with the A-type weight enumerators. Using the fact that XEX = (−1)^{wt_Z(E)} E we have the following:
Tr(EΠ) = Tr(E(|0⟩⟨0| + |1⟩⟨1|))
= Tr(E(|0⟩⟨0| + X|0⟩⟨0|X))
= Tr(E|0⟩⟨0|) + Tr(EX|0⟩⟨0|X)
= ⟨0|E|0⟩ + ⟨0|XEX|0⟩
= ⟨0|E|0⟩ + (−1)^{wt_Z(E)} ⟨0|E|0⟩
= (1 + (−1)^{wt_Z(E)}) ⟨0|E|0⟩.
If wt_Z(E) is odd then this term cancels, and if wt_Z(E) is even then we get a factor of 2. On the other hand, from <ref> we know that when wt_X(E) is odd then ⟨0|E|0⟩ = 0, and so we must also have that wt_X(E) is even. It follows that
A_i = (1/4) Σ_{E ∈ ℰ_i} |Tr(EΠ)|² = Σ_{E ∈ ℰ_i : wt_Z(E) even, wt_X(E) even} |⟨0|E|0⟩|².
We now present our two main results.
For a real code with X,Z
exactly transversal, A_i = 0 for all odd i.
In <ref> we proved the following:
A_i = โ_E โโฐ_i:
wt_Z(E) even
wt_X(E) even |0E0|^2.
wt_X(E) and wt_Z(E) are even for all terms in the sum. Since wt(E) = i is assumed odd then from <ref> we have that n_Y(E) is odd. This implies that E is antisymmetric and so by <ref> 0E0 = 0. The claim follows.
For a real code with X,Z exactly transversal, A_i = B_i for all even i.
The A enumerators are described in <ref>. Let's rewrite the B coefficients as well:
Eฮ E ฮ
= E (0 + X 0 X) E (0 + X 0 X)
= E 0 E 0 + EX 0 X E 0
+ E 0 E X 0 X
+ EX 0 X E X 0 X
= 0E00E0 + 0E11E0
+ 1E00E1 +1E11E1
1 0E00E0 + 0E11E0
+ 1E00E1 +0E00E0
2 2|0E0|^2 + 2|0E1|^2.
Line (1) uses XEX = (-1)^wt_Z(E) E twice. Line (2) uses hermiticity of E, E=E^โ .
Thus the B weight enumerators are
B_i = 1/2โ_E โโฐ_iE ฮ E ฮ
= โ_E โโฐ_i |0E0|^2 + |0E1|^2
4โ_E โโฐ_i :
wt_X(E) = even |0E0|^2 + โ_E โโฐ_i :
wt_X(E) = odd |0E1|^2
= โ_E โโฐ_i :
wt_X(E) = even
wt_Z(E) = even |0E0|^2 + โ_E โโฐ_i :
wt_X(E) = even
wt_Z(E) = odd |0E0|^2 _C_i
+ โ_E โโฐ_i :
wt_X(E) = odd |0E1|^2 _D_i.
In line (4) we use <ref>. To be sure, we have
B_i = A_i + ( C_i + D_i ).
However, the next two lemmas show that C_i = D_i = 0 when i is even.
C_i = 0 for all even i.
For every term in the sum wt_X(E) is even and wt_Z(E) is odd. Since wt(E) = i is even then from <ref> we have that n_Y(E) is odd. This implies that E is anti-symmetric and so, by <ref> , 0E0 = 0. The claim follows.
D_i = 0 for all i even.
wt_X(E) is odd for every term in the sum. Since wt(E) = i is even then the difference wt_X - wt is odd. Combining this with <ref> we conclude that n_Y and wt_Z have opposite parity. So by <ref> we have that 0E1 =0 for every term in the sum. The claim follows.
This completes the proof of the theorem.
We immediately have the following consequence.
A real code with X and Z exactly transversal must have odd distance d.
In other words, every error detecting code is actually error correcting.
ยง APPLYING THE MAIN RESULT TO STABILIZER CODES
A stabilizer code is real if and only if n_Y is even for all the stabilizer generators.
Recall that a code is real if and only if the code projector is real, ฮ =ฮ ^*. For a stabilizer code, ฮ is a uniform sum over the stabilizer. So a code is real if and only if all Pauli gates in the stabilizer are real. Since I,X,Z are real while Y^*=-Y a Pauli stabilizer E is real if and only if wt_Y(E) is even. The result follows.
An [[n,1,d]] stabilizer code has X and Z exactly transversal (or at least is equivalent to such a stabilizer code) if and only if n is odd and all stabilizer generators have wt_X even and wt_Z even.
Suppose X and Z are exactly transversal. Then by <ref> we must have that n is odd. Furthermore, all stabilizer generators E commute with X^โ n, so wt_Z(E) is even, and commute with Z^โ n, so wt_X(E) is even.
Now consider the reverse implication. Since all stabilizer generators are assumed to have wt_X even and wt_Z even then X^โ n and Z^โ n must be in the normalizer N(S). Since n is odd then X^โ n and Z^โ n anticommute and so X^โ n , Z^โ nโ N(S) โ S are not in the stabilizer and they generate all the logical operations in N(S)/S. In this case, there will always exist a Clifford gate C such that C^โ n gives an equivalent stabilizer code with X and Z exactly transversal.
For an example of the situation described at the end of <ref>, recall that, in the most common version of the Shor code, X^โ 9 implements logical Z and Z^โ 9 implements logical X. Then applying H^โ 9, where H= 1/โ(2)[ 1 1; 1 -1 ] is the Hadamard gate, yields an equivalent stabilizer code with Xand Z exactly transversal.
A stabilizer code is real and has X and Z exactly transversal (or at least is equivalent to such a stabilizer code) if and only if n is odd and all stabilizer generators have wt_X,wt_Z,n_Y,n_X,n_Z all even.
The result follows from <ref> and <ref>.
<ref> provides an easy condition for when we can apply <ref> to stabilizer codes. So we obtain the following proposition.
Suppose we have an [[n,1,d]] stabilizer code with n odd. Furthermore suppose that the number of X's, Y's, and Z's in each stabilizer generator is even. Then A_i = 0 for odd i and A_i = B_i for even i. It follows that d is odd.
By <ref> such a code must be (at least equivalent to) a real code with X and Z exactly transversal. Since equivalent codes have the same weight enumerator, by <ref>, the result follows.
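In practice the hypotheses of this proposition are easy to verify from a generator list; a small sketch (illustrative only, with generators given as unsigned Pauli label strings) is shown below.

def proposition_applies(generators):
    # n odd, and every generator has an even number of X's, Y's, and Z's.
    n = len(generators[0])
    if n % 2 == 0:
        return False
    return all(g.count("X") % 2 == 0 and g.count("Y") % 2 == 0 and g.count("Z") % 2 == 0
               for g in generators)

# e.g., the [[7,1,3]] Steane generators pass the check:
steane = ["IIIXXXX", "IXXIIXX", "XIXIXIX", "IIIZZZZ", "IZZIIZZ", "ZIZIZIZ"]
print(proposition_applies(steane))       # True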
ยง ACKNOWLEDGMENTS
We would like to thank Michael Gullans and Victor Albert for helpful conversations. This research was supported in part by NSF QLCI grant OMA-2120757.
ยง WEIGHT ENUMERATORS FOR VARIOUS REAL CODES WITH X AND Z EXACTLY TRANSVERSAL
Notice that each code in the table has A_i = 0 for odd i and A_i = B_i for even i. Also notice that n is odd and d is odd. We have included the ((11,2,3)) non-additive code to affirm that the results above hold for non-additive codes. [Interestingly, this code seems to be the first non-additive error correcting code, despite the claim made in <cit.>.]
|
http://arxiv.org/abs/2306.01630v1
|
20230602154926
|
A Conditional Normalizing Flow for Accelerated Multi-Coil MR Imaging
|
[
"Jeffrey Wen",
"Rizwan Ahmad",
"Philip Schniter"
] |
eess.IV
|
[
"eess.IV",
"cs.CV"
] |
[
A Conditional Normalizing Flow for Accelerated Multi-Coil MR Imaging
equal*
Jeffrey Wenosu_ece
Rizwan Ahmadosu_bme
Philip Schniterosu_ece
osu_eceDept. of ECE, The Ohio State University, Columbus, OH 43210, USA.
osu_bmeDept. of BME, The Ohio State University, Columbus, OH 43210, USA
Jeffrey [email protected]
Machine Learning, MRI, Normalizing Flow, Inverse Problems
0.3in
]
Accelerated magnetic resonance (MR) imaging attempts to reduce acquisition time by collecting data below the Nyquist rate.
As an ill-posed inverse problem, many plausible solutions exist, yet the majority of deep learning approaches generate only a single solution.
We instead focus on sampling from the posterior distribution, which provides more comprehensive information for downstream inference tasks.
To do this, we design a novel conditional normalizing flow (CNF) that infers the signal component in the measurement operator's nullspace, which is later combined with measured data to form complete images.
Using fastMRI brain and knee data, we demonstrate fast inference and accuracy that surpasses recent posterior sampling techniques for MRI.
Code is available at <https://github.com/jwen307/mri_cnf>
ยง INTRODUCTION
Magnetic resonance imaging (MRI) is a routine diagnostic imaging tool that has the potential to provide high-quality soft-tissue images without exposure to ionizing radiation.
However, MRI exams are generally time-consuming, which reduces throughput, compromises patient comfort, and increases the likelihood of artifacts from patient motion.
Scan time can be reduced by sampling below the Nyquist rate, but this makes the image reconstruction process more challenging.
Hence, recovering high-accuracy images from highly subsampled MRI scans has become an active area of research <cit.>.
Many approaches have been proposed to recover MR images from subsampled measurements.
Parallel imaging, which is available on all commercial scanners, takes advantage of the availability of multiple receiver coils.
After estimating coil-sensitivity maps or interpolation kernels, methods like SENSE <cit.> and GRAPPA <cit.> can use subsampled data from multiple coils to remove aliasing artifacts in the final reconstruction.
However, parallel imaging alone can typically allow only two- to three-fold acceleration of the acquisition process.
For higher acceleration, methods based on compressed-sensing (CS) have been proposed <cit.>.
The CS methods are framed as iteratively minimizing the sum of a data-fidelity term and a regularization term, where the regularization term incorporates prior knowledge about the images.
The prior knowledge could be that the true images are sparse in some transform domain, as in traditional CS, or that the true images are preserved by some denoising function, as in โplug-and-playโ recovery <cit.>.
Deep neural networks have also been proposed for MR image recovery, based on end-to-end approaches like <cit.> or algorithmic unrolling <cit.>.
Yet another approach, known as compressed sensing with a generative model (CSGM) <cit.>, trains a deep image generator and then optimizes its input to give the image that, after application of the forward model, best matches the measurements.
Although they achieve high reconstruction quality, the aforementioned methods provide only a point estimate.
Yet, accelerated MRI is an ill-posed inverse problem, where there exist many possible reconstructions that are consistent with a given prior and set of subsampled measurements.
Since small variations in image content can impact the final diagnosis, it is crucial for radiologists to know whether a visual structure is truly reflective of the patient anatomy or merely an imaging artifact.
Problems of this form fall into the realm of uncertainty quantification (UQ) <cit.>.
One approach that facilitates UQ is Bayesian imaging, where the goal is not to compute a single โgoodโ image estimate but rather to sample from the posterior distribution.
The availability of a large batch of posterior samples enables many forms of UQ.
For example, a simple approach is to generate the pixel-wise standard-deviation map, which quantifies which pixels are more trustworthy.
A more involved approach is to construct a hypothesis test for the absence of a particular (multi-pixel) visual structure <cit.>.
In this paper, we focus on the task of sampling from the posterior, which facilitates future work that uses those samples for uncertainty quantification, adaptive sampling <cit.>, counterfactual diagnosis <cit.>, or other applications.
There exist several deep-learning based approaches to sample from the posterior, including those based on
conditional generative adversarial networks (CGANs) <cit.>,
conditional variational autoencoders (CVAEs) <cit.>,
conditional normalizing flows (CNFs) <cit.>,
and score/Langevin/diffusion-based approaches <cit.>.
In this paper, we focus on the CNF approach.
Compared to the other methods, CNFs yield rapid inference and require only simple, likelihood-based training.
In a recent super-resolution (SR) contest <cit.>, a CNF (by Song et al. Song:CVPRW:22) won, beating all CGAN, CVAE, and diffusion-based competitors.
Inspired by the success of CNFs in SR,
we design
the first CNF for accelerated multi-coil MRI.
Previous applications of CNFs to MRI <cit.> showed competitive results but were restricted to single-coil recovery of magnitude images.
As the vast majority of modern MRI scanners capture multi-coil data, the extension to multi-coil, complex-valued data is crucial for real-world adoption. However, the order-of-magnitude increase in dimensionality makes this transition non-trivial.
For this purpose, we propose a novel CNF that infers only the signal component in the nullspace of the measurement operator and combines its output with the measured data to generate complete images.
Using fastMRI brain and knee data, we demonstrate that our approach outperforms existing posterior samplers based on CGANs <cit.> and MRI-specific score/Langevin-based approaches <cit.> in almost all accuracy metrics, while retaining fast inference and requiring minimal hyperparameter tuning.
ยง BACKGROUND
ยง.ยง Measurement Model
In MRI, measurements of the D-pixel true image iโโ^D are collected in the spatial Fourier domain, known as the โk-space.โ
In a multi-coil system with C coils, measurements from the cth coil can be written as
k_c = P F S_c i + ε_c ∈ ℂ^M,
where
P ∈ ℝ^{M×D} is a sampling matrix containing M rows of the D×D identity matrix I,
F is the D×D 2D unitary discrete Fourier transform (DFT) matrix,
S_c ∈ ℂ^{D×D} is the coil-sensitivity map of the cth coil,
and
ε_c ∈ ℂ^M is measurement noise.
We will assume that {S_c}_{c=1}^C have been obtained from ESPIRiT <cit.>, in which case Σ_{c=1}^C S_c^H S_c = I.
In the case of single-coil MRI, C = 1 and S_1 = I.
In the case of single-coil MRI, C=1 and Sโ_1=Iโ.
We now rewrite the model in terms of the "coil images" x_c := S_c i and their corresponding "zero-filled" estimates y_c := F^H P^⊤ k_c, and then stack all the coils together via x := [x_1, …, x_C] and y := [y_1, …, y_C] to obtain
y = Ax + ε,
with ε := [(F^H P^⊤ ε_1), …, (F^H P^⊤ ε_C)] and forward operator
A = blkdiag{F^H P^⊤ P F, …, F^H P^⊤ P F}.
To perform image recovery, one can first compute y, then estimate x from y, and finally either "coil-combine" to yield a complex-valued image estimate
î = [S_1^H, …, S_C^H] x̂
or perform root-sum-of-squares (RSS) reconstruction to obtain a magnitude-only image estimate
|î| = √(Σ_{c=1}^C |x̂_c|²).
In the โfully sampledโ case, M=D and so yโ=xโ+ฮตโ.
But fully sampled acquisition is very slow, and so we are interested in accelerated MRI, where one collects M < D measurements per coil to save time.
This gives an โacceleration factorโ of R D/M, but it makes Aโ rank deficient.
In this latter case, accurate recovery of xโ requires the use of prior information about xโ, such as the knowledge that xโ is a vector of MRI coil images.
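For reference, a minimal NumPy sketch of this multi-coil model (an assumption-laden illustration using orthonormal FFTs and ignoring noise; the array shapes and the sampling mask are placeholders) is given below.

import numpy as np

def forward_zero_filled(coil_images, mask):
    # coil_images: (C, H, W) complex array of x_c = S_c i;  mask: (H, W) in {0, 1}.
    k = np.fft.fft2(coil_images, norm="ortho")      # F S_c i for each coil
    return np.fft.ifft2(mask * k, norm="ortho")     # zero-filled coil images y_c

def rss(coil_images):
    # root-sum-of-squares magnitude combine over the coil dimension
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))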
ยง.ยง Posterior Sampling
In the case of MRI, the posterior distribution that we would ultimately like to sample from is p_i|k(ยท|kโ), where kโ[kโ_1,โฆ,kโ_C].
Equivalently, we could consider p_i|y(ยท|yโ) since yโ and kโ contain the same information.
Another option is to sample from p_x|y(ยท|yโ) and then use (<ref>) or (<ref>) to combine coil images into a single image.
We take the latter approach.
For CNFs and CGANs, posterior sampling is accomplished by designing a neural network that maps samples from an easy-to-generate latent distribution (e.g., white Gaussian) to the target distribution (i.e., the distribution of xโ given yโ, with density p_x|y).
Once that network is trained, sample generation is extremely fast.
For Langevin dynamics, an algorithm is run for hundreds or thousands of iterations to generate each sample, and each iteration involves calling a neural network.
Consequently, the inference time is much longer than that of CNFs and CGANs.
ยง.ยง Conditional Normalizing Flows
Normalizing flows (NF) <cit.> have emerged as powerful generative models capable of modeling complex data distributions.
Normalizing flows learn an invertible mapping between a target data distribution and a simple latent distribution, generally a Gaussian.
More concretely, for a latent sample zโ drawn from the latent distribution p_z, the normalizing flow defines an invertible transformation f_ฮธโ(ยท): ^Qโ^Q.
This transformation is parameterized by ฮธโ, and xโ = f_ฮธโ(zโ) defines a sample in the target data domain.
This mapping of the latent distribution induces a probability in the target data domain with a probability density derived from the change-of-variable formula
p̂_x(x; θ) = p_z(f_θ^{−1}(x)) |det(∂ f_θ^{−1}(x)/∂x)|,
where det(·) denotes the determinant.
The goal of the normalizing flow is to approximate the underlying data distribution p_x with pฬ_x(ยท;ฮธโ).
Given a set of data samples {xโi}_i=1^N, the parameters ฮธโ can be fit using a maximum likelihood loss
L(ฮธโ)
= โ_i=1^Nlnpฬ_x(xโi;ฮธโ)
= โ_i=1^Nln p_z(f_ฮธโ^-1(xโi)) + ln| (โ f_ฮธโ^-1(xโi)/โxโ^(i)) |
Once the training is complete, samples from the target distribution can be rapidly generated by drawing samples from the latent distribution and passing them through the normalizing flow f_ฮธโ.
It is worth noting that maximizing L(ฮธโ) is equivalent to minimizing the Kullback-Leibler (KL) divergence between pฬ_x(ยท;ฮธโ) and p_x <cit.>, which aligns with the goal of approximating p_x with pฬ_x(ยท;ฮธโ).
The maximum-likelihood loss provides stable training with minimal hyperparameter tuning and has been shown to be robust to mode collapse.
Conditional normalizing flows (CNFs) <cit.> generalize normalizing flows by adding a conditioning signal yโ.
With the CNF denoted as h_ฮธโ(ยท,ยท): ^Qร^Q โ^Q, the forward process from the latent domain to the data domain is given by xโ = h_ฮธโ(zโ,yโ).
For complex-valued, multi-coil MRI, we have Q=2CD.
The inclusion of yโ alters the objective of the CNF to approximating the unknown posterior distribution p_x|y(ยท|yโ) with pฬ_x|y(ยท|yโ;ฮธโ).
As before, the change-of-variable formula implies the induced distribution
pฬ_x|y(xโ|yโ;ฮธโ) = p_z(h_ฮธโ^-1(xโ,yโ)) |( โ h_ฮธโ^-1(xโ,yโ)/โxโ) |
,
where h_ฮธโ^-1 refers to the inverse mapping of h_ฮธโ with respect to its first argument.
Given a dataset {(xโi, yโi)}_i=1^N, the maximum likelihood loss can be utilized to optimize the parameters ฮธโ
L(ฮธโ)
= โ_i=1^Nlnpฬ_x|y(xโi | yโi;ฮธโ)
= โ_i=1^Nln p_z(h_ฮธโ^-1(xโi,yโi))
+ln| ( โ h_ฮธโ^-1(xโi,yโi)/โxโi) |
CNFs have shown promising performance in solving inverse problems, such as super-resolution <cit.>, making it an exciting avenue of exploration for accelerated MRI.
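Training therefore amounts to minimizing the negative of the conditional log-likelihood above. A minimal PyTorch-style sketch is shown below (hedged: it assumes the flow object exposes an inverse(x, y) method returning the latent code and the per-sample log-determinant, which is an interface assumption rather than a detail given in the text).

import math
import torch

def cnf_nll_loss(flow, x_batch, y_batch):
    # Negative conditional log-likelihood with a standard Gaussian latent prior.
    # Assumes flow.inverse(x, y) -> (z, log_det), log_det = log|det(d h^{-1}/d x)| per sample.
    z, log_det = flow.inverse(x_batch, y_batch)
    dim = z[0].numel()
    log_pz = -0.5 * (z ** 2).flatten(1).sum(dim=1) - 0.5 * dim * math.log(2 * math.pi)
    return -(log_pz + log_det).mean()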
Denker et al. Denker:JI:21 developed a CNF for single-coil, magnitude-only knee images.
This study showed promising initial results, but the limited scope did not demonstrate performance in the more realistic multi-coil, complex-valued domain.
As this transition increases the dimensionality by an order of magnitude, non-trivial architectural changes are required.
In this paper, we build on the latest advances in CNFs to create a method that is capable of generating high-quality posterior samples of multi-coil, complex-valued MRI images.
ยง METHOD
Our CNF consists of two networks, a conditioning network g_ฮธโ and a conditional flow model h_ฮธโ.
The conditioning network takes the vector of zero-filled (ZF) coil-images yโ as input and produces features that are used as conditioning information by the flow model h_ฮธโ.
Aided by the conditioning information, h_ฮธโ learns an invertible mapping between samples in the latent space and those in the image space.
Using the notation of Sec. <ref>, our overall CNF takes the form
hฬ
_ฮธโ(zโ,yโ) h_ฮธโ(zโ,g_ฮธโ(yโ)).
Recently, advancements of CNFs in the super-resolution literature have revealed useful insights for more general inverse problems.
First, Lugmayr et al. Lugmayr:ECCV:20 suggested the use of a pretrained, state-of-the-art point-estimate network for the conditioning network g_ฮธโ.
This network is then trained jointly with h_ฮธโ using the loss in (<ref>).
This approach provides a functional initialization of g_ฮธโ and allows g_ฮธโ to learn to provide features that are useful for the maximum-likelihood training objective.
We utilize a UNet from <cit.> for g_ฮธโ since it has been shown to perform well in accelerated MRI.
We first pre-train g_ฮธโ for MRI recovery, and later we jointly train g_ฮธโ and h_ฮธโ together.
Song et al. Song:CVPRW:22 demonstrated the benefits of using โfrequency-separation" when training a CNF for super-resolution.
The authors argue that the low-resolution conditional image already contains sufficient information about the low-frequency components of the image, so the CNF can focus on recovering only the high-frequency information.
The CNF output is then added to an upsampled version of the conditional image to yield an estimate of the full image.
We now generalize the frequency-separation idea to arbitrary linear models of the form y = Ax + ε from (<ref>) and apply the resulting procedure to MRI.
Notice that (<ref>) implies
A^+ y = A^+ A x + A^+ ε ,
where (·)^+ denotes the pseudo-inverse.
Here, A^+ A x is recognized as the projection of x onto the row-space of A, which we will refer to as the "measured space."
Then
u ≜ (I - A^+ A) x
would be the projection of x onto its orthogonal complement, which we refer to as the "nullspace."
Assuming that the nullspace has dimension > 0, we propose to construct an estimate x̂ of x with the form
x̂(z, y) = (I - A^+ A) h̄_θ(z, y) + A^+ y ,
where h̄_θ(z, y) is our CNF-generated estimate of u and the (I - A^+ A) in (<ref>) strips off any part of h̄_θ(z, y) that has leaked into the measured space.
A similar approach was used in <cit.> for point estimation.
Given training data {(x^(i), y^(i))}_i=1^N, the CNF h̄_θ(·,·) is trained to map code vectors z^(i) ∼ p_z to the nullspace projections
u^(i) ≜ (I - A^+ A) x^(i)
using the measured data y^(i) as the conditioning information.
As a result of (<ref>), the reconstructions x̂ agree with the measurements y in that A x̂ = y.
However, this also means that x̂ inherits the noise ε corrupting y, and so this data-consistency procedure is best used in the low-noise regime.
In the presence of significant noise, the dual-decomposition approach <cit.> may be more appropriate.
In the accelerated MRI formulation (<ref>)-(<ref>), the matrix A is itself an orthogonal projection matrix, so that, in (<ref>),
I - A^+ A = blkdiag{F^H P̄^⊤ P̄ F, …, F^H P̄^⊤ P̄ F},
where P̄ ∈ ℝ^(D-M)×D is the sampling matrix for the non-measured k-space.
Also, y is in the row-space of A, so
A^+ y = y
in (<ref>).
Figure <ref> illustrates the overall procedure, using โdata consistencyโ to describe (<ref>) and โnullspace projectionโ to describe (<ref>).
In the ablation study (Sec. <ref>), we quantitatively demonstrate the improvements gained from designing our CNF to estimate only the nullspace component.
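For Cartesian MRI, the measured-space projection reduces to masking in k-space, so the data-consistency step can be sketched as follows; the tensor shapes and function name are illustrative assumptions, not the paper's exact code.

```python
import torch

def data_consistency(x_gen, y_zero_filled, mask):
    """x_gen:         CNF output as coil images [C, H, W] (complex)
    y_zero_filled: zero-filled coil images [C, H, W] (complex)
    mask:          sampled k-space locations [H, W] (bool)"""
    keep_meas = mask.to(torch.float32)
    k_gen = torch.fft.fft2(x_gen, norm="ortho")
    k_meas = torch.fft.fft2(y_zero_filled, norm="ortho")
    # Nullspace part comes from the generator, measured part from the data.
    k_out = k_gen * (1.0 - keep_meas) + k_meas * keep_meas
    return torch.fft.ifft2(k_out, norm="ortho")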
ยง.ยง Architecture
The backbone of g_θ is a UNet <cit.> that mimics the design in <cit.>, with 4 pooling layers and 128 output channels in the first convolution layer.
The first layer was modified to accept complex-valued coil images.
The inputs have 2C channels, where C is the number of coils, each with a real and imaginary component.
The outputs of the final feature layer of the UNet are processed by a feature-extraction network with L convolution layers.
Together, the feature-extraction network and the UNet make up our conditioning network g_θ.
The output of each convolution layer is fed to the conditional coupling blocks of the corresponding layer in h_θ.
For the flow model h_θ, we adopt the multi-scale RealNVP <cit.> architecture.
This construction utilizes L layers and B flow steps in each layer.
A flow step consists of an activation normalization <cit.>, a fixed 1 × 1 orthogonal convolution <cit.>, and a conditional coupling block <cit.>.
Each layer begins with a checkerboard downsampling (squeeze layer) <cit.> and a transition step made up of an activation normalization and a 1 × 1 convolution.
Layers end with a split operation that sends half of the channels directly to the output on the latent side.
For all experiments, we use L=3 and B=20.
The full architecture of h_θ is specified in Fig. <ref>.
Although the code that accompanies <cit.> gives a built-in mechanism to scale their flow architecture to accommodate an increased number of input and output channels, we find that this mechanism does not work well (see the ablation study in Sec. <ref>).
Thus, in addition to incorporating nullspace learning, we redesign several aspects of the flow architecture and training.
First, to prevent the number of flow parameters from growing unreasonably large, our flow uses fewer downsampling layers (3 vs 6) but more flow steps per downsampling layer (20 vs 5), and we utilize one-sided (instead of two-sided) affine coupling layers.
Second, to connect the conditioning network to the flow, Denker et al. Denker:JI:21 used a separate CNN for each flow layer and adjusted its depth to match the flow-layer dimension.
We use a single, larger CNN and feed its intermediate features to the flow layers with matched dimensions, further preventing an explosion in the number of parameters.
Third, our conditioning network uses a large, pretrained UNet, whereas Denker et al. Denker:JI:21 used a smaller untrained UNet.
With our modifications, we grow the conditional network more than the flow network, which allows the CNF to better handle the high dimensionality of complex-valued, multi-coil data.
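To illustrate the conditional coupling blocks used in each flow step, the following is a minimal affine coupling layer conditioned on features from g; the exact channel split, subnetwork, and stabilization are assumptions (and an even channel count is assumed), not the architecture verbatim.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Affine coupling: half of the channels are scaled/shifted by a CNN
    that sees the other half plus the conditioning features."""
    def __init__(self, channels, cond_channels, hidden=128):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(half + cond_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * half, 3, padding=1))

    def forward(self, x, cond):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                       # keep scales well-behaved
        y2 = x2 * torch.exp(log_s) + t
        log_det = log_s.flatten(1).sum(dim=1)           # contribution to log|det J|
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y, cond):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([y1, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=1), -log_s.flatten(1).sum(dim=1)
```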
ยง.ยง Data
We apply our network to two datasets: the fastMRI knee and fastMRI brain datasets <cit.>.
For the knee data, we use the non-fat-suppressed subset, giving 17286 training and 3592 validation images.
We compress the measurements to C=8 complex-valued virtual coils using <cit.> and crop the images to 320 × 320 pixels.
The sampling mask is generated using the golden ratio offset (GRO) <cit.> Cartesian sampling scheme with an acceleration rate R=4 and an autocalibration signal (ACS) region of 13 pixels.
We create the ZF coil-image vectors y by applying the mask and the inverse Fourier transform to the fully sampled k_c given by the fastMRI dataset to obtain y_c = F^H P^⊤ P k_c for all c, and then stack the coils to obtain y = [y_1, …, y_C].
We create the ground-truth coil-image vectors x using the same procedure but without the mask, i.e., x_c = F^H k_c and x = [x_1, …, x_C].
With the brain data, we use the T2-weighted images and take the first 8 slices of all volumes with at least 8 coils.
This provides 12224 training and 3352 validation images.
The data is compressed to C=8 virtual coils <cit.> and cropped to 384 × 384 pixels.
The GRO sampling scheme is again used with an acceleration rate R=4 and a 32-pixel-wide ACS region. For both datasets, the coil-sensitivity maps are estimated from the ACS region using ESPIRiT <cit.>.
All inputs to the network are normalized by the 95th percentile of the ZF magnitude images.
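For reference, the zero-filled inputs and ground-truth coil images can be simulated from fully sampled k-space roughly as follows; the function name and normalization details are illustrative assumptions.

```python
import torch

def make_zero_filled(kspace, mask):
    """kspace: fully sampled multi-coil k-space [C, H, W] (complex)
    mask:   Cartesian sampling mask [H, W] (bool)"""
    m = mask.to(torch.float32)
    x = torch.fft.ifft2(kspace, norm="ortho")          # x_c = F^H k_c
    y = torch.fft.ifft2(kspace * m, norm="ortho")      # y_c = F^H P^T P k_c
    # Normalize by the 95th percentile of the ZF magnitude image.
    scale = torch.quantile(y.abs().flatten(), 0.95)
    return x / scale, y / scale
```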
ยง.ยง Training
For both datasets, we first train the UNet in g_θ with an additional 1 × 1 convolution layer to get the desired 2C channels.
We train the UNet to minimize the mean-squared error (MSE) from the nullspace-projected targets {u^(i)}_i=1^N for 50 epochs with batch size 8 and learning rate 0.003.
Then, we remove the final 1 × 1 convolution and jointly train g_θ and h_θ for 100 epochs to minimize the negative log-likelihood (NLL) loss of the nullspace-projected targets.
For the brain data, we use batch size 8 and learning rate 0.0003.
For the knee data, we use batch size 16 with learning rate 0.0005.
All experiments use the Adam optimizer <cit.> with default parameters ฮฒ_1 = 0.9 and ฮฒ_2 = 0.999.
The full training takes about 4 days on 4 Nvidia V100 GPUs.
ยง.ยง Comparison Methods
We compare against other methods that are capable of generating posterior samples for accelerated MRI.
For the fastMRI brain data, we present results for the CGAN from <cit.> and the Langevin method from <cit.>.
For the fastMRI knee data, we present results for the โScoreโ method from <cit.> and the โsCNFโ method from <cit.>.
For the CGAN, we utilize a UNet-based generator with 4 pooling layers and 128 output channels in the initial layer and a 5-layer CNN network for the discriminator.
The generator takes yโ concatenated with a latent vector zโ as input.
The model is trained with the default loss and hyperparameters from <cit.> for 100 epochs with a learning rate of 0.001.
For the Langevin method, we use the authors' implementation but with the GRO sampling mask described in Sec. <ref>.
The Score method differs from the other methods in that it assumes the k-space measurements k_c are constructed from true coil images x with magnitudes affinely normalized to the interval [0,1] and phases normalized to [0,1] radians. Although this normalization cannot be enforced on prospectively undersampled MRI data, Score fails without it.
So, to evaluate Score, we normalize each k_c using knowledge of the ground-truth x_c, run Score, and un-normalize its output x̂_c for comparison with the other methods.
Since the Score paper <cit.> used RSS combining to compute î, we do the same.
For the Score method, we use T=200 iterations rather than the default value of T=2000.
This is because, when using posterior-sample averaging (see Sec. <ref>), the PSNR computed using 200 iterations is better than with 2000.
The sCNF method works only on single-coil magnitude data, and so we convert our multi-coil data to that domain in order to evaluate sCNF.
To do this, we apply RSS (<ref>) to ZF coil-images yโ and repeat the process for the true coil images xโ.
Using those magnitude images, we train sCNF for 300 epochs with learning rate 0.0005 and batch size 32.
ยง.ยง Evaluation
We report results for several different metrics, including peak-signal-to-noise ratio (PSNR), structural-similarity index (SSIM) <cit.>, Frรฉchet Inception Score (FID) <cit.>, and conditional FID (cFID) <cit.>.
PSNR and SSIM were computed on the average of P posterior samples {î_p}_p=1^P, i.e.,
î_(P) ≜ (1/P) ∑_p=1^P î_p
to approximate the posterior mean, while FID and cFID were evaluated on individual posterior samples î_p.
By default, we compute all metrics using magnitude reconstructions |î| rather than the complex-valued reconstructions î, in part because competitors like sCNF generate only magnitude reconstructions, but also because this is typical in the MRI literature (e.g., the fastMRI competition <cit.>).
So, for example, PSNR is computed as
PSNR ≜ 10 log_10( D max_d |[i]_d|^2 / ‖ |î_(P)| - |i| ‖_2^2 ) ,
where [·]_d extracts the dth pixel.
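A small sketch of how this PSNR is computed on the P-sample posterior average (magnitude images, per the definition above); the function name and shapes are illustrative.

```python
import torch

def psnr_on_average(samples, target):
    """samples: posterior samples [P, H, W] (complex); target: ground truth [H, W]."""
    avg = samples.mean(dim=0)                          # \hat{i}_{(P)}
    err = (avg.abs() - target.abs()).pow(2).sum()      # magnitude-domain squared error
    peak = target.abs().max() ** 2 * target.numel()    # D * max_d |[i]_d|^2
    return 10 * torch.log10(peak / err)
```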
For FID and cFID, we use the embeddings of VGG-16 <cit.> as <cit.> found that this helped the metrics better correlate with the rankings of radiologists.
For the brain data, we compute all metrics on 72 random test images in order to limit the Langevin image generation time to 4 days.
We generate complex-valued images using the coil-combining method in (<ref>) and use P=32 posterior samples to calculate cFID1, FID1, PSNR, and SSIM.
(For the reference statistics of FID, we use the entire training dataset.)
Because FID and cFID are biased by small sample sizes, we also compute FID2 and cFID2 with 2484 test samples and P=8 for our method and the CGAN.
With the knee data, we follow a similar evaluation procedure except that,
to comply with the evaluation steps of Score, we generate magnitude-only signals using the root-sum-of-square (RSS) combining from (<ref>).
Also, we computed metrics on 72 randomly selected slices in order to bound the image generation time of Score to 6 days with P=8.
We use P=8 for all metrics, but for FID2 and cFID2, we use 2188 test samples.
When computing inference time for all methods, we use a single Nvidia V100 with 32GB of memory and evaluate the time required to generate one posterior sample.
ยง RESULTS
Table <ref> reports the quantitative metrics for the knee dataset.
It shows that our method outperforms sCNF by a significant margin in all metrics except inference time.
By using information from multiple coils and a more advanced architecture, our method shows the true competitive potential of CNFs in realistic accelerated MR imaging.
Table <ref> also shows that our method surpasses Score in all metrics except FID1, even though Score benefited from impractical ground-truth normalization.
Compared to Score, our method generated posterior samples 8000× faster.
Furthermore, our method (and sCNF) will see a speedup when multiple samples are generated, because the conditioning network g_θ needs to be evaluated only once per P generated samples for a given y.
For example, with the knee data, we are able to generate P=32 samples in 1.41 seconds, corresponding to 44 milliseconds per sample, which is a 2.5× speedup over the value reported in Table <ref>.
Table <ref> reports the quantitative results for the brain dataset.
The table shows that we outperform the Langevin and CGAN methods in all benchmarks except inference time.
While our method is a bit slower than the CGAN, it is orders of magnitude faster than the Langevin approach.
We show the mean images and standard-deviation maps for the fastMRI knee and brain experiments in Fig. <ref>.
For the knee data, our method captures texture more accurately than the sCNF method and provides a sharper representation than the Score method.
All of the brain methods provide a visually accurate representation to the ground truth, but the Langevin method provides a more diffuse variance map, with energy spread throughout the image.
In Fig. <ref>, we plot multiple posterior samples, along with zoomed-in regions, to illustrate the changes across independently drawn samples for each method.
The standard-deviation maps are generated using P=8 posterior samples, three of which are shown.
From the zoomed-in regions, it can be seen that several samples are consistent with the ground truth while others are not (although they may be consistent with the measured data).
Regions of high posterior variation can be flagged from visual inspection of the standard-deviation map and further investigated through viewing multiple posterior samples for improved clinical diagnoses.
Our method presents observable, realistic variations of small anatomical features in the zoomed-in regions.
The variations are also registered in the standard-deviation map.
Both the posterior samples and the standard-deviation map could be used by clinicians to assess their findings.
Comparatively, our method demonstrates variation that is spread across the entire image, while in the Score method, the variation is mostly localized to small regions.
Since it is difficult to say which standard-deviation map is more useful or correct, the interpretation of these maps could be an interesting direction for future work.
The sCNF also demonstrates variation, but it is mostly driven by residual aliasing artifacts.
ยง.ยง PSNR Gain versus P
It is well known that the minimum-MSE (MMSE) estimate of i from y equals the conditional mean E{i|y}, i.e., the mean of the posterior distribution p_i|y(·|y).
Thus, one way to approximate the MMSE estimate is to generate many samples from the posterior distribution and average them, as in (<ref>).
Bendel et al. Bendel:22 showed that the MSE
ℰ_P ≜ E[ ‖ î_(P) - i ‖_2^2 | y ]
of the P-posterior-sample average î_(P) obeys
ℰ_1 / ℰ_P = 2P/(P+1).
So, for example, the SNR increases by a factor of two as P grows from 1 to โ.
The same thing should happen for PSNR, as long as the PSNR definition is consistent with (<ref>).
For positive signals (i.e., magnitude images), the PSNR definition from (<ref>) is consistent with (<ref>), but for complex signals we must use the "complex PSNR"
cPSNR ≜ 10 log_10( D max_d |[i]_d|^2 / ‖ î_(P) - i ‖_2^2 ) .
As RSS combining provides only a magnitude estimate, we compute the coil-combined estimate for our method and Score to evaluate cPSNR behavior for the knee dataset.
One may then wonder whether a given approximate posterior sampler has a PSNR gain versus P that matches the theory.
In Fig. <ref>, we answer this question by plotting the PSNR gain and the cPSNR gain versus P ∈ {1,2,4,8,16,32} for the various methods under test
(averaged over all 72 test samples).
There we see that our method's cPSNR curve matches the theoretical curve well for both brain and knee data.
As expected, our (magnitude) PSNR curve does not match the theoretical curve.
The cPSNR curves of the Score and CGAN methods fall short of the theoretical curve by a large margin, but interestingly, the Langevin method's cPSNR curve matches ours almost perfectly.
sCNF's PSNR gain curve matches the theoretical one almost perfectly, which provides further empirical evidence that CNF methods accurately sample from the posterior distribution.
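For reference, the theoretical gain curve against which the empirical curves are compared follows directly from the 2P/(P+1) relation and can be generated as below (purely illustrative).

```python
import numpy as np

P = np.array([1, 2, 4, 8, 16, 32])
# Theoretical (c)PSNR gain of the P-sample average over a single posterior sample, in dB.
gain_db = 10 * np.log10(2 * P / (P + 1))
print(dict(zip(P.tolist(), gain_db.round(2).tolist())))   # approaches 3.01 dB as P grows
```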
ยง.ยง Ablation Study
To evaluate the impact of our contributions to CNF architecture and training design, we perform an ablation study using the fastMRI knee dataset.
We start with the baseline model in <cit.>, modified to take in 16 channels instead of 1, and scale it up using the built-in mechanism in the authors' code.
We train this model for 300 epochs with batch size 32 and learning rate 0.0001 to minimize the NLL of the multi-coil targets {x^(i)}, since higher learning rates were numerically unstable.
Table <ref> shows what happens when we add each of our contributions.
First, we add data consistency (<ref>) to the evaluation of the baseline.
We then add the architectural changes described in Sec. <ref>,
and finally we add nullspace learning to arrive at our proposed method.
From Table <ref>, it can be seen that each of our design contributions
yielded a significant boost in performance, and that nullspace learning was a critical ingredient in our outperforming the Score method in Table <ref>.
For this ablation study, all models were trained following the procedure outlined in Sec. <ref> (except for the learning rate of the baseline).
ยง.ยง Maximum a Posteriori (MAP) Estimation
Because CNFs can evaluate the posterior density of a signal hypothesis (recall (<ref>)), they can be used for maximum a posteriori (MAP) estimation, unlike CGANs.
Due to our data-consistency step (<ref>), we find the MAP estimate of x using
x̂_MAP = û + A^+ y ,
û = arg max_{u ∈ 𝒩(A)} ln p̂_u|y(u|y) .
Note that the CNF output u is constrained to the nullspace of A.
From (<ref>), this nullspace is spanned by the columns of
W ≜ blkdiag{F^H P̄^⊤, …, F^H P̄^⊤},
which are orthonormal, and so û = W k̂ with
k̂ = arg max_k ln p̂_u|y(W k | y; θ)
  = arg max_k [ ln p_z(h̄_θ^-1(W k, y)) + ln| det( ∂ h̄_θ^-1(u, y)/∂ u |_{u=Wk} ) | ] .
For this maximization, we use the Adam optimizer with 5000 iterations and a learning rate of 1 × 10^-8.
Above, k̂ can be recognized as the unmeasured k-space samples.
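A rough sketch of this MAP search over the unmeasured k-space samples is shown below; the flow interface, the W operator, and the real-valued parameterization of k are assumptions for illustration only.

```python
import torch

def map_estimate(flow, W, y, k_init, steps=5000, lr=1e-8):
    """Maximize the log posterior density over the unmeasured k-space samples k.
    k_init: real-valued tensor holding the unmeasured samples (real/imag stacked).
    W:      callable mapping k into the nullspace component u (image domain)."""
    k = k_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([k], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        u = W(k)                                   # nullspace image from k-space samples
        z, log_det = flow.inverse(u, cond=y)       # hypothetical CNF interface
        log_pz = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum()
        (-(log_pz + log_det)).backward()           # gradient ascent on the log density
        opt.step()
    return W(k.detach())                           # \hat{u}; add A^+ y for the full image
```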
In Figure <ref>, we show an example of a MAP estimate along with the ground truth image, one sample from the posterior, a P=8 posterior-sample average, and their corresponding log-posterior-density values.
As expected, the MAP estimate has a higher log-posterior-density than the other estimates.
Visually, the MAP estimate is slightly sharper than the sample average but contains less texture details than the single posterior sample.
ยง CONCLUSION
In this work, we present the first conditional normalizing flow for posterior sample generation in multi-coil accelerated MRI.
To do this, we designed a novel conditional normalizing flow (CNF) that infers the signal component in the measurement operator's nullspace, whose outputs are later combined with information from the measured space.
In experiments with fastMRI brain and knee data, we demonstrate improvements over existing posterior samplers for MRI.
Compared to score/Langevin-based approaches, our inference time is four orders-of-magnitude faster.
We also illustrate how the posterior samples can be used to quantify uncertainty in MR imaging.
This provides radiologists with additional tools to enhance the robustness of clinical diagnoses.
We hope this work motivates additional exploration of posterior sampling for accelerated MRI.
ยง ACKNOWLEDGEMENTS
This work was supported in part by the National Institutes of Health under Grant R01-EB029957.
ยง ACCELERATED MRI SIMULATION PROCEDURE
We outline the procedure for simulating the accelerated MRI problem.
The fastMRI datasets provide the fully sampled multi-coil k-space, i.e., {k_c}_c=1^C with M=D.
To obtain the ground-truth coil images {x_c}_c=1^C, we take the inverse Fourier transform of the fully sampled k-space measurement, i.e., x_c = F^H k_c, wherein we assume that the noise ε_c in (<ref>) is negligible.
To obtain the zero-filled images y_c, we take the inverse Fourier transform after masking the fully sampled k-space measurement k_c, i.e., y_c = F^H P^⊤ P k_c.
This procedure is illustrated in Fig. <ref>.
In real-world accelerated MRI, the data acquisition process would collect the masked k-space P k_c directly.
ยง IMPLEMENTATION DETAILS
For our machine learning framework, we use PyTorch <cit.> and PyTorch lightning <cit.>.
To implement the components of the CNF, we use the Framework for Easily Invertible Architectures (FrEIA) <cit.>.
For the Score, sCNF, and Langevin methods, we utilize the authors' implementations at <cit.>, <cit.>, and <cit.>, respectively.
ESPIRiT coil-estimation and coil-combining are implemented using the SigPy package <cit.>.
ยง BRAIN POSTERIOR SAMPLES
| http://arxiv.org/abs/2306.10006v2 | 20230616175804 | Unsupervised Learning of Style-Aware Facial Animation from Real Acting Performances | ["Wolfgang Paier", "Anna Hilsmann", "Peter Eisert"] | cs.CV | ["cs.CV", "cs.GR", "cs.LG"] |
Unsupervised Learning of Style-Aware Facial Animation from Real Acting Performances

Wolfgang Paier, Anna Hilsmann, Peter Eisert
Fraunhofer Heinrich Hertz Institute, Berlin, Germany
Peter Eisert is also with Humboldt University, Berlin, Germany.
This paper presents a novel approach for text/speech-driven animation of a photo-realistic head model based on blend-shape geometry, dynamic textures, and neural rendering.
Training a VAE for geometry and texture yields a parametric model for accurate capturing and realistic synthesis of facial expressions from a latent feature vector.
Our animation method is based on a conditional CNN that transforms text or speech into a sequence of animation parameters.
In contrast to previous approaches, our animation model learns disentangling/synthesizing different acting-styles in an unsupervised manner, requiring only phonetic labels that describe the content of training sequences.
For realistic real-time rendering, we train a U-Net that refines rasterization-based renderings by computing improved pixel colors and a foreground matte. We compare our framework qualitatively/quantitatively against recent methods for head modeling as well as facial animation and evaluate the perceived rendering/animation quality in a user-study, which indicates large improvements compared to state-of-the-art approaches.
Keywords: facial animation, neural rendering, neural animation, self-supervised learning, dynamic textures
Submitted 9 June 2023.
ยง INTRODUCTION
Modeling and animation of virtual humans play an essential role in many applications such as virtual reality (VR), video games, or movie productions.
Especially, the synthesis of realistic talking head videos from speech data received great attention, as this will be an important feature in future applications like virtual assistants or service agents.
While recent approaches have made great progress in terms of realism and animation quality, automatically synthesizing believable facial animations from speech or text still remains a challenge.
We present a new approach for creating an animatable and photo-realistic 3D head model from multi-view video footage of a real actor, together with a neural animation model based on emotional performances. The approach allows for synthesizing new emotional speech sequences from text or speech and manipulates speaking styles as well as emotions with a low dimensional latent style vector.
Our head model is based on a hybrid representation that combines 3D mesh-based geometry, dynamic textures, and neural rendering.
A personalized statistical geometry model captures rigid motion as well as large-scale deformations.
Additionally, we extract dynamic textures from multi-view images in order to capture facial details, fine motion, and the appearance of complex regions (e.g.ย mouth cavity, eyes, or hair), which cannot be represented by the underlying mesh alone.
Based on mesh and dynamic texture data, we create a parametric face model by training a joint variational autoencoder (VAE) for face texture and geometry.
This parametric representation gives control over displayed facial expressions and allows for animation from text or speech.
While this is a convenient and efficient way to represent facial expressions in every detail, it is insufficient for photo-realistic rendering that requires highly detailed geometry even in complex regions (e.g.ย mouth, eyes, hair) and accurate material and reflectance information.
To account for this, we employ a pixel-to-pixel translation network that refines rasterization-based head images such that the appearance of hair, silhouette and other view-dependent effects are correctly reproduced.
We train the render network in a self-supervised manner, which minimizes the pre-processing overhead for multi-view data since foreground/background segmentation is learned automatically during training.
For synthesis and animation, we train our neural animation model based on automatically generated phonetic annotations, which represent the speech content in all captured video sequences.
Using a pre-defined mapping, we convert phonetic annotations to visemes IDs, which serve as input features.
We chose to rely on visemes, because of their speaker independence and versatility (e.g. both speech, as well as text, can be easily converted to viseme sequences).
As we aim at synthesizing facial performances similar to that of a real actress/actor, we design our network architecture in a way that allows for unsupervised learning of emotions, natural variations of facial expressions, and head movements during speech.
This has several advantages: first, no additional manual effort for annotating emotions or speaking style is required; second, we do not need to capture an exhaustive training database that contains all combinations of speech sequences, emotions, and emotion intensities; third, our actress can focus on an authentic performance rather than acting according to a target emotion, which increases the naturalness of the training data, and thus also the realism and authenticity of the synthesized animations.
After training, we can synthesize realistic talking-head videos from text or speech while being able to manipulate speaking style and emotions with a low-dimensional control vector.
ยง RELATED WORK
ยง.ยง Face Capture and Modeling
Creating controllable models of human heads and faces has been extensively studied in the field of computer vision and computer graphics.
One of the most popular approaches are morphable models <cit.>.
In order to deal with their limitations, researchers proposed different strategies to increase the quality of captured facial expressionsย <cit.>
such as the personalization of blend-shapes <cit.>, computation of corrective shapes during run-time <cit.>, or the extension of linear models to capture
multiple additional attributes such as identity, texture, and light <cit.>.
However, one major drawback of linear models is the lack of expressive power to correctly represent complex areas like the oral cavity, eyes, or hair.
As a result, purely model-based approaches employ either 'hand crafted' solutions (e.g.ย oral cavity) or simply ignore themย <cit.>.
Alternatively, hybrid or image-based methods solve this problem by representing facial performances in geometry and texture spaceย <cit.>.
With the advent of deep learning, more sophisticated approaches have been proposedย <cit.>.
For example, Tewari et al.ย <cit.> and Chai et al.ย <cit.> use large collections of unlabelled face images and video in order to train and refine existing linear face models in a self-supervised or unsupervised manner.
In order to improve the rather simple image formation model of previous approaches, Dib et al.ย <cit.> employ a differentiable ray tracer, which allows for more advanced materials, better illumination, and self-shadowing.
Siarohin et al.ย <cit.> proposed an approach that directly animates an image of a person using 2D affine transformations, which further reduces the effort of creating a controllable face model.
While their system works well in most cases, large changes in the head pose typically create unnatural distortions.
Ren et al.ย <cit.> try to avoid this limitation by employing a 3D morphable head model that allows for larger changes of the head pose.
Wang et al.ย <cit.> improve the quality of animated portraits by explicitly predicting features for appearance, expression, head pose, and canonical 3D key points to guide the image formation process, whereas
Hong et al.ย <cit.> propose a depth-aware generative adversarial network (GAN) that explicitly learns predicting depth values for each pixel in order to improve the quality of the warped face image.
While these approaches have the advantage that controllable face models for a specific person can be generated with low effort, they usually do not yield highly detailed and realistic 3D head representations.
In order to solve this task, deep generative face models have been introduced <cit.>.
For example, Lombardi et al.ย <cit.> capture multi-view facial video with 40 synchronized machine vision cameras and 200 LED point lights to promote uniform illumination.
Using a highly detailed blend shape model, they recover face geometry and view-dependent textures per frame, which are then used for training a variational autoencoder (VAE)ย <cit.>.
After training, the decoder can be used as a parametric non-linear face model that reconstructs dynamic geometry and expression- as well as view-dependent textures.
Li et al. <cit.> as well as Chandran et al. <cit.> propose deep neural face models, which are capable of representing facial expressions as well as identities.
Chandran et al. train their neural face model with registered and textured face meshes of 224 subjects showing 24 predefined expressions.
Two separate VAEs are used to extract identity and expression information.
A joined decoder uses a latent identity and expression vector to reconstruct the target geometry as well as a low-resolution albedo map.
Using a super-resolution network, they compute the final high-resolution albedo map.
Li et al.ย <cit.> build their neural face model using two generative adversarial networks (GAN) based on the styleGAN architecture.
Ma et al.ย <cit.> proposed a codec avatar that allows for efficient rendering of detailed head avatars even on mobile devices.
More recently, Grassal et al.ย <cit.> proposed an approach for generating neural head models from monocular RGB video that are able to dynamically refine geometry and texture, which allows for modeling complex geometry that cannot be captured by the underlying morphable model.
ยง.ยง Neural Rendering
In recent years, novel scene representationsย <cit.> and sophisticated methods for image synthesis based on neural networksย <cit.> have been proposed that allow for creating truly photo-realistic renderings of humans.
For example, Martin-Brualla et al. <cit.> use a deep convolutional network to refine the renderings of 3D point clouds in virtual reality applications.
Wang et al.ย <cit.> implemented a video-to-video translation network that synthesizes realistic videos of arbitrary objects or scenes based on semantic segmentation videos,
whereas Thies et al.ย <cit.> propose a deferred neural rendering approach to create photo-realistic renderings of 3D computer graphic models.
Kim et al.ย <cit.> and Prokudin et al.ย <cit.> combine neural rendering with parametric models of humans and human heads, which allows for photo-realistic rendering of animatable CG models.
Recently, novel implicit scene representations based on neural networks have been proposed <cit.>.
The core idea is to capture the 3D structure and appearance of an object/scene with a non-linear function that depends on the 3D position in space as well as viewing direction and predicts color as well as volume density.
While the key idea is simple, it enables the network to represent fine structures such as hair fibers, objects with complex reflective properties such as glass and metal but also dynamic effects such as smoke.
Although they have not been specifically designed for representing human faces, they provide useful capabilities that classic computer graphic models lack and that can help synthesizing more realistic faces/heads (e.g.ย rendering hair and complex geometries). A big drawback, however, is the high computational complexity <cit.>.
While Müller et al. <cit.> propose a new approach that supports very fast training as well as real-time rendering, they still lack semantic control, which is essential for animation.
ยง.ยง.ยง Neural Animation
With the advances in the processing of natural language and sequential data, several new methods for facial animation from speech have been proposedย <cit.>.
For example, Suwajanakorn et al.ย <cit.> train a recurrent neural network (LSTM) on a large collection of weekly presidential addresses (17 hours in total) to predict mouth landmarks according to a speech signal.
Based on these mouth landmarks, they generate a realistic mouth texture, which is integrated into a re-timed target video.
Zhou et al.ย <cit.> tackle the problem of animating a production face rig from a speech signal based on a small amount of training data using transfer learning and complementary objective function for which enough data is available.
Other researchers published similar methodsย <cit.>: Thies et al.ย <cit.> use a CNN-based approach together with neural rendering to synthesize photo-realistic videos from speech, Zhang et al.ย <cit.> use a GAN-based approach that could synthesize plausible eye blinks and Guo et al.ย <cit.> introduced a new NeRF-based approach for talking head synthesis from speech.
With the availability of larger audio-visual datasets, several methods for speaker-aware facial animationย <cit.> have been proposed: Cudeiro et al.ย <cit.> use one-hot-encoding to model subject-specific speaking styles, Zhou et al.ย <cit.> extract a speaker identity vector from the speech signal that helps synthesizing speaker dependent face landmark displacements, but this does not allow for representing emotion or manipulating the speaking style of the animated character.
Chen et al.ย <cit.> propose a method that synthesizes more natural head motion by explicitly representing it in 3D space.
Recent works started modeling the effects of emotion and speaking stylesย <cit.>:
for example, Karras et al.ย <cit.> propose a system that predicts vertex offsets according to an input speech signal.
In addition, they learn a specific style vector for each training sample in order to capture natural variations of facial expressions, however, their style features are difficult to interpret and their approach does not model important features such as eyes, oral cavity, texture, or head motion.
Most closely related to our method are Eskimez et al.ย <cit.>, Cheng et al.ย <cit.>, and Ji et al.ย <cit.>.
However, their models cannot be trained without an exhaustive (emotion) database or annotated emotions.
Eskimez et al.ย <cit.> as well as Cheng et al.ย <cit.> require a content signalย (i.e. speech/text) as well as pre-defined emotion labels.
While the former are able to synthesize a complete face video based on the emotion condition, Cheng et al.ย <cit.> use this information only for the synthesis of eye expressions.
Ji et al.ย <cit.> do not directly predict from emotion labels, but they require pairs of training sequences with the same content and different emotion in order to employ a cross-reconstruction training.
Although this reduces the demands on the training data, it is not enough to facilitate training with regular video sequences that do not contain emotion labels and are not captured multiple times with different emotions.
In contrast, our approach does neither require emotion labels nor multiple versions of each training sequence in order to disentangle animation style and content.
Moreover, our method is able to synthesize realistic expression and motion parameters conditioned on automatically learned style features for mouth, eyes, and head pose.
ยง.ยง.ยง Contributions
Hybrid 3D Head Model:
We present an approach for creating a personalized and animatable 3D head avatar from video data of a real person.
Similar to other methodsย <cit.>, we use a statistical geometry model to represent rigid motion and large-scale deformations of the face,
but instead of relying on the existing expression space of the underlying geometry model, we compute a refined expression space that allows representing and synthesizing all facial expressions of the captured person.
This is possible because the personalized dynamic texture model captures complex areas such as the eyes or the oral cavity (even if not modeled in geometry) as well as fine details and small deformations
that cannot be represented by the geometry model alone.
Another similar method was proposed by Ma et al.ย <cit.>. Starting from low-resolution mesh, they synthesize a dense mesh that refines inaccurate/missing partsย (e.g. tongue) of the coarse underlying tracking mesh.
However, this requires high-resolution multi-view stereo matching for the training videos, while our approach can essentially be used without detailed 3D reconstruction of the captured scenes.
Self-Supervised Neural Rendering:
In contrast to previous approachesย <cit.>, we train the rendering model in a self-supervised fashion.
This allows for synthesizing photo-realistic head images in real-time together with alpha masks that separate foreground from background, which minimizes pre-processing overhead and allows inserting the rendered head image into new backgrounds.
While the approach of Gafni et al.ย <cit.> offers similar capabilities, we can show that our system yields higher image quality and clearer alpha masks.
Style-Aware Neural Animation:
In contrast to existing approachesย <cit.>, our system can synthesize authentic speech performances after being trained on a challenging dataset with a low number of training samples, unannotated and varying emotions/speaking styles as well as strong head motion.
This is beneficial as actors can perform more authentically since strong facial expressions and head motions do not degrade the quality of synthesized talking-head videos, but actually increase realism.
Training our animation network in a semi-supervised way (i.e.ย with phonetic annotations via forced alignment) allows for automatically discovering speaking styles and emotions that occur in natural acting performances.
Moreover, we show a simple but effective way of visualizing the learned style/emotion space, which enables users to shape synthesized animations according to their needs.
Extensive Evaluation and User Study:
We evaluate our system qualitatively as well as quantitatively against several recent approaches for modeling, animation, and rendering.
Since traditional error metrics do not always capture the perceived quality of rendered head videos, we conduct an extensive user study in order to evaluate the perceived rendering quality, the audio-visual synchronization as well as the realism of synthesized facial expressions and head motion.
ยง SYSTEM OVERVIEW
In a first step, we create a personalized parametric head model that represents 3D geometry, head motion, facial expression, and appearance of the captured personย (Sec.ย <ref>).
The model is learned from multi-view data of an actor captured with a multi-view video rig and a microphone for the speech signal.
For animation control, we use a latent parameter vector of an autoencoder, which essentially stores the captured facial performance as a sequence of low-dimensional parameter vectors.
This latent vector can be driven from text, speech, or video input data, exploiting a corresponding mapping network.
Using forced alignment, we automatically annotate the captured voice signal with time-aligned phonemes in order to represent the speech content.
We train a neural animation model together with a speech style encoder from annotated speech signals and corresponding animation parameters, which allows for disentangling the style and content of the captured performanceย (Sec.ย <ref>).
After training, we can condition our neural animation model with a low-dimensional style/emotion vector that allows for synthesizing authentic and emotional talking head videos.
To increase realism, we train a neural rendering model that refines rasterization-based images of our head model, which allows for synthesizing photo-realistic videos of the animated head modelย (Sec.ย <ref>).
ยง HEAD MODEL CREATION AND RENDERING
The core of our system is constituted by a photo-realistic animatable 3D head model that is computed from multi-view video footage to ensure that it
perfectly resembles the appearance of the captured person.
Our model creation process is based on <cit.> and consists of three stages that are illustrated in figureย <ref>.
First, we employ a statistical head model to recover pose and approximate head geometry for each captured frame using automatically detected landmarksย <cit.> and optical flow.
Statistical models are sufficient to capture large-scale variations with only a small number of parameters, but they typically fail to reproduce fine details, small motions, and complex geometry deformations (e.g.ย in the oral cavity).
To account for this, in a second step, we extract dynamic head textures in addition to the approximate geometry assuming that the head model already possesses valid texture coordinates.
This allows for capturing missing details, small motions, and appearance changes in texture space.
While the original approach computes the texture sequence by solving a discrete optimization task over all frames,
we improve computational efficiency by solving the objective for a smaller set of keyframes and approximating the solution for other frames via the nearest keyframe.
This allows for processing longer sequences, reduces computation time, and introduces implicit temporal smoothness.
After geometry recovery and texture extraction, each captured performance is represented by a sequence of textured meshes with corresponding geometry.
More precisely, each frame is represented by rigid motion parameters ๐, blend-shape weights ๐, and an RGB image as texture.
Based on this representation, we train a deep generative face model that reconstructs blend-shape weights ๐ as well as face textures from a shared low-dimensional expression vector, thereby enabling natural, plausible, and realistic facial animation.
The joined parametric representation ensures that texture and geometry always fit together such that facial expressions are reconstructed correctly.
We create two local animation models that represent mouth and eyes by manually selecting two rectangular regions of interest in texture space.
Since pre-computed texture coordinates are provided by the underlying head template, this step has to be performed only once as the selected mouth and eye regions can be re-used for the creation of further head avatars.
For the model creation, a variational autoencoderย <cit.> architecture is employed.
The encoder receives shape weights ๐ as well as a cropped texture region from the performance capture stage as input and transforms them into a low dimensional latent representation ๐ณ of the facial expression.
The decoder receives the noisy code vector ๐ณ and reconstructs shape weights ๐ as well as the texture for mouth/eyes.
In order to improve texture reconstruction quality, we employ an adversarial training strategy using a patch-based discriminator networkย <cit.> that is trained simultaneously with the autoencoder.
During training, we minimize the L_1 texture reconstruction loss, the adversarial texture loss, and the MSE between predicted and target shape weights.
For better control over facial expressions, we create two expression models, one for the mouth and one for the eyes.
After training, all captured multi-view sequences are represented with a single parameter vector consisting of 3D head pose ๐ as well as expression vectors ๐ณ_mouth and ๐ณ_eyes for the corresponding face regions.
The separate representation of mouth and eyes increases the flexibility as both regions can be controlled independently of each other.
Moreover, it enables the use of a specialized animation prior for the eyes, which further increases the realism of synthesized talking-head videos (section <ref>).
ยง.ยง Neural Rendering
While the traditional mesh-plus-texture representation correctly reconstructs the facial performance as well as most of the appearance, some important details cannot be reproduced, see figure <ref>.
For example, due to the rough geometry model, the silhouette appears smooth and unnaturally clean.
Looking at the model from the side while the mouth is open reveals that the underlying statistical shape model does not properly capture the oral cavity.
Moreover, the hair is not reconstructed faithfully as view-dependent appearance effects, and fine hair strands near the silhouette or eyelashes are missing.
Some of these shortcomings could be resolved by increasing the complexity of the geometry model or introducing specialized part models for the oral cavity, eyes <cit.>, or hair <cit.>.
However, this would also require a more complex capture setup and could make manual intervention necessary while still not achieving the desired level of realism.
Instead, we employ a self-supervised approach based on pixel-to-pixel translation, where our render network receives the mesh-based rendering as input and predicts a refined head image as well as weight masks that help separating foreground from background.
This simplifies the training process as we do not need to pre-compute foreground masks because providing clean plate images (as background models) for each camera is sufficient.
The image formation model (<ref>) of our neural rendering approach can be expressed as a convex combination of the mesh-based rendering ℐ_orig, the corrective image ℐ_corr, and the static background model ℐ_backg, where each image contributes according to spatially varying weight maps α, β, and γ:
ℐ_out = α ℐ_orig + β ℐ_corr + γ ℐ_backg
α + β + γ = 1
During experiments, we found the explicit weighting of the corrective image advantageous, as it allows the network to disable refinement in certain areas (e.g., background or areas that are already well represented by the textured mesh).
Especially when the static background model is not sufficiently accurate, a render model without explicit weighting of ℐ_corr would try refining the background as well, which would result in the wrong foreground (α+β) and background (γ) weights.
Our re-rendering network is based on a U-Net architecture <cit.>, and the input tensor contains the RGB colors of the mesh-based rendering.
The output tensor consists of six channels: RGB color plus three channels that contain the weight maps α, β, and γ.
We use a U-Net with five layers, 64 filters (instead of 32) at the first layer, and a scaling factor of 4 (instead of 2) at each layer, as this yields higher rendering quality while still achieving high rendering speed (12 milliseconds per frame).
Training our network requires the captured frame, the initial rendering ℐ_orig of the textured head model, an empty background frame ℐ_backg, as well as a binary foreground/background mask of the initial rendering.
The corrective image ℐ_corr is predicted by the first three channels of the output tensor using a tanh activation.
The weight maps α, β, and γ are predicted by the last three channels of the output tensor using a soft-max activation.
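A minimal sketch of this compositing step, in which the last three output channels are turned into the convex weights via a softmax; tensor names and shapes are illustrative assumptions.

```python
import torch

def composite(net_out, i_orig, i_backg):
    """net_out: [B, 6, H, W] U-Net output; i_orig / i_backg: [B, 3, H, W] images."""
    i_corr = torch.tanh(net_out[:, :3])                 # corrective image
    weights = torch.softmax(net_out[:, 3:], dim=1)      # alpha, beta, gamma sum to 1
    alpha, beta, gamma = weights[:, 0:1], weights[:, 1:2], weights[:, 2:3]
    i_out = alpha * i_orig + beta * i_corr + gamma * i_backg
    foreground = alpha + beta                            # soft foreground matte
    return i_out, foreground
```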
The image reconstruction is guided by the VGG loss <cit.> between the reconstructed frame ℐ_out and the captured frame.
Moreover, we minimize an adversarial loss ℰ_adv by introducing a patch-based discriminator <cit.> that aims at distinguishing between real and synthesized images on a patch basis.
The full objective function (<ref>) is given by
ℰ = w_vgg ℰ_vgg + w_adv ℰ_adv + w_pri ℰ_pri + w_bin ℰ_bin + w_reg ℰ_reg,
which is minimized using the Adam optimizer with default parameters, an initial learning rate of 0.0001, exponential learning rate scheduling (0.95), and a batch size of four.
During training, we randomly scale, rotate, and translate source and target images in order to support rendering at different distances (scales) or partially visible heads.
In order to minimize flickering in ℐ_out, we smooth the mesh-based rendering at the border between the head and background using a Gaussian filter.
The render network for each subject is trained for thirty epochs, which corresponds to approximately 93k iterations.
The weights for each loss in (<ref>) are: w_vgg=1, w_adv=0.1, w_pri=0.1, w_bin=0.1, w_reg=0.001.
The first 5000 iterations are considered as a warm-up phase where w_bin and w_adv are set to zero.
Starting from iteration 5000, the weights of w_bin and w_adv are linearly ramped up such that they reach their target value after 1000 steps at iteration 6000.
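The sketch below shows how such a weighted objective with a warm-up ramp for the adversarial and binarization terms might be assembled; the individual loss tensors are placeholders computed elsewhere.

```python
def total_loss(losses, step, warmup_start=5000, ramp=1000):
    """losses: dict with keys 'vgg', 'adv', 'pri', 'bin', 'reg' (already computed tensors)."""
    weights = {'vgg': 1.0, 'adv': 0.1, 'pri': 0.1, 'bin': 0.1, 'reg': 0.001}
    # Linearly ramp the adversarial and binarization weights after the warm-up phase.
    ramp_factor = min(max(step - warmup_start, 0) / ramp, 1.0)
    weights['adv'] *= ramp_factor
    weights['bin'] *= ramp_factor
    return sum(weights[k] * losses[k] for k in losses)
```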
ยง.ยง.ยง Mask Regularization
In order to better constrain the refinement process, we implement two additional loss functions to create more accurate and clean foreground masks ℱ.
This is especially important when the background model is not accurate (e.g., due to dynamic shadows), since the render network might learn to refine the background as well (figure <ref>, leftmost image).
ℱ = α + β
First, we generate a dilated as well as an eroded version of the initial binary render mask ℳ.
Outside the dilated mask ℳ_d, the foreground likelihood should be zero (right side of (<ref>)), while inside the eroded mask ℳ_e, the foreground likelihood should be one (left side of (<ref>)).
This is motivated by the heuristic that pixels far away from the initial rendering should belong to the background, while pixels in the center of the initial rendering should belong to the foreground:
ℰ_pri = |(ℱ ⊙ ℳ_e) - ℳ_e|^2 + |ℱ ⊙ (1 - ℳ_d)|^2,
with ⊙ being the Hadamard product.
Second, we implement an additional regularization term that encourages the network to generate a mostly binary foreground mask.
We introduce an additional L_1 loss that pulls the entries of ℱ closer to zero if ℱ_i,j < 0.5 or to one if ℱ_i,j > 0.5, respectively.
During training, we start employing the mask regularization after 10 epochs by gradually increasing the regularization weight.
ℰ_bin = |ℱ_i,j - 1.0| if ℱ_i,j > 0.5, and |ℱ_i,j| if ℱ_i,j ≤ 0.5
Moreover, we control refinement by regularizing the corrective image ℐ_corr with the L_1 norm:
ℰ_reg = |ℐ_corr|
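The two mask regularizers can be written roughly as follows; the eroded and dilated masks are assumed to be precomputed from the binary render mask, and the reductions (mean vs. sum) are an illustrative choice.

```python
import torch

def mask_prior_loss(fg, mask_eroded, mask_dilated):
    # Foreground likelihood should be 1 inside the eroded mask and 0 outside the dilated mask.
    inside = ((fg * mask_eroded) - mask_eroded).pow(2).mean()
    outside = (fg * (1.0 - mask_dilated)).pow(2).mean()
    return inside + outside

def mask_binarization_loss(fg):
    # L1 pull towards 0 or 1, depending on which side of 0.5 each entry lies.
    target = (fg > 0.5).float()
    return (fg - target).abs().mean()
```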
ยง NEURAL FACE ANIMATION
This section presents our neural approach for realistic animation of our face model from text or speech.
For creating the animation model, we captured four multi-view sequences of our actress and asked her to present emotional monologues.
Additionally, we recorded the speech track with a small wireless microphone.
However, instead of training the animation model directly with the audio signal, we employ an existing forced alignment approach <cit.> that converts the speech track into timed phoneme sequences,
which are further transformed into viseme sequences that are in sync with the video data.
This is advantageous as the trained animation model can be used by more than just the captured person since visemes are speaker-agnostic.
Existing speech recognition software can be used to convert speech into viseme sequences, which allows for animation from audio, while tasks (e.g.ย facial animation of sign language avatars) that benefit from a textual representation of speech can be easily implemented as well.
The input for the animation network consists of a sequence of viseme IDs where each viseme roughly approximates the visible mouth shape at a certain time frame.
Assuming 25 fps, the input sequence has 25 · t entries, where t corresponds to the length of the animated sequence measured in seconds.
The output of our network is a 3D tensor of size b × (25 · t) × d, where b corresponds to the batch size and d corresponds to the dimension of the animation parameters.
This means that the duration of each expression/viseme is explicitly encoded in the input sequence by the number of repetitions.
Figure <ref> shows a high-level architecture of our animation network.
It consists of three main structures, the core animation network that converts viseme IDs into animation parameters, a variational style encoder network that extracts a compact style vector from the target parameters signal, and two variational sequence autoencoder that act as animation priors for eye and pose parameters, which are not well correlated with the source viseme sequence.
ยง.ยง Variational Animation Prior
The main purpose of the variational sequence autoencoders (VSAE) for eye and pose parameters is to provide a statistical prior in order to minimize unnatural behavior.
Both VSAEs are based on a 1D convolutional architecture, which has two advantages: first, we benefit from better parallelization as well as faster training and second, it allows reducing the temporal resolution of the latent parameters via average pooling.
Reducing the temporal resolution in latent space forces the VSAE to learn more meaningful features, which yields smoother and more natural expression/motion sequences since each feature vector represents a short sequence of animation parameters.
Figure <ref> shows the architecture of our VSAEs, where N corresponds to the depth of the sequence autoencoder (i.e.ย number of encoding/decoding blocks).
Via experiments, we found that setting N=4 for the pose sequence prior and N=2 for the eye sequence prior yields the best performance.
All convolutions have 128 filters, a kernel size of 3, and a padding of 1. The latent parameter sequences for eyes and pose have a feature dimension of 128 as well.
The objective function (<ref>) of the animation prior consists of the reconstruction loss and the Kullback-Leibler divergence between the distribution of the latent sequence parameters 𝒩(μ_z, σ_z) and the standard normal distribution:
ℰ_prior = ‖X - X̂‖^2 + λ_prior KL(𝒩(μ_z, σ_z) ‖ 𝒩(0,1))
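In code, this prior objective is a standard VAE-style combination of reconstruction error and a KL term on the latent sequence; tensor shapes, the diagonal-Gaussian parameterization, and the reduction are assumptions.

```python
import torch

def vsae_prior_loss(x, x_hat, mu_z, logvar_z, lam=1e-4):
    recon = (x - x_hat).pow(2).mean()
    # KL( N(mu, sigma) || N(0, 1) ) for a diagonal Gaussian, averaged over entries.
    kl = -0.5 * torch.mean(1 + logvar_z - mu_z.pow(2) - logvar_z.exp())
    return recon + lam * kl
```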
ยง.ยง Style Aware Animation Network
The main purpose of the variational style encoder (figure <ref>, left) is to provide additional information such that the animation network (figure <ref>, middle) can disentangle content from speaking style and emotions.
This is important since we train our animation network with several speech sequences that include an arbitrary mix of different emotions and speaking styles.
Moreover, similar to a VAE, the style encoder learns a well-defined latent style space, which allows for simple editing and sampling of the speaking style vectors after training.
Again, the animation network and style encoder are based on a 1D convolutional architecture.
All convolutions have 128 filters, a kernel size of 9, padding of 4, and the latent style vector has 128 dimensions.
The objective function of the animation model (<ref>) consists of three data terms that minimize the prediction error of mouth expressions X̂_mouth, latent eye expressions Ẑ_eyes, and latent pose Ẑ_pose.
The latent style vector is regularized via KL divergence such that the distribution of style vectors matches the standard normal distribution 𝒩(0,1).
The latent pose Z_pose and latent eye expressions Z_eyes are computed by encoding the original pose and eye parameter sequences with the corresponding variational sequence autoencoders:
ℰ_anim = ‖X_mouth - X̂_mouth‖^2 + ‖Z_eyes - Ẑ_eyes‖^2 + ‖Z_pose - Ẑ_pose‖^2 + λ_style KL(𝒩(μ_style, σ_style) ‖ 𝒩(0,1))
All variables Z, X, Ẑ, and X̂ represent sequences rather than single animation parameter/feature vectors.
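A corresponding sketch of this style-aware objective; the three prediction heads and the style-encoder statistics are placeholder tensors produced elsewhere in the training loop.

```python
import torch

def animation_loss(x_mouth, x_mouth_hat, z_eyes, z_eyes_hat,
                   z_pose, z_pose_hat, mu_style, logvar_style, lam_style=1e-4):
    data_terms = ((x_mouth - x_mouth_hat).pow(2).mean()
                  + (z_eyes - z_eyes_hat).pow(2).mean()
                  + (z_pose - z_pose_hat).pow(2).mean())
    # KL regularization pulling the latent style vector towards N(0, 1).
    kl_style = -0.5 * torch.mean(1 + logvar_style - mu_style.pow(2) - logvar_style.exp())
    return data_terms + lam_style * kl_style
```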
Additionally, we train a small fully connected VAE (two hidden layers with 1024 neurons) that maps the original style vectors into a 2D style space.
This allows for visualizing the latent style space on a 2D grid and simplifies the task of finding suitable style vectors.
ยง.ยง Implementation and Training
The neural face model uses 256-dimensional parameters for the mouth region, 256-dimensional parameters for the eye region, and 6-dimensional parameters for the head pose (orientation and position),
which results in a 518-dimensional animation parameter vector per frame.
That means that for each frame the mouth expression, eye expression, and head pose, as well as the corresponding expression label (viseme or idle), are known, while the emotion/style vector is learned automatically during training.
The full animation dataset consists of four short takes with a frame rate of 25 fps and a total length of 6500 frames (4.3 minutes). The first 500 frames of the third take are excluded from training and used for validation.
The animation network is trained with random sequence lengths between 2.5 seconds and 10 seconds.
Both animation priors are trained as independent auto-encoders, while the style-aware animation network and the style encoder are trained end-to-end.
All networks are trained simultaneously with a batch size of 32 and initial learning rates of 0.001 and 0.0005, respectively.
For all networks, we use exponential learning rate scheduling with γ=0.96.
The animation networks are trained for approximately 25000 iterations with both regularization weights λ_prior and λ_style set to 1.0e-4.
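As a rough illustration of this schedule, the optimizer and learning-rate setup could be configured as below; the placeholder modules standing in for the actual networks and the granularity at which the exponential decay is stepped are our assumptions.

import torch
import torch.nn as nn

# Placeholder 1D-convolutional stand-ins for the actual prior/animation networks.
prior_net = nn.Conv1d(256, 128, kernel_size=3, padding=1)
anim_net = nn.Conv1d(128, 128, kernel_size=9, padding=4)

opt_prior = torch.optim.Adam(prior_net.parameters(), lr=1e-3)
opt_anim = torch.optim.Adam(anim_net.parameters(), lr=5e-4)
sched_prior = torch.optim.lr_scheduler.ExponentialLR(opt_prior, gamma=0.96)
sched_anim = torch.optim.lr_scheduler.ExponentialLR(opt_anim, gamma=0.96)

for it in range(25000):
    # ... compute losses on a batch of 32 sequences, backpropagate, step both optimizers ...
    if (it + 1) % 1000 == 0:   # how often the decay is stepped is an assumption
        sched_prior.step()
        sched_anim.step()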
ยง EXPERIMENTAL RESULTS AND DISCUSSION
This section presents our quantitative evaluation, our user study, and visual results generated with the proposed method as still images, while an accompanying video demonstrating dynamic effects can be found in the supplementary material.
For our experiments, we captured actors with synchronized and calibrated multi-view camera rigs consisting of three, five, or eight cameras that were equally placed around them at eye level.
The angle between neighboring cameras is always 45°, which yields a coverage of 360° for the 8-camera rig, 225° for five cameras, and 90° for the 3-camera setup.
We captured different facial expressions as well as speech and asked them to present single words and short sentences in English.
The effective capture resolution for the head is approximately 520x360 pixels.
All pre-processing steps, network training, and experiments have been carried out on a regular desktop computer with 64 GB RAM, a 2.6 GHz CPU (14 cores with hyper-threading), and one GeForce RTX 3090 graphics card.
For the creation of the personalized head models, we obtain a single 3D reconstruction of the person's head and adapt the geometry of an existing human body template (SMPL) <cit.>.
Personalized facial expression models are generated by transferring blend-shapes from an existing multi-person expression model (Facewarehouse)ย <cit.>.
We register them by optimizing similarity as well as identity parameters of the expression model.
With both models in correspondence, we obtain personalized expression shapes by sampling the 3D coordinates for all expression shapes for each face vertex of our head template.
For simplicity, blend-shapes are transferred as differential shapes (i.e., the difference between the neutral expression and the corresponding expression shape).
All transferred blend-shapes are then compressed into a set of six expression shapes using PCA.
After generating a personalized head model, we use the facial performance capture pipeline described in section <ref> to obtain dynamic head meshes as well as dynamic head textures, which are used to train our hybrid generative head models as well as rendering models as described in section <ref> and section <ref>.
ยง.ยง Evaluation of Modeling and Rendering
To assess the quality of the proposed hybrid modeling and rendering approach, we compare rendering results from our proposed approach and four existing methods <cit.> based on a challenging publicly available dataset used in the paper of Gafni et al.ย <cit.>.
For fairness, we generated the results of all approaches based on monocular data.
While this limits the ability of our approach to modify the captured head-pose, it still allows for performing self-reenactment based on a monocular validation sequence.
Since we want to focus on the quality of the face and the head, we mask out the background and upper body.
Figure <ref> shows typical problems of the reference approaches: image-based methods like FOMMย <cit.> work well as long as the head pose remains the same, however as soon as the actor looks in a different direction obvious rendering artifacts appear.
The second approach (FACIALย <cit.>) is based on a pre-trained morphable head model. Since it is not able to capture facial expressions accurately enough, mouth and eye expressions can heavily deviate from the ground truth.
While the recent approach of Grassal et al.ย <cit.> (NHA) captures the personal head geometry and appearance very well, glasses seem to pose a problem for this approach as the last row shows.
The NeRF-based approach of Gafni et al.ย <cit.> (4DFA) works well on the validation sequence, however, the careful comparison shows also inaccurate eye expressions and a lack of details on skin and hair as well as a lower quality based on the evaluated metrics in table <ref>.
Figure <ref> shows a more detailed comparison between 4DFA and our approach.
As both methods learn to separate fore- and background during training, we also compare the generated foreground alpha masks.
The most obvious difference is that our masks are much clearer, but on a closer look, one can also see that our approach is able to reproduce more details on the skin and hair.
Table <ref> shows the evaluation of all rendering approaches with different error metrics such as L_1, PSNR, SSIMย <cit.> and LPIPS (Learned Perceptual Image Patch Similarity)ย <cit.>.
Especially the two perceptual error metrics (SSIM and LPIPS) show interesting results:
4DFA achieves only good structural similarity (0.8729), while FACIAL has a lower error according to LPIPS (0.0758).
In contrast, our approach performs best according to SSIM (0.8872) and LPIPS (0.0510).
A typical problem when evaluating dynamic visual content is that perceived image/video quality is difficult to measure with traditional error metrics.
Therefore, we conducted a user study where 70 participants (non-expert users, between 18 and 60 years old) were asked to rate the quality of the reconstructed head videos as well as the ground truth, see table <ref>.
For the user study, we compared our method against the approaches of Hong et al.ย <cit.> (DAGAN), Zhang et al.ย <cit.> (FACIAL), Gafni et al.ย <cit.> (4DFA), and the ground truth.
With each method, we generated four talking-head videos without sound, each with an approximate duration of four seconds.
Every participant was presented with all videos in random order and asked to rate the visual quality on a Likert scale (from 1 up to 5).
The ground truth achieved the highest mean opinion score (4.6), followed by our approach with a score of 4.0 and 4DFA with a score of 3.6.
Videos produced with DAGAN and FACIAL received lower ratings (2.5 and 2.4) on average,
which was probably caused by temporal artifacts/flickering and unnatural deformations of the face due to strong head movement (DAGAN).
In summary, the quantitative evaluation as well as the user study show that our modeling and rendering approach can generate high-quality renderings and outperforms all reference approaches.
ยง.ยง Evaluation of our Neural Animation Approach
The evaluation of the proposed animation approach is based on our audio-visual multi-view dataset.
Figure <ref> shows three short animated speech sequences that were produced with our method. Each sequence was rendered based on three manually chosen style vectors that represent 'neutral' speech style (S1), angry/frustrated speech (S2), and happy/delighted speech (S3).
This demonstrates that our system can disentangle content and speaking style in an unsupervised manner, which allows for synthesizing of emotional speech sequences and gives manual control over style/emotion.
Moreover, we show that the learned style space allows consistently animating the head model, as the same style vectors yield similar emotions/speaking styles also with different input sequences.
To evaluate the quality of the proposed animation approach, we mainly rely on the user study as this is the most meaningful way of measuring the perceived quality of synthesized talking head videos.
We compare our animation results against the ground-truth and synthesized talking head videos generated with four recent speech-driven animation methods (Wav-To-Lipย <cit.>, Make-It-Talkย <cit.>, Facialย <cit.>, and AD-Nerfย <cit.>).
In an online questionnaireย (implemented with the SoSciSurvey online survey platformย <cit.>), we asked 70 English-speaking participants (non-expert users, between 18 and 60 years old) to rate three important characteristics of each synthesized video: the rendering quality, the synchronicity between mouth and voice, and the realism of facial expressions.
Based on our captured emotional-speech data, we generated three short speech sequences (approx. 4 seconds long) with each reference method.
All video stimuli (including the corresponding captured speech videos of the actress) were presented in randomized order and at the same resolution (600x600 pixels).
In order to improve the quality of the collected data, each participant was first presented with a training task (including instructions) based on the synthetic talking-head video of a different actor.
Video stimuli could be played only once. To reduce distraction, answer options appeared after the video finished.
Table <ref> shows that our approach achieves the highest ratings (except ground truth) in all three categories.
In addition, we conduct a quantitative evaluation of the audio-visual synchronicity (table <ref>) based on the mean-squared landmark distance (LMD) proposed by Chen et al.ย <cit.> as well as the predicted frame-offset and the audio-visual confidence measure proposed by Chung et al.ย <cit.>.
As our reference videos contain considerable head motion, we compute the LMD on registered sets of mouth landmarks (i.e. after compensating for 2D-translation, 2D-rotation and uniform-scale).
Assuming perfect audio-visual synchronicity in the captured reference videos, a high LMD value indicates a high deviation between synthesized and captured mouth movements, while an LMD close to zero indicates almost identical mouth movements.
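As an illustration of this registration step, the following sketch (ours, not the reference implementation) aligns predicted and ground-truth mouth landmarks of one frame with a similarity transform obtained via orthogonal Procrustes before computing the mean-squared distance.

import numpy as np

def registered_lmd(pred, ref):
    # pred, ref: (num_landmarks, 2) mouth landmarks of one frame
    p = pred - pred.mean(axis=0)          # remove 2D translation
    r = ref - ref.mean(axis=0)
    u, s, vt = np.linalg.svd(r.T @ p)     # optimal rotation (orthogonal Procrustes)
    rot = u @ vt
    scale = s.sum() / (p ** 2).sum()      # optimal uniform scale
    aligned = scale * p @ rot.T
    return np.mean(np.sum((aligned - r) ** 2, axis=1))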
The method of Chung et al.ย <cit.> predicts a frame offset in order to align a video with its audio track.
Additionally, a confidence value (unnormalized) measures the correspondence between video and audio track.
A high confidence (e.g. 6) indicates well matching speech and mouth movements, while a confidence close to zero indicates that mouth movements are uncorrelated with the audio.
Again our approach outperforms the reference methods achieving the lowest LMD error (6.1) and the lowest predicted absolute frame-offset of 1 with a confidence of 4.2.
Only WAV-TO-LIPย <cit.> achieved higher confidence (6.6) but also a higher offset of 3 frames.
It has to be emphasized that the training videos are challenging as our actress used a lot of body language to convey emotions, which resulted in large movements and very strong facial expressions.
In the supplemental video, we show samples of the training data, rendering and animation results of reference approaches, ablation studies to visualize the learned style space and the impact of the learned animation prior models as well as the variational style encoder.
ยง.ยง Limitations and Future Work
A limitation of the proposed approach is the missing flexibility in terms of lighting conditions during rendering as the studio light is baked into the textures.
For future versions of our system we plan to further extend our approach such that the light situation can be adapted at render time according to the target scene.
Regarding animation, our system is limited to emotions and speaking styles that are presented by the captured actors.
Training a multi-person animation model that disentangles animation style from actor identity would further increase the flexibility of our approach as this could allow extrapolating/transferring animation styles between actors.
ยง CONCLUSIONS
We present a new method for the creation and animation of photo-realistic 3D human head models.
Our hybrid head model combines the advantages of lightweight and robust model-based representations and the realism of neural rendering techniques, which allows for animation, facial expression editing as well as realistic rendering in real time.
The self-supervised architecture allows for convenient training (without pre-computed segmentation masks) and supports multi-view data.
Moreover, automatically learned foreground/background segmentation masks allow for simple integration of rendered head images with new backgrounds as well as with 3D/VR applications.
Based on the neural head representation, we introduce a novel style-aware facial animation approach that uses a sequence-to-sequence translation architecture.
In contrast to previous methods, our animation network can be successfully trained with real acting performances that contain strong (unannotated) emotions as well as large head movements.
To synthesize more realistic videos, we simultaneously train variational sequence autoencoders for eye expression and head pose, which are typically not well correlated with raw speech content (text, phonemes, visemes).
The VSAEs act as a statistical animation prior, which results in smoother and more natural expression/motion sequences.
An additional style encoder helps to capture natural variations of speech as well as emotions via low-dimensional latent style vectors that can be used after training to control the animated speech style.
We show that our approach successfully disentangles content and style via a consistent latent style space and evaluate our animation method as well as the proposed hybrid head model quantitatively and qualitatively on challenging datasets.
Moreover, we conducted a study showing that the proposed rendering as well as the animation approach receive the highest ratings in terms of perceived quality from 70 participants.
ยง ACKNOWLEDGEMENTS
This work has partly been funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 952147 (Invictus),
by the German Federal Ministry of Education and Research (Voluprof, grant no.ย 16SV8705),
by the German Federal Ministry for Economic Affairs and Climate Action (ToHYve, grant no.ย 01MT22002A)
as well as the Fraunhofer Society in the Max Planck-Fraunhofer collaboration project NeuroHum.
|
http://arxiv.org/abs/2306.02579v1
|
20230605041004
|
Cross-Lingual Transfer Learning for Phrase Break Prediction with Multilingual Language Model
|
[
"Hoyeon Lee",
"Hyun-Wook Yoon",
"Jong-Hwan Kim",
"Jae-Min Kim"
] |
cs.CL
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] |
Cross-Lingual Transfer Learning for Phrase Break Prediction with Multilingual Language Model
Hoyeon Lee, Hyun-Wook Yoon, Jong-Hwan Kim, Jae-Min Kim
July 31, 2023
=================
Phrase break prediction is a crucial task for improving the prosody naturalness of a text-to-speech (TTS) system.
However, most proposed phrase break prediction models are monolingual, trained exclusively on a large amount of labeled data.
In this paper, we address this issue for low-resource languages with limited labeled data using cross-lingual transfer.
We investigate the effectiveness of zero-shot and few-shot cross-lingual transfer for phrase break prediction using a pre-trained multilingual language model.
We use manually collected datasets in four Indo-European languages: one high-resource language and three with limited resources.
Our findings demonstrate that cross-lingual transfer learning can be a particularly effective approach, especially in the few-shot setting, for improving performance in low-resource languages.
This suggests that cross-lingual transfer can be inexpensive and effective for developing TTS front-end in resource-poor languages.
Index Terms: cross-lingual transfer, multilingual, phrase break prediction, text-to-speech front-end, language model
ยง INTRODUCTION
The text processing front-end, such as text normalization, grapheme-to-phoneme (G2P), and phrase break prediction, has become a core part of modern text-to-speech (TTS) systems.
Many studies have shown that these text processing modules successfully improve the naturalness of synthetic speech through various approaches, including traditional statistical methods <cit.> and deep learning-based methods <cit.>.
Recently, with the great success of BERTย <cit.> in various natural language processing (NLP) tasks, most of the proposed works have adopted mainstream pre-trained language models (PLMs)ย <cit.>.
These works have reported stable and robust performance on several downstream tasks of the TTS front-end, such as text normalizationย <cit.>, G2P conversionย <cit.>, and phrase break predictionย <cit.>.
While PLMs have enabled significant advances in a wide range of TTS front-end tasks, a sizeable set of labeled, task-specific data is essentially required.
However, a well-structured, large-scale database, which directly affects model quality, carries a tremendous cost in linguistic annotation and is notoriously tedious and subjective to build <cit.>.
Building such a database separately for each language or task is even more challenging in practice.
To sidestep this data scarcity issue, current research on the TTS front-end is actively conducted on manually constructed, large-scale monolingual datasets or on a handful of resource-rich languages, such as English.
One potential approach to address this challenge is to support relatively low-resource languages via cross-lingual transfer from a high-resource language. This approach can be effective in reducing the cost and time of data annotation for low-resource languages, while improving their performance <cit.>.
In this paper, we empirically investigate the effectiveness of zero-shot and few-shot cross-lingual transfer for phrase break prediction in four Indo-European languages: English, French, Spanish, and Portuguese.
We explore this question using DistilmBERTย <cit.>, which is one of the mainstream pre-trained multilingual language models to utilize multilingual representation space.
Our manually annotated dataset consists of one high-resource language and three other relatively low-resource languages, which are approximately 12%-16% in size compared to the English dataset.
We consider two different transfer learning settings and compare the performance of 1) zero-shot, cross-lingual โ fine-tuning on one source language and testing on different target languages, and 2) few-shot, cross-lingual โ fine-tuning on one source language and a few target language instances, and testing on the target language.
Our contributions are summarized as follows:
* This is the first study to investigate the effectiveness of using a cross-lingual transfer learning framework based on a pre-trained multilingual language model for phrase break prediction in low-resource languages.
* To verify its efficacy in transfer settings, we conducted cross-lingual transfer experiments in both zero-shot and few-shot settings. The experimental results showed that zero-shot, cross-lingual models are not enough to resolve the lack of large-scale labeled data in their current state but that the few-shot models' performance was comparable to or slightly better than that of the monolingual models.
* We discovered that DistilmBERT, a pre-trained multilingual language model, can quickly leverage multilingual representation space, adapting its knowledge from high-resource languages to low-resource ones.
ยง RELATED WORK
ยง.ยง Phrase Break Prediction
A phrase break refers to a pause or boundary between phrases, inserted mainly for intonational or breathing reasons, and plays an important role in comprehending the structure of sentences.
In TTS systems, phrase break prediction can be one of the most important front-end modulesย <cit.> along with text normalizationย <cit.> and G2Pย <cit.>.
Although previous studies have indicated that the use of a deep learning-based model architectureย <cit.> in phrase break prediction results in better performance than traditional methods such as a hidden Markov model (HMM) based modelย <cit.>, most models are trained on a large-scale monolingual dataset.
Futamata et al.ย <cit.> employed contextual text representations obtained from the BERT model pre-trained on Japanese Wikipedia, along with various linguistic features such as part-of-speech (POS) and dependency tree (DEP), and showed significant improvement compared to the conventional methods.
However, these approaches for building phrase break prediction models typically demand a substantial amount of labeled data, which can be a significant challenge for low-resource languages.
ยง.ยง Cross-lingual Transfer
Most supervised learning-based models heavily rely on a large amount of labeled data to achieve optimal performance, which can be challenging due to the data scarcity issue.
To resolve this issue, cross-lingual transfer from high-resource to low-resource languages can be employedย <cit.>.
Cross-lingual transfer is a strategy by which a model can solve diverse downstream tasks in a resource-poor target language despite receiving only a few examples, or none at all, in that language during training.
Typically, a model is pre-trained using a large corpus of monolingual data in different languages, including more than 100 diverse languages, and can also make use of multilingual corpora. Then, the model is fine-tuned with a specific target languageย <cit.>.
In recent studies, cross-lingual transfer in NLP comes with techniques that rely on continuous cross-lingual representation spaces for different languagesย <cit.> or multilingual transformer models that are pre-trained on large, multilingual corpora through the language modeling objectivesย <cit.>.
Due to the advantages of using a shared representation, multilingual language model-based zero-shotย <cit.> or few-shotย <cit.> cross-lingual transfer studies are making progress.
ยง DATASET DESCRIPTION
To assess the cross-lingual transfer learning framework for phrase break prediction, we collected utterances from various domains in four languages: English (EN), French (FR), Spanish (ES), and Portuguese (PT).
These languages share typologically similar features: they belong to the Indo-European language familyย <cit.>, use the Latin alphabetย <cit.>, and have the same subject-verb-object (SVO)ย [
There are six basic categories of word order found among the various human languages: subject-verb-object (SVO), subject-object-verb (SOV), object-verb-subject (OVS), object-subject-verb (OSV), verb-object-subject (VOS), and verb-subject-object (VSO) <cit.>.
] word orderย <cit.>.
To prevent bias towards specific domains in our datasets, we collected utterances from diverse sources, such as news articles, blog posts, community websites, and the spoken language corpus.
We then manually curated utterances containing sentences that satisfy the following criteria: 1) complete sentence structure, and 2) a minimum of 4 words and a maximum of 25 words per sentence.
Each utterance contains several pieces of phrasing information, mainly comprising intonation phrases (IP), accent phrases (AP), and the end of the sentence (SB). An IP refers to a phonetic pause inserted between phrases, and an AP refers to a delimitation.
Following the annotation method of Futamata et al.ย <cit.>, we constructed a phrase break dataset in which phrasing information was manually annotated by a linguistic expert. Each silence between words and word transitions in the recorded speech of collected utterances was labeled as a phrase break.
Since some text preprocessing techniques for a particular language may degrade performance in other languagesย <cit.>, we applied only minimal preprocessing.
For tokenization, we used a shared WordPieceย <cit.> vocabulary instead of language-specific tokenization.
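One plausible way to combine the shared WordPiece tokenization with word-level break labels is sketched below; the label inventory, the choice to attach each break label to the word-final subword, and the use of -100 to mask other positions are our illustrative assumptions rather than the exact procedure used here.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
words = ["bonjour", "tout", "le", "monde"]
breaks = ["AP", "NB", "NB", "SB"]                 # hypothetical per-word break labels
label2id = {"NB": 0, "AP": 1, "IP": 2, "SB": 3}   # hypothetical label inventory

enc = tok(words, is_split_into_words=True)
word_ids = enc.word_ids()
labels = []
for i, wid in enumerate(word_ids):
    if wid is None:
        labels.append(-100)                        # special tokens, ignored by the loss
    elif i + 1 < len(word_ids) and word_ids[i + 1] == wid:
        labels.append(-100)                        # non-final subword of a word
    else:
        labels.append(label2id[breaks[wid]])       # break label on the word-final subword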
Most of the utterances in our dataset, around 69,000, are in English, whereas the other three languages have much fewer scripts, roughly 12%-16% of the English dataset, primarily due to the higher cost of the annotation required.
Each utterance consisted of an average of 18.27 (SD = 10.58) subwords, and the average length of a subword was 3.38 (SD = 2.14).
In total, our dataset contained 98,592 utterances with 1.8 million subwords in four languages.
Tableย <ref> shows the distribution of our dataset by language.
ยง ZERO-SHOT TRANSFER
First, we focus on zero-shot, cross-lingual transfer for phrase break prediction in four languages: training on one language and testing on other unseen languages.
ยง.ยง Experimental Setup
From our manually-curated dataset, we randomly select 6,000, 200, and 200 utterances as training, validation, and test sets for each language, respectively.
We use the shared multilingual WordPiece vocabulary for utterances across all languages and take sequences of subwords as model input.
We follow the same architecture as DistilmBERTย <cit.>, which is a distilled version of multilingual BERT (mBERT)ย <cit.> trained on monolingual corpora in 104 languages, including English, French, Spanish, and Portuguese. In detail, the model consisted of 6 transformer layers with 12 heads in each layer, 768 hidden dimensions, making up 134M parameters[
Note that DistilmBERT has relatively smaller parameters compared to the 177M parameters of mBERT-base and is about twice as fast as mBERT-baseย <cit.>. In our preliminary experiments, we confirmed that the distilled model was enough to effectively train for phrase break prediction, our downstream task, in various languages.
].
DistilmBERT is fine-tuned for 5 epochs with a maximum sequence length of 128 subword tokens to accommodate relatively long utterances[
DistilmBERT was trained on a single NVIDIA Tesla V100 GPU for every cross-lingual transfer experiment including subsequent few-shot settings.
].
To select the best hyperparameters, we use the validation set performance.
We grid-search for the following hyperparameter values and select those in bold as our best: a batch size from {16, 32}, a learning rate of Adam optimizer <cit.> from {2e-5, 5e-5, 5e-6}, and a dropout rate of the last layer from {0.1, 0.2}.
The loss function is cross-entropy loss, and we measure the model performance using a macro-averaged F1 score.
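A minimal fine-tuning sketch with these selected hyperparameters (Adam, learning rate 2e-5, dropout 0.1, 5 epochs, sequences of up to 128 subwords) is given below; it casts phrase break prediction as token classification over DistilmBERT and replaces the real dataset with a single toy batch, so the data handling is purely illustrative.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tok = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-multilingual-cased", num_labels=4)   # number of break classes is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

# One toy batch standing in for the real phrase-break training set.
enc = tok(["this is a toy sentence"], return_tensors="pt",
          padding="max_length", truncation=True, max_length=128)
labels = torch.zeros_like(enc["input_ids"])                # dummy subword-level labels

model.train()
for epoch in range(5):
    out = model(**enc, labels=labels)                      # token-level cross-entropy loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()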
ยง.ยง Results and Analysis
Tableย <ref> presents the macro-averaged F1 scores for different training and test languages, with rows and columns representing each language, respectively. The numbers underlined refer to the results in the monolingual setting.
As expected, the performance drops in zero-shot results for all target languages compared to the monolingual results, with the degree of the decline varying significantly across languages.
For example, when training on Spanish and testing on French, we observe the largest performance drop of 26.19%. On the other hand, when training on Portuguese and testing on English, it is only 3.34%, which is the smallest drop, and very close to the result of monolingual models.
Notably, given the monolingual results (underlined numbers in Tableย <ref>), training and testing in English produces the poorest performance compared to other languages, even though English has the highest proportion of pre-training corpora used in multilingual model training.
These results imply that zero-shot cross-lingual transfer, even without using a single target language instance, can mitigate the data scarcity issue to some extent.
However, it is also clear that it cannot achieve the same level of performance as monolingual models, which could be a feasible alternative for real-world TTS front-end applications.
Therefore to further explore beyond the limitation of a zero-shot setting, we proceed to investigate cross-lingual transfer in few-shot settings in the following section.
ยง FEW-SHOT TRANSFER
To improve the inadequate results in zero-shot transfer, we now investigate the effectiveness of few-shot, cross-lingual transfer with a modest number of target language instances in conjunction with the source language data. Our objective is to narrow the performance gap with the monolingual model.
ยง.ยง Experimental Setup
Unlike in zero-shot transfer, we employ a two-step fine-tuning approach in this setting.
For the first fine-tuning step, we consider two different scenarios โ English-Only and Augmented โ to analyze each few-shot, cross-lingual transfer performance.
In both scenarios, we expand a source language dataset in English to 60,000 examples to take advantage of the abundance of available data.
For the first scenario (English-Only), we use one of the general cross-lingual transfer methods, which fine-tunes the model solely on a large amount of English data.
Additionally, we consider another scenario (Augmented) based on the generally accepted correlation between larger dataset size and better performanceย <cit.>.
In this scenario, English and all other languages except for the target language are included during the training.
Note that the datasets used for training, validation, and testing are identical to those used in the zero-shot experiment.
This approach reflects a realistic scenario that utilizes all available datasets.
After the first fine-tuning step, the second fine-tuning step is conducted in the same manner for both scenarios.
During this step, we progressively increase the size of the target language examples k in powers of 2 as follows: k ∈ {4, 8, 16, ..., 4096}.
Note that the model architecture and hyperparameters used in this experiment are the same as those in the zero-shot setting.
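The overall schedule of this two-step procedure can be summarized as follows; fine_tune and macro_f1 are placeholders for the training loop and evaluation sketched earlier, and restarting each k-run from the step-one checkpoint is our reading of the setup rather than a stated detail.

import copy

def fine_tune(model, examples):
    # Placeholder: run the token-classification training loop sketched above.
    return model

def macro_f1(model, examples):
    # Placeholder: compute the macro-averaged F1 score on the given examples.
    return 0.0

base_model = {}                                     # stands in for pretrained DistilmBERT
source_data, target_data, target_test = [], [], []

source_model = fine_tune(base_model, source_data)   # step 1: English-Only or Augmented source
scores = {}
for k in [2 ** i for i in range(2, 13)]:            # k = 4, 8, ..., 4096
    model_k = copy.deepcopy(source_model)            # restart from the step-1 weights
    scores[k] = macro_f1(fine_tune(model_k, target_data[:k]), target_test)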
ยง.ยง Results and Analysis
The overall results of few-shot, cross-lingual transfer in both scenarios across all languages are presented in Figureย <ref>.
The results depicted in Figureย <ref> indicate consistent and significant improvements in performance across all languages and scenarios, as the number of target language instances represented by k increases.
Even with very few instances, less than 100 in total, substantial performance gains are observed in comparison to zero-shot setting in all cases.
We can also observe that when trained on only a few target examples (i.e., from 4 to 256), the score rapidly increased during the initial stages, then gradually slowed down as more examples were added, and eventually converged.
Interestingly, in some cases, the performance even declined with a larger number of examples, particularly when k was greater than 2,000.
Considering the results in Figureย <ref>, we conjecture that DistilmBERT, a pre-trained multilingual language model, can leverage multilingual representation space effectively and transfer its knowledge of the phrase break prediction task to a new unseen language.
ยง.ยง.ยง Results in English-Only Scenario
Tableย <ref> shows a detailed breakdown of the few-shot transfer results for the English-Only scenario.
To highlight the performance changes with respect to k, we included the zero-shot performance from a model exclusively fine-tuned on an English dataset, as well as the few-shot performance, and the differences between them.
As shown in Tableย <ref>, the zero-shot transfer performance is enhanced in French and Portuguese and remains nearly the same in Spanish since the size of the English training set increases in the first fine-tuning step.
This implies that training on a larger dataset can lead to learning more knowledge about the phrase break prediction task.
We notice a consistent pattern in which performance gradually improves and achieves comparable results to a monolingual model trained with supervised learning.
Despite a marginal difference of 0.48, the French few-shot, cross-lingual model (91.89, k=2,048) scored slightly lower than the monolingual model (92.37), whereas the Spanish and Portuguese models outperformed when k=2,048 and k=256, respectively.
Surprisingly, in the case of Portuguese, the model obtained the best F1 score when only 256 training instances were used.
Furthermore, the performance in Spanish improved to 91.99 even when k was set to 4,096.
ยง.ยง.ยง Results in Augmented Scenario
Tableย <ref> presents the results of few-shot transfer for the Augmented scenario.
As in the previously presented findings, we can see a steady improvement as k increases, following a similar pattern to the English-Only scenario.
On the other hand, zero-shot performance generally decreased by less than 4%.
Interestingly, even with a relatively small amount of high-resource language data (as shown in Tableย <ref>), the performance in Spanish and Portuguese was lower than that of zero-shot transfer.
This demonstrates that the model is not able to generalize well across different languages during the initial fine-tuning step, as the datasets of different languages and sizes are trained simultaneously.
Consequently, this affects the next fine-tuning step, where the target language examples are added to the training set.
As can be seen in Figure <ref>, languages with lower zero-shot performance also show poorer performance than in the English-Only scenario.
A notable observation is that, as k reaches a large size (greater than 1,000), comparable performance is reached, and even beyond that point, performance gradually improves.
ยง.ยง.ยง Analysis
As we have conducted few-shot transfer experiments in two scenarios, we can verify that cross-lingual transfer is possible in a phrase break prediction task and report considerable gains with only a small amount of labeled data.
Since PLMs were introduced, training a large amount of labeled data has become more essential, as it always leads to better performance on various downstream tasks, including TTS front-end modules.
However, our findings demonstrate that few-shot transfer can address the lack of labeled data and narrow the gap to monolingual models, while zero-shot transfer, the most cost-effective approach that uses no target examples at all, cannot be used as it is.
Furthermore, the fact that a distilled version of the multilingual language model was enough to leverage its representation ability will be more beneficial for real-time TTS applications.
ยง CONCLUSION
In this study, we present the first comprehensive research to verify the effectiveness of cross-lingual transfer for phrase break prediction in less-resourced languages using a pre-trained multilingual language model.
We investigate zero-shot and few-shot transfer learning settings to find an inexpensive alternative for low-resource language adaptation.
While our experimental results in the zero-shot setting did not rise to the performance of the monolingual model trained on a large-scale dataset with consistent labeling, they demonstrated the capability of the multilingual model to perform the transfers required for this task.
Moreover, we found that few-shot transfer using only a small number of annotated examples led to performance comparable to that of the monolingual model.
Our findings indicate that a few-shot, cross-lingual model can be an effective solution to address the challenge of limited annotated data in TTS front-end applications.
In future work, we plan to investigate the efficacy of cross-lingual transfer for other TTS front-end tasks, such as grapheme-to-phoneme (G2P) conversion, in languages with limited resources.
Furthermore, in future studies, we will explore whether the transfer learning approach works best even between typologically distinct languages, such as English and Korean.
ยง ACKNOWLEDGEMENTS
This work was supported by Voice&Avatar, NAVER Cloud, Seongnam, Korea.
|
http://arxiv.org/abs/2306.10760v1
|
20230619080335
|
On the Theory of Difference Frequency Quantum Oscillations
|
[
"Valentin Leeb",
"Johannes Knolle"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"cond-mat.dis-nn",
"cond-mat.mtrl-sci",
"cond-mat.other"
] |
Technical University of Munich, TUM School of Natural Sciences, Physics Department, 85748 Garching, Germany
Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, 80799 Mรผnchen, Germany
Technical University of Munich, TUM School of Natural Sciences, Physics Department, 85748 Garching, Germany
Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, 80799 Mรผnchen, Germany
Blackett Laboratory, Imperial College London, London SW7 2AZ, United Kingdom
Quantum oscillations (QO) describe the periodic variation of physical observables as a function of inverse magnetic field in metals. The Onsager relation connects the basic QO frequencies with the extremal areas of closed Fermi surface pockets, and the theory of magnetic breakdown explains the observation of sums of QO frequencies at high magnetic fields. Here we develop a quantitative theory of difference frequency QOs in two- and three-dimensional metals with multiple Fermi pockets with parabolic or linearly dispersing excitations. We show that a non-linear interband coupling, e.g. in the form of interband impurity scattering, can give rise to otherwise forbidden QO frequencies which can persist to much higher temperatures compared to the basis frequencies. We discuss the experimental implications of our findings for various material candidates, for example multi-fold fermion systems, and the relation to magneto intersubband oscillations known for coupled two-dimensional electron gases.
On the Theory of Difference Frequency Quantum Oscillations
J. Knolle
July 31, 2023
===========================================================
ยง INTRODUCTION
Quantum oscillation measurements have been a standard tool to determine electronic properties of metals since their discovery in bismuth by de Haas and van Alphen in 1930ย <cit.>. QO were first reported in the magnetization, but soon afterwards magneto-transport measurements, known as the Shubnikovโde Haas effect, proved to be quantitatively similarย <cit.> as both originate from the discreteness of Landau levels of electrons in a magnetic fieldย <cit.>. Onsager later realized that the oscillation frequency of the magnetization or the conductivity as a function of the inverse magnetic field is directly related to the metal's Fermi surfaceย <cit.>, before Lifshitz and Kosevich (LK) completed the canonical theory of QOs by connecting the temperature dependence of the amplitude of QOs to the effective mass of the electronsย <cit.>. Hence, QO measurements can detect the size of even tiny Fermi pockets, the effective mass of electrons and the scattering rate via the Dingle temperatureย <cit.>.
Deviations to the standard theory of QO are rare and exotic. For example, the observation of anomalous QOs in bulk insulatorsย <cit.> and heterostructuresย <cit.> is believed to be a result of strong electron correlations and has recently led to a flurry of new theoretical proposals beyond the standard LK theoryย <cit.>. In contrast, it is well established and long understood that QO frequencies beyond the simple Onsager rule can appear in strong magnetic fieldsย <cit.> where the basic semiclassical description of electrons, simply traveling along the edge of the cross sectional area of the Fermi surface, breaks down and tunneling between distinct Fermi pockets becomes importantย <cit.>. In this magnetic breakdown theory a whole zoo of combinations of frequencies can arise, depending strongly on the gaps between different semiclassical orbits and Berry phase effectsย <cit.>.
A new QO frequency appears, for example, in Weyl semimetals associated with the difference of two electron- and hole-type Fermi surface areasย <cit.>, which can be understood via magnetic breakdown in the form of Klein-tunneling between the counter-propagating semiclassical orbits. However, not all possible combinations of frequencies appear within the semiclassical breakdown theory, e.g. a difference frequency originating from two parallel electron-like (or two hole-like) pockets, see Fig.<ref>ย (c), seems impossible.
In this work we study the effect of non-linear interband coupling on the oscillating part of the conductivity in multi-band metals. We show in detail how interband impurity scattering can lead to sum and difference frequencies in the Shubnikov–de Haas effect. Unlike magnetic breakdown, the appearance of these frequencies is not induced by the magnetic field but triggered by self-energy effects originating from the non-linear coupling of distinct Landau quantized pockets. Remarkably, the emerging difference frequency is often only weakly temperature damped such that it can persist to much higher temperatures than its semiclassical basis frequencies.
In the context of two dimensional (2D) electron gases (2DEG) a related phenomenon, dubbed magneto intersubband oscillations, has been studied previouslyย <cit.>. The unusual temperature stability is in accordance with experimental observations on 2DEGsย <cit.> and has also been predicted to appear in quasi-2D, layered metalsย <cit.>. Here, we generalize the theory of QO difference frequencies to generic parabolic and linearly dispersing band structures and establish that they even appear for isotropic 3D systems, which is of experimental relevance to a number of materials classes, e.g. multifold fermion systemsย <cit.> as very recently observed in CoSiย <cit.>.
ยง SUMMARY OF RESULTS
ยง.ยง Summary and Review
We consider generic two band Hamiltonians in 2D and 3D, the exemplary band structures are shown in Fig.ย <ref>ย (a) and (b). The common feature of our models is the existence of two Fermi surfaces, see Fig.ย <ref>ย panelย (c). Note that, we do not expect any magnetic breakdown between the two pockets because the velocity of the semiclassical orbits has only parallel components.
The key ingredient for the appearance of a difference frequency is a non-linear coupling of the bands which we study in terms of generic impurities. In addition to the standard intraband scattering channels ฮฑ, the interband ฮฒ-channel allows electrons to scatter between the two distinct Fermi surfaces, see Fig.ย <ref>ย (c). We concentrate on short-ranged impurities, which permit an analytical calculation of the conductivity and capture the main contribution for dominant s-wave scattering.
The emergence of sum and difference frequencies can be seen by considering the generic formula of the conductivity
σ = ∫_-∞^∞ dε [-n_F'(ε)] σ̃(μ+ε),
which can be expressed as a convolution of the derivative of the Fermi distribution function -n_F'(ε) = 1/[4T cosh^2(ε/2T)] (with chemical potential μ and temperature T) and the conductivity kernel σ̃(E). The kernel includes a sum of various contributions from the two bands and different harmonics. Concentrating on the oscillating components only, one can write each as a product of a non-oscillating term g and an oscillating function <cit.> as
σ̃(E) = cos(f(E)) g(E).
In this expression f(μ) = 2π F/B + φ includes the dependence on the cross-sectional area of a Fermi pocket F(μ) and a phase φ.
First, let us recall how to obtain the canonical LK result for QOs via (<ref>) and (<ref>). Using the fact that the chemical potential μ is large compared to the temperature, one expands the kernel in a region k_B T around E = μ. The resulting integral can be evaluated analytically, see Appendix A, and one finds the standard behavior
σ = cos(f(μ)) g(μ) R_LK(π f'(μ) T)
with the famous LK temperature dependence
R_LK(x) = x/sinh(x).
We note that higher orders in the expansion k_B T/μ can lead to non-LK behavior, as we discuss in Appendix <ref>.
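As a quick numerical cross-check of this step (ours, with k_B = ħ = 1 and a linearized F(μ+ε) ≈ F + F'ε, g set to 1), the convolution of the oscillatory kernel with -n_F' indeed reproduces the LK damping of the first-harmonic amplitude:

import numpy as np

def lk_amplitude(F, dF_dmu, B, T, n=20001):
    eps = np.linspace(-30 * T, 30 * T, n)
    w = 1.0 / (4 * T * np.cosh(eps / (2 * T)) ** 2)       # -n_F'(eps)
    kernel = np.cos(2 * np.pi * (F + dF_dmu * eps) / B)    # cos(f(mu + eps))
    return np.sum(w * kernel) * (eps[1] - eps[0])

F, dF_dmu, B, T = 1000.0, 50.0, 1.0, 0.01
x = 2 * np.pi ** 2 * dF_dmu * T / B
print(lk_amplitude(F, dF_dmu, B, T))                        # numerical convolution
print(np.cos(2 * np.pi * F / B) * x / np.sinh(x))           # cos(2*pi*F/B) * R_LK(x)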
Next, let us discuss the appearance of a difference frequency via impurity scattering. Intraband contributions can lead to oscillations in the prefactor g, which is normally assumed to be non-oscillating, as well as to oscillations of the chemical potential <cit.>. However, these quantities always oscillate with the same frequency F as the basic QO, i.e., their oscillations depend on the cross-sectional area of the Fermi surface associated with the same band. Therefore, these perturbative effects can only change the non-universal part of the amplitude of the QOs.
The key observation is that, due to the interband scattering channel β, the quantities g and μ can also oscillate with the frequency associated with the other Fermi pocket! Thus, the convolution in (<ref>) leads to an interference of the different QO frequencies F_1 and F_2 in the conductivity. In the second harmonic this leads to two new frequencies: the sum and the difference of the two basis frequencies
σ_± ∝ cos(2π (F_1 ± F_2)/B) R_D1 R_D2 R_LK(2π^2 (m_1 ± m_2) T/(eB)),
which is the main result of our work.
Note that the sum and difference frequencies resemble the standard LK form with the generic damping factor R_LK(2π^2 (∂F_1/∂μ ± ∂F_2/∂μ) T/B), and both are a second-order effect in the Dingle factor R_D, which describes damping from impurity scattering as discussed below. Hence, their Dingle temperature is in either case a sum of the two basis Dingle temperatures, weighted with their effective masses. Strikingly, the difference frequency can persist to much higher temperatures than the basis frequencies. Following (<ref>), the sum and difference frequencies decay for parabolic bands with the sum and the difference of the effective masses of the two bands. If the effective masses around the Fermi energy are equal, i.e. the difference of the cross-sectional areas of the Fermi surfaces does not change as a function of the chemical potential, the difference frequency acquires no temperature smearing at all! Note that the temperature dependence of sum and difference frequency inverts for coupled electron/hole pockets.
For relativistic dispersions with Dirac/Weyl-like excitations, similar expressions to (<ref>) are obtained. For these linearly dispersive bands with Fermi velocity v_λ, the difference frequency remains, however, slightly temperature dependent even for equal Fermi velocities. The reason for this is the quadratic dependence of the extremal cross section of the Fermi surface on the chemical potential.
We note that beyond second order in the Dingle factor, higher combination frequencies can appear. Any integer combination k_1 F_1 + k_2 F_2 with k_1, k_2 ∈ ℤ is allowed and comes with the Dingle factors R_D1^|k_1| R_D2^|k_2|. Strikingly, all combinations with negative k_1, k_2, including the difference frequency, are absent to leading order in the density of states and therefore also in the de Haas–van Alphen effect.
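A simple numerical illustration of how such combination frequencies show up in practice (ours, not taken from the paper): a product of two damped oscillations in 1/B, as generated at second order in the Dingle factors, carries spectral weight at F_1 - F_2 and F_1 + F_2.

import numpy as np

F1, F2, TD = 500.0, 430.0, 2.0                       # schematic frequencies and Dingle temperature
invB = np.linspace(1 / 12.0, 1 / 4.0, 8000)          # uniform grid in 1/B
RD = np.exp(-2 * np.pi ** 2 * TD * invB)             # schematic Dingle damping (m/e set to 1)
trace = RD ** 2 * np.cos(2 * np.pi * F1 * invB) * np.cos(2 * np.pi * F2 * invB)

freqs = np.fft.rfftfreq(invB.size, d=invB[1] - invB[0])
spectrum = np.abs(np.fft.rfft(trace * np.hanning(invB.size)))
# The spectrum peaks near F1 - F2 = 70 and F1 + F2 = 930, mirroring sigma_+- above.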
ยง.ยง Relation to earlier work
The insight that interband scattering can lead to a sizable temperature stable difference frequency in the conductivity is well known for 2DEGs <cit.>. Furthermore, similar effects are known to appear in quasi-2D layered parabolic metals <cit.>, where impurities couple the different layers. Systems where nearly equal effective masses have lead to the observation of a temperature stable difference frequency include GaAs heterostructures <cit.>, metals with bilayer crystal structure <cit.> and organic metals <cit.>. Here, we first extend the 2D theory to Dirac systems with linear dispersions. The second and main new finding of our work is to establish that difference frequency oscillations may also appear for isotropic 3D systems with generic dispersions. Our work provides a framework for the theory of difference frequency QOs being applicable to generic band structures in any dimension. We also provide qualitative arguments for the behavior of higher harmonic frequencies and their unusual temperature dependencies.
ยง.ยง Outline
The remainder of our work is organized as follows: In sectionย III we first rederive the oscillating part of the conductivity for parabolic dispersions in 2D and obtain similar results as Ref.ย <cit.>. We discuss in detail the interband scattering contribution to the self-energy, before we generalize the results to 3D. In sectionย IV we proceed with analogous calculations for relativistic fermions, e.g. effective descriptions of Dirac and Weyl/multifold fermion materials. In sectionย V we show that the difference frequency is to leading order absent in the de Haasโvan Alphen effect. Finally, in sectionย VI we conclude with a discussion of our results and present exemplary 3D and/or Dirac materials in which we expect a temperature stable difference frequency to be observable.
ยง PARABOLIC DISPERSIONS
ยง.ยง Two dimensions
QOs in the conductivity can be seen in nearly every known metal, regardless of its specific features like interactions or spin–orbit coupling. Hence, our models should be seen as effective descriptions of the excitations around the Fermi energy which emerge after incorporating all microscopic details. For simplicity we start by considering a generic two-band Hamiltonian
H = ∑_k,λ ϵ_λ(k) c^†_k,λ c_k,λ + ∑_r U(r) ĉ^†_r Λ ĉ_r
in 2D. The quadratically dispersive bands λ with dispersion ϵ_λ(k) = k^2/(2m_λ) - W_λ have different effective masses m_λ and are shifted with respect to each other by W_1 - W_2, see Fig. <ref> (a). They can in principle also be shifted in momentum space with respect to each other, modeling different electron or hole pockets. The electrons ĉ_r = (c_r,1, c_r,2)^T can scatter on impurities located at positions r_i. The impurities are distributed randomly and uniformly such that the system remains on average translationally invariant, which we model by the short-ranged potential
U(r) = U_0 ∑_r_i δ(r - r_i). The key ingredient is the scattering vertex Λ, which has intraband channels α_λ and an interband channel β,
Λ = ( √(α_1)  √(β) ; √(β)  √(α_2) ),
and allows electrons to scatter between the distinct Fermi surfaces. The dimensionless numbers √(α_λ) and √(β) quantify the effective scattering rates, and we have absorbed the complex phase of √(β) in the definitions of c and c^†. The following calculation follows similar steps as in Ref. <cit.>.
In order to study QOs, we introduce a quantizing magnetic field B = B ê_z, perpendicular to the 2D system. The vector potential is chosen in the Landau gauge A = (-By, 0, 0)^T. Peierls substitution leads to LLs for each band of the form ϵ_λ(l) = ω_cλ(l+1/2) - W_λ, where the cyclotron frequency is ω_cλ = eB/m_λ. The field operators now carry the following quantum numbers: band index λ, LL index l, and the trivial momentum k_x. The wavefunctions are the usual ones of a shifted harmonic oscillator centered at y_0 = k_x/(eB).
ยง.ยง.ยง Conductivity
Our objective is to compute QOs in the conductivity and to proceed analytically we concentrate on the transversal component ฯ_xx. Following the Kubo formula <cit.> the conductivity kernel appearing in (<ref>) is given by
σ̃_xx(E) = e^2/(π L_x L_y) Tr_l,k_x,λ [v_x ℑG(E) v_x ℑG(E)]
where G(E) is the retarded, impurity-averaged Green's function G_λ,l(E) = (E - ϵ_λ(l) - Σ_λ(E))^-1 and v_x is the velocity operator. For short-range impurity scattering, the self-energy Σ_λ does not depend on any of the electron quantum numbers except the band index λ, see section <ref>. Furthermore, we assume in the notation for G that the self-energy remains diagonal in the band index λ, which is an approximation discussed below in section <ref>.
In order to reduce the complexity of the notation in this manuscript, we will use the dimensionless energy ξ_λ = (E + W_λ)/ω_cλ, which will always appear together with the real part of the self-energy, and define ξ_λ^* = ξ_λ - ℜΣ_λ/ω_cλ. Note that, because the Fermi distribution function n_F'(ξ) is strongly peaked in a region k_B T around μ, and as μ/ω_cλ ≫ 1, we may take ξ_λ^* → ∞ for all integration boundaries. Furthermore, we denote the imaginary part of the dimensionless self-energy by Γ_λ = -ℑΣ_λ/ω_cλ and introduce λ̄, which takes the value 2 (1) if λ is 1 (2).
Inside a magnetic field the velocity operator is quantized as v_x = √(eB)/(√2 m_λ) (a^† + a), where a^† and a are the ladder operators of the shifted harmonic oscillator. After evaluation of the trace, the conductivity kernel is
σ̃_xx(E) = σ_0 N_Φ/(2 L_x L_y) ∑_λ,l=1 l ℑG_λ,l(E) ℑG_λ,l-1(E),
with σ_0 = 2e^2/π. The sum over Landau levels can be transformed into a sum over harmonics using the standard Poisson summation formula
∑_l=0^∞ f(l) = ∑_k=-∞^∞ ∫_0^∞ dx e^(i 2π k x) f(x).
The resulting integral can be solved exactly by extending the lower boundary to -ξ_λ^* → -∞ and then performing complex contour integration. The final conductivity kernel takes the form
σ̃_xx(E) = σ_0 ∑_λ ξ_λ^* |Γ_λ(ξ)|/(1 + 4Γ_λ(ξ)^2)
× (1 + 2∑_k=1^∞ (-1)^k cos(2πk ξ_λ^*) R_λ(ξ)^k).
The conductivity can then be expanded as a power series in the damping factor
R_λ(ξ) = exp(-2π |Γ_λ(ξ)|).
The standard canonical QOs can now be recovered by setting ℑΣ_λ to a constant, the empirical Dingle temperature T_D,λ, such that R_λ(ξ) becomes the well-known Dingle damping factor <cit.>
R_D,λ = exp(-2π^2 T_D,λ/ω_cλ).
One would then follow our discussion in sectionย <ref> ((<ref>) to (<ref>)) to evaluate the convolution with the Fermi distribution function (<ref>) to obtain QOs of the well known LK form, which are also in accordance with the semiclassical Onsager relation. In the next section we will go beyond the simple assumption that the self-energy is a mere constant, but show that it can acquire oscillations with two basis frequencies due to interband impurity scattering.
ยง.ยง.ยง Self-energy
In general, impurities lead to spectral broadening of the LLs captured by the band-dependent Dingle temperatures T_D,ฮป which are normally of the order of a few Kelvin. We will show in the following that the band-dependent self-energy can acquire oscillations with the frequencies associated with both Fermi surfaces.
To calculate the self-energy we use the self-consistent Born approximation (SCBA). A graphical representation of contributing irreducible diagrams is shown in Fig.ย <ref>. The first order contributions are scattering events on a single impurity. Already at this level it is obvious that the interband channels of the scattering vertex lead to a non-diagonal self-energy, see Fig.ย <ref>ย (b). This is due to the fact that the impurities do not conserve the quantum number ฮป.
The central approximation we make in order to allow analytic progress is to neglect the off-diagonal elements of the self-energy, Σ_λλ̄ = 0, which is rigorously justified at least in the limit T_D,λ/|W_1-W_2| ≪ 1 <cit.>. Then the effect of the first-order diagrams can be incorporated by simply renormalizing W_λ - n_imp U_0 √(α_λ) → W_λ. The second-order contributions are double scattering events on a single impurity. Due to the random and uniform distribution of the impurities, the self-energy can be expressed as an integral
Σ_λ = n_imp U_0^2/(L_x L_y) ∫ d^2r (α_λ G_λ(r, r, ξ) + β G_λ̄(r, r, ξ)),
with the full Green's function in real space defined by
G_λ(r, r', E) = ∑_l,k_x Ψ^*_λ,l,k_x(r') Ψ_λ,l,k_x(r)/(E - ϵ_λ(l) - Σ_λ(E)).
and the electron wave function given by
Ψ_λ,l,k_x(r) = e^(i k_x x)/√(L_x) φ_l(y - y_0).
Here, φ_l(y - y_0) are the eigenfunctions of a harmonic oscillator centered at y_0 = k_x/(eB). An explicit calculation of the Green's function for r = r' can now be carried out, see Appendix <ref>, and it turns out to be spatially invariant
G_λ(r, r, ξ) = -m_λ/2 (1 + 2∑_k=1^∞ (-1)^k e^(i 2π k (ξ_λ^* + i Γ_λ(ξ)))).
The evaluation of the spatial integral in (<ref>) is therefore trivial. (<ref>) can be obtained from (<ref>) by using the summation over k_x to integrate out the dependence on the wavefunction and transforming the sum over LLs l by Poisson summation to a sum over harmonics.
Motivated by the fact that the Dingle temperature T_D,λ is a measure of the total interactions in the system, we set π T_D,λ = |⟨ℑΣ_λ⟩|. Additionally, we introduce α̃_λ and β̃_λ as weights of the different contributions (consider them as renormalized values of α and β, see Appendix <ref> for details) and, for convenience, an operator A with the property A x_λ = x_λ̄ A and A = 1 if it is all the way to the right.
In this compact notation, we obtain the self-consistent equations for the self-energy
|ℑΣ_λ(ξ)| = π T_D,λ [1 + (α̃_λ + β̃_λ A) ∑_k=1^∞ (-1)^k cos(2πk ξ_λ^*) R_λ(ξ)^k]
ℜΣ_λ(ξ) = π T_D,λ (α̃_λ + β̃_λ A) ∑_k=1^∞ (-1)^k sin(2πk ξ_λ^*) R_λ(ξ)^k.
Although (<ref>) and (<ref>) still need to be solved for Σ_λ, they already show an intriguing property of the self-energy: the self-energy of an electron of type λ oscillates not only with the basis frequency dictated by its own Fermi energy (the α̃_λ contribution) but also with a frequency associated with the respective other Fermi surface (the β̃_λ contribution). This contribution is solely a result of interband scattering, which provides the non-linear coupling.
The self-consistent equations (<ref>) and (<ref>) can be solved by iterative insertion in the strong damping limit R_D,λ ≪ 1, which translates to T_D,λ ≳ ω_cλ. However, we argue that all results also hold qualitatively in the limit where R_D,λ approaches 1. This seems reasonable since higher-order effects are simply additive and therefore may change amplitude and phase factors of the oscillations but will not change the frequencies or their dependence on temperature.
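The iterative insertion can be mimicked numerically; the following sketch (ours) truncates the self-consistent equations at the first harmonic, neglects the real-part shift of ξ_λ for simplicity, and shows that for nonzero interband weight the broadening of band 1 oscillates with both ξ_1 and ξ_2.

import numpy as np

def broadening(xi1, xi2, g0=(0.3, 0.3), a=(0.6, 0.6), b=0.4, iters=20):
    # Gamma_lambda = |Im Sigma_lambda| / omega_{c,lambda}; g0 plays the role of pi*T_D/omega_c
    G1 = np.full_like(xi1, g0[0])
    G2 = np.full_like(xi2, g0[1])
    for _ in range(iters):
        o1 = -np.cos(2 * np.pi * xi1) * np.exp(-2 * np.pi * G1)   # k = 1 term of band 1
        o2 = -np.cos(2 * np.pi * xi2) * np.exp(-2 * np.pi * G2)   # k = 1 term of band 2
        G1 = g0[0] * (1 + a[0] * o1 + b * o2)                     # intra- plus interband feedback
        G2 = g0[1] * (1 + a[1] * o2 + b * o1)
    return G1, G2

xi = np.linspace(100.0, 110.0, 4000)
G1, G2 = broadening(xi, 0.8 * xi)        # two pockets with different oscillation frequencies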
ยง.ยง.ยง Quantum Oscillations and temperature smearing
We now solve the self-consistent equations for the self-energy up to second order in R_D,ฮป. Then we insert our findings in the conductivity kernel (<ref>) keeping only terms up to the second harmonic, i.e. second order in R_D,ฮป.
At second order, the interference of the various oscillating quantities leads not only to a change of the non-universal amplitudes but also to new frequencies. More precisely, the oscillations of the conductivity kernel interfere with the oscillating Dingle factor, an oscillating prefactor, and an oscillating contribution to the chemical potential. Since these oscillations all originate from oscillations of the self-energy, there is an interference of F_1 with F_2, which appears as oscillations of the form cos(2π[ξ_1 ± ξ_2]).
Various frequencies contribute to the conductivity but they all follow the generic description of (<ref>) such that the evaluation of the integrals can be carried out. The evaluation of the convolution with n_F', see (<ref>), leads to a LK temperature damping following (<ref>), details are given in the Appendixย <ref>.
Neglecting all non-oscillatory terms, the final result in 2D then reads
σ_xx/σ_0 = ∑_λ A_1Fλ cos(2π(μ+W_λ)/ω_λ) R_D,λ R_LK(2π^2 T/ω_λ)
+ ∑_λ A_2Fλ cos(4π(μ+W_λ)/ω_λ) R_D,λ^2 R_LK(4π^2 T/ω_λ)
+ A_+ cos(2π(μ+W_+)/ω_+) R_D,1 R_D,2 R_LK(2π^2 T/ω_+)
+ A_- cos(2π(μ+W_-)/ω_-) R_D,1 R_D,2 R_LK(2π^2 T/ω_-)
where ω_±^{-1} = ω_1^{-1} ± ω_2^{-1}, (μ+W_±)/ω_± = (μ+W_1)/ω_1 ± (μ+W_2)/ω_2, and the non-universal amplitudes A = A(μ) are evaluated at the Fermi energy/chemical potential and are presented in (<ref>) to (<ref>) of the Appendix. As expected, the sum and difference frequencies are only present for nonzero interband scattering, A_± ∝ β.
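To make the temperature dependence of the four contributions concrete, the short sketch below (an illustration with made-up cyclotron energies, not values from the text) evaluates the Lifshitz-Kosevich factors entering the last two lines; for equal effective masses, ω_1 = ω_2, the argument of the difference-frequency factor vanishes and R_LK stays at 1 for all temperatures.

import numpy as np

def R_LK(x):
    # Lifshitz-Kosevich factor x/sinh(x), with the limit R_LK(0) = 1
    x = np.asarray(x, dtype=float)
    return np.divide(x, np.sinh(x), out=np.ones_like(x), where=(x != 0))

omega1, omega2 = 1.0, 1.0                               # equal masses, illustrative units
T = np.linspace(0.0, 5.0, 6)
print(R_LK(2 * np.pi**2 * T * (1/omega1 + 1/omega2)))   # sum frequency: strongly damped
print(R_LK(2 * np.pi**2 * T * (1/omega1 - 1/omega2)))   # difference frequency: stays at 1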
We note that our results are in agreement with previous calculation on magneto inter-subband oscillations for 2DEGsย <cit.>. So far, our calculations are mainly a generalization from previous derivations as we considered different effective masses. In the remainder of this manuscript we establish that similar results hold also true in isotropic 3D metals and for relativistic dispersions.
ยง.ยง Three dimensions
The 2D calculations can be easily generalized to a generic three dimensional metal. The cross section of a Fermi surface of a 3D metal may be complicated such that for simplicity we assume that there exist only two electron or hole pockets between which electrons can be scattered by a channel ฮฒ. Note that within the second order SCBA this treatment can be easily generalized to multiple electron/hole pockets. We expand around the Fermi pockets assuming a local, rotational symmetric, quadratic dispersion such that momenta should be understood as crystal momenta with respect to the center of the Fermi pockets. The Hamiltonian is again given by (<ref>) but with three dimensional momenta.
The dispersion of the LLs is now continuous in z-direction ฯต_ฮป(l,k_z) = ฯ_cฮป (l+1/2) + k_z^2/2m_ฮป - W_ฮป. We can now show that the effect of the k_z-dependence of the dispersion only results in an additional phase of the QOs, similar to the canonical LK theory, see e.g. Ref.ย <cit.>.
In 3D the wavefunction now includes the additional factor e^{i k_z z}/√(L_z), but G(r,r,ξ) remains spatially invariant. We can simply change our equations from the 2D case to describe the 3D model by transforming W_λ → W_λ − k_z^2/(2m_λ) and integrating over k_z momenta. The resulting integrals are of the form
∫_{−√ξ}^{√ξ} dx (ξ − x^2)^n e^{2πi(ξ − x^2)} = J_n(ξ).
Using the usual approximation of a large Fermi energy, ξ ≫ 1, it can be shown that J_n(ξ) = (ξ^n/√2) e^{i(2πξ − π/4)}, see Appendix <ref>, where in general the phase consists of a variety of contributions making it non-universal. Details of the full calculation are presented in the Appendix <ref>.
The final result for the conductivity in 3D resembles the 2D one, (<ref>), with different non-universal amplitudes A and an additional phase φ inside the cos terms, see (<ref>) for the full expression of the QO or Table <ref> for a summary. We can conclude that sum and difference frequencies of two Fermi pockets are observable as long as the Fermi pockets are sufficiently strongly coupled.
ยง RELATIVISTIC DISPERSIONS
ยง.ยง Double Weyl model in 2D
A material class, which has all ingredients for temperature stable difference frequencies, are multifold fermion systemsย <cit.>. In these, bands which are parallel within a large region of the Brillouin zone arise out of symmetry arguments, e.g. in representatives of space group 198 <cit.>. However, the quasiparticles near the Fermi energy are often relativistic, hence possess a linear energy-momentum dispersion. In this section we show that sum and difference frequencies also emerge in this situation and follow the general form of (<ref>).
We first consider a model which consists of two Weyl cones labeled by ฮป=1,2 in 2D. The Hamiltonian can be written in the pseudospin basis, {ฮฆ}, and in the band basis, {ฮจ}, as
H_0 = ∑_{k,λ} Φ^†_{k,λ} (v_λ σ⃗·k − W_λ) Φ_{k,λ}
= ∑_{k,σ=±,λ} ε_{λ,σ}(k) Ψ^†_{k,σ,λ} Ψ_{k,σ,λ},
with ε_{σ,λ}(k) = σ v_λ|k| − W_λ, where σ_i are the Pauli matrices, i = x,y, and the subband index σ = ±1 labels the two subbands of each Weyl cone. The two cones are shifted in energy by W_1−W_2, see Fig. <ref> (b). The extremal cross-sectional Fermi surface looks the same as for quadratic bands in 2D or 3D (for fixed k_z), see Fig. <ref> (c). Note that we refer to the band structure as double Weyl cones, but it equivalently applies for all other linearly dispersive band structures, e.g. Dirac cones or band structures which can be effectively described by the double Weyl model around the Fermi energy.
The LLs of relativistic electrons are not equidistant but follow ฯต_ฯ,ฮป(l) = ฯฯ_c ฮปโ(l)-W_ฮป where the cyclotron frequency ฯ_cฮป=v_ฮปโ(2 e B) now depends on the Fermi velocity v_ฮป <cit.>. Their wavefunctions are symmetric and antisymmetric superpositions of neighboring harmonic oscillator levels with different pseudospin, see (<ref>).
As above, we add impurities to the system by adding a scattering potential
U = ∑_r U(r) Ψ^†_{r,σ,λ} (Λ_{λ,λ'} ⊗ δ_{σ,σ'}) Ψ_{r,σ',λ'}
to the Hamiltonian H = H_0 + U. The scattering vertex ฮโ (with ฮ from (<ref>)) features intracone intraband scattering (ฮฑโchannels) and intercone scattering (ฮฒโchannel) but does not have any intracone intersubband channels. The choice of a trivial intracone intersubband vertex is motivated by the graphene literature <cit.> and simplifies the calculation of the self-energy. As before, our central assumption in the analytical derivation will be that the self-energy remains diagonal in the cone index ฮป neglecting off-diagonal elements of the self-energy.
ยง.ยง.ยง Conductivity
The calculation of the conductivity for Weyl cones differs from that of parabolic bands. The velocity operator in the pseudospin basis reads simply v_x = ฯ_x. However we perform our calculations in the band basis with a magnetic field. In this case v_x couples different LLs as well as different bands. An evaluation of the trace in the conductivity kernel yields
σ̃_xx(E) = (e^2 N_Φ/(π V)) ∑_{l,λ} v_λ^2 (G_{l,+,λ} + G_{l,−,λ}) × (G_{l−1,+,λ} + G_{l−1,−,λ})
in agreement with e.g. Ref.ย <cit.>.
The sum over LLs can be transformed into a sum over harmonics by Poisson summation as before and we obtain
σ̃_xx(ξ) = (4e^2/π) ∑_λ ξ_λ^* |Δ_λ| (ξ_λ^{*2} + Δ_λ^2)/(1 + 16ξ_λ^{*2}Δ_λ^2) × [1 + 2∑_{k>0} cos(2πk(ξ_λ^{*2} − Δ_λ^2)) R_λ^k].
where now R_ฮป(ฮพ) = exp(-4ฯฮพ_ฮป^* |ฮ_ฮป|) in contrast to parabolic bands, compare to (<ref>). We note that the same conductivity kernel for a single Weyl cone has already been derived in Ref.ย <cit.> but we argue that our derivation of a more general form is considerably simpler.
ยง.ยง.ยง Self-energy
The real space Green's function G_ฮป, ฯ (ล,ล'ฬ,ฮพ) depends now additionally on the subband index ฯ. We can reuse (<ref>) keeping in mind the different dispersion of relativistic LLs. The wavefunction
Ψ_{σλ,l,k_x}(r) = e^{i k_x x}/√(2L_x) (φ_l(y−y_0) + σ φ_{l−1}(y−y_0))
mixes different levels of the harmonic oscillator. However, the calculation of G_ฮป, ฯ (ล,ล,ฮพ) works analogously. The crucial step is to perform first the summation over subbands ฯ before transforming the sum over LLs to a sum over harmonics in order to be able to use complex contour integration for the integral. We then find analogously to above, the self-consistent equations for the self-energy
|Σ_λ(ξ)| = πT_D,λ [1 + (α̂_λ + β̂_λ 𝒜) ∑_{k=1}^∞ cos(2πk(ξ^{*2}_λ − Δ_λ^2)) R_λ^k(ξ) ]
Σ_λ(ξ) = πT_D,λ (α̂_λ + β̂_λ 𝒜) ∑_{k=1}^∞ sin(2πk(ξ^{*2}_λ − Δ_λ^2)) R_λ^k(ξ),
where we used ฮพ_ฮป^โโซฮ_ฮป outside the arguments of sin and cos and introduced the artificial Dingleโtemperature as a prefactor. These equations should be seen as analogue to (<ref>) and (<ref>). The main differences are all expected for Weyl systems, i.e. this is the quadratic dependence on ฮพ in the arguments of cos/sin and the explicit dependence of the Dingle factor on ฮพ.
ยง.ยง.ยง QOs and temperature smearing
The expansion of the conductivity kernel and the evaluation of the temperature integral is analogous to the scenario with parabolic bands. In principle the ฮ_ฮป term inside the cos would lead to additional contributions to the amplitude, but these turn out to be suppressed with T_Dฮป/ฮผโช 1. We neglect these minor changes in the frequency and note that they should become important at low fillings of the Weyl cone.
Omitting all non-oscillatory terms we finally obtain the main result for the conductivity as
σ_xx/σ_0 = ∑_λ A_1Fλ cos(2π[(μ+W_λ)/ω_cλ]^2) R_D,λ R_LK(4π^2 T(μ+W_λ)/ω_cλ^2)
+ ∑_λ A_2Fλ cos(4π[(μ+W_λ)/ω_cλ]^2) R_D,λ^2 R_LK(8π^2 T(μ+W_λ)/ω_cλ^2)
+ A_+ cos(2π[(μ+W_1)^2/ω_c1^2 + (μ+W_2)^2/ω_c2^2]) R_D,1 R_D,2 R_LK(2π^2 T[(μ+W_1)/ω_c1^2 + (μ+W_2)/ω_c2^2])
+ A_- cos(2π[(μ+W_1)/ω_c1]^2 − 2π[(μ+W_2)/ω_c2]^2) R_D,1 R_D,2 R_LK(2π^2 T[(μ+W_1)/ω_c1^2 − (μ+W_2)/ω_c2^2])
where the non-universal amplitudes A are given in the Appendix (<ref>) to (<ref>). Although (<ref>) seems complicated it can be understood equivalently to (<ref>). The first and second line are the first and second harmonics of the basis frequencies F_ฮป which are dictated by the geometry of the Fermi surface. The sum and difference frequencies of the basis frequencies, line three and four, are also of second order in the Dingle factor. The temperature dependence of the amplitudes follows our main result (<ref>).
Note that for relativistic dispersions the Fermi surface area depends quadratically on the chemical potential. Hence, the scale for the exponential temperature decay is set by ฮผ+W_ฮป/v_ฮป^2 instead of m_ฮป <cit.>.
ยง.ยง Three dimensions
The Hamiltonian of the double Weyl model (<ref>) can be easily generalized to three dimensions by including the Pauli-z matrix σ^z. The LLs are then continuous, ε_{σ,λ}(l,k_z) = σ√(v_λ^2 k_z^2 + ω_cλ^2 l) − W_λ. In 3D the wavefunction is more involved than before, since the prefactors of φ_l and φ_{l−1} in (<ref>) now additionally depend on l, z and k_z. The nested form of the dispersion and the non-trivial prefactors of the wavefunction make analytic calculations cumbersome. Nevertheless, we can use a similar argument as in Sec. <ref>: using the relation ε_{σ,λ}(l,k_z) = ε_{σ,λ}(l + v_λ^2 k_z^2/ω_cλ^2), the k_z-dependence reduces to a real shift of the pole in (<ref>). Hence, the terms ξ_λ^{*2} − Δ_λ^2 transform to ξ_λ^{*2} − Δ_λ^2 − v_λ^2 k_z^2/ω_cλ^2, whereas the terms ξ_λ^* Δ_λ, which are the imaginary shifts of the poles, remain the same. The appearing integrals are then of the form J_n(ξ^2). Therefore, we conclude that an extension from 2D to 3D has, for relativistic bands, the same effect on the conductivity as for parabolic bands: only additional non-universal phases shift the QOs, but the overall phenomenology remains unchanged.
ยง DE HAASโVAN ALPHEN EFFECT
The main objective of our work is to establish difference frequency QOs as a generic phenomenon of multiband metals. We focused on the Shubnikovโde Haas effect, i.e. QOs of the conductivity. In this section we comment on the behaviour of the de Haasโvan Alphen effect, i.e. the QOs of quantities derived from the thermodynamic potential.
The main result of this section is that to 2nd order in R_D the difference frequency is absent in the density of states and therefore in the de Haasโvan Alphen effect, whereas the sum frequency remains observable. This applies equivalently for all higher order combination frequencies.
We evaluate the density of states per unit area
ρ(E) = −(1/(π L_x L_y)) Im ∑_{l,k_x,λ} [ G(E) ]
from the imaginary part of the retarded, impurity averaged Green's function. The result for the density of states for parabolic bands in 2D
ฯ(E) = โ_ฮปm_ฮป/2ฯ(1+ 2โ_k=1^โ (-1)^k cos(2ฯ k ฮพ_ฮป^โ) R_ฮป(ฮพ)^k)
is derived in appendixย <ref> and should be seen as the de Haasโvan Alphen analogue of (<ref>). Hence, we continue with the same expansion up to second order in the Dingle factor as for the conductivity. The major difference in the result is that the difference frequency term cancels exactly in the expansion because the self-energy enters in the density of states only over the damping factor R_ฮป and ฮพ_ฮป^โ and not via any prefactors like the scattering time for the conductivity.
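This cancellation can be checked symbolically. The snippet below is a schematic check of our own construction: it keeps only the k = 1 harmonic, uses the first-order oscillations of the self-energy, and treats α̂, β̂ and the dimensionless Dingle temperature τ as free symbols; it then expands the oscillating DOS of band 1 to second order in the Dingle factors and projects out the difference- and sum-frequency components.

import sympy as sp

x1, x2, t, a, b, R1, R2 = sp.symbols('xi1 xi2 tau ahat bhat R1 R2', positive=True)

# first-order oscillations of Delta_1 and xi_1^* induced by the self-energy (k = 1 only)
delta1  = t * (1 - a * sp.cos(2*sp.pi*x1) * R1 - b * sp.cos(2*sp.pi*x2) * R2)
xistar1 = x1 + t * (a * sp.sin(2*sp.pi*x1) * R1 + b * sp.sin(2*sp.pi*x2) * R2)

# k = 1 oscillating part of the DOS of band 1: -2 cos(2 pi xi_1^*) exp(-2 pi Delta_1)
rho_osc = -2 * sp.cos(2*sp.pi*xistar1) * sp.exp(-2*sp.pi*delta1)

# expand to second order in the Dingle factors R1, R2
expanded = rho_osc.series(R1, 0, 2).removeO().series(R2, 0, 2).removeO()
expanded = sp.expand(sp.expand_trig(expanded))

# project onto the difference and sum frequencies (coefficients up to normalization)
proj = lambda f: sp.integrate(sp.integrate(expanded * f, (x1, 0, 1)), (x2, 0, 1))
print(sp.simplify(proj(sp.cos(2*sp.pi*(x1 - x2)))))  # -> 0: no difference frequency
print(sp.simplify(proj(sp.cos(2*sp.pi*(x1 + x2)))))  # -> nonzero, proportional to bhat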
The grand canonical potential is evaluated by a convolution with the Fermi distribution function,
Ω = ∫_{−∞}^{∞} dE n_F(E−μ) ρ(E).
To keep the analogy to the derivation of the conductivity, we split the integration up into a convolution returning the zero temperature grand canonical potential,
Ω̃(μ) = ∫_{−∞}^{∞} dE Θ(μ−E) ρ(E),
and include temperature in the same way as for the conductivity (<ref>),
Ω = ∫_{−∞}^{∞} dε [−n_F'(ε)] Ω̃(μ+ε).
Note that the integration in (<ref>) has no effect on the oscillations up to a ฯ/2 phase change. The result for the oscillating part of the thermodynamic potential in 2D is
Ω = ∑_λ A_1Fλ cos(2π(μ+W_λ)/ω_λ) R_D,λ R_LK(2π^2 T/ω_λ)
+ ∑_λ A_2Fλ cos(4π(μ+W_λ)/ω_λ) R_D,λ^2 R_LK(4π^2 T/ω_λ)
+ A_+ cos(2π(μ+W_+)/ω_+) R_D,1 R_D,2 R_LK(2π^2 T/ω_+),
and the amplitudes are given in (<ref>)-(<ref>). Strikingly, only the difference frequency is absent but the other frequencies show the same behavior as in the conductivity.
The exact cancellation of the difference frequency is not an artifact of the model. It remains also valid in 3D and in linear band structures, see appendixย <ref>. For relativistic dispersions a difference frequency appears, however its amplitude is negligible compared to the sum frequency.
We like to point out that a difference frequency can be generated in the de Haasโvan Alphen effect by higher order scattering processes. Going to the third order of the SCBA, i.ย e. three scattering events on the same impurity, difference and sum frequency are already generated at the level of the self-energy, but again the amplitude is expected to be strongly reduced.
ยง DISCUSSION AND MATERIALS
We have shown in detail how to compute QOs of the conductivity for two-band models with parabolic and relativistic dispersions in 2 and 3D. Remarkably, we find that a non-linear coupling of bands โ studied in terms of interband impurity scattering โ leads to the emergence of new sum and difference frequencies. Their amplitudes are damped with the sum and difference of the temperature scales of their basis frequencies. Hence, a striking feature is that the difference does not acquire any temperature dependence at all if the effective masses of two parabolic bands are the same. For relativistic bands the point of absolute temperature stability is a fine-tuned one, depending on the relation between v_ฮป, W_ฮป and ฮผ. For parallel linear bands, i.e. equal Fermi velocities v_1 = v_2, the difference frequency remains slightly temperature damped by W_1-W_2/v_1^2, providing an opportunity to experimentally distinguish relativistic and parabolic dispersion. In Tab.ย <ref> we present a concise summary of our calculations.
We conclude this section by discussing experimental requirements for observing temperature stable difference frequencies, and furthermore, point out possible candidate materials. The main ingredient for an appreciable amplitude of the difference frequency is, of course, an electronic band structure with multiple pockets whose quasiparticles have similar mass (or Fermi velocity).
The main non-trivial requirement is a strong effective coupling ฮฒ between the bands. The exact strength of the interband scattering depends on the type of the impurity and on the microscopic details of the wavefunction. Therefore, we expect that ab-initio calculations will be very helpful for estimating the inter- versus intraband impurity scattering strengths in suitable materials.
In addition, the strength/density of intraband impurities influences strongly the Dingle temperatures. Our expansion in R_D, ฮป does in principle require T_D, ฮปโณฯ_cฮป, however we argue that our expansion also holds qualitatively for R_D, ฮปโ 1. Hence, in addition to an effective interband coupling, the observation of the difference frequency is mainly limited by the strength of the signals of the second harmonics of the basis frequencies which are similarly of second order in the Dingle factors. We argue that strong signals from the higher harmonics are a good indication that the difference frequency can be observed. These can be maximized by choosing a relatively clean system with w_c ฮปโ T_D, ฮป. Finally, we note that the amplitudes A_ยฑ in 3D are slightly suppressed for large frequencies.
Next, we discuss concrete material candidates. Intriguingly, difference and sum frequencies have potentially been already measured in various systems, but their existence has not been attributed to the present mechanism from interband coupling. For example, the 3D heavy-fermion superconductor CeCoIn_5 displays QO frequencies which are approximately the difference of two larger frequenciesย <cit.>, and which persist when the material is doped with Ndย <cit.>. Similarly, the tritelluride NdTe_3 shows a frequency which is the difference of two basis frequencies to high accuracyย <cit.>.
In general, we expect materials with multifold fermion excitations to be prime candidates because they have parallel bands over large momentum space regions. For example, the topological semimetal PtGa shows a difference frequency and several other frequencies in the Fourier spectrum whose origins are unexplainedย <cit.>. Most strikingly, in accordance with our predictions a temperature stable difference frequency has been very recently reported for the topological semi-metal CoSiย <cit.>.
Similarly, Shoenberg's classic book on QOsย <cit.> lists large number of materials displaying putative magnetic breakdown frequencies some of which could be difference frequencies. It would also be worthwhile to search within the recent class of square-net materialsย <cit.> for unusual QOs which fall outside the scope of the standard LK theory.
Beyond the 2D systems studied previously in the context of magneto intersubband oscillations of 2DEGs <cit.>, systems like bilayer graphene <cit.> show all the required properties of the band structure, but to our knowledge no difference frequency has been reported. This might be related to ineffective interband scattering, but we expect that the controlled introduction of selected impurities could do the trick, which is again an outstanding task for ab-initio modelling. We note that recently a difference frequency has been reported in twisted bilayer graphene, where interband scattering is induced by imperfections of the Moiré pattern <cit.>.
Another general expectation is that band splitting is not only induced by interlayer tunneling but also via spin orbit coupling, e.g. for Rashba surface statesย <cit.>, which could result in temperature stable difference frequencies. An observation thereof would turn difference frequency QOs into a very precise tool for determining the energy scales of spin orbit induced band splitting.
ยง OUTLOOK
We have shown how non-linear interband coupling influences the Fourier spectrum of QOs up to second order in the Dingle factors with the emergence of a new difference frequency stable in temperature, see Fig.ย <ref>ย (d). A natural next question is which other higher order effects can emerge in the QO spectrum? Based on our calculations, we expect that any integer, linear combination of the basis frequencies can appear as well as the interference of higher harmonics. These higher order QOs will be damped with the Dingle factors of the involved frequencies and acquire a temperature smearing according to (<ref>).
For the calculation of the self-energy we have used the SCBA. Going beyond this, the full SCBA would take into account multiple scattering events on a single impurity by including products of real-space Green's functions. Hence, sum and difference frequencies should already appear at the level of the self-energy but we expect that the resulting QOs are qualitatively similar and behave in the same way as the higher harmonics discussed above.
For the non-linear interband coupling we have concentrated on the effect of impurities. However, the coupling of bands can also be the result of interactions. Clearly, the Coulomb interaction between electrons is not limited to electrons of the same band. In an expansion of the self-energy the Fock diagrams resemble those of the impurity scattering shown in Fig.ย <ref>ย (a) and are expected to lead to similar effects. In that context, the effect of Coulomb interactions on the de Haasโvan Alphen effect of quasi-2D systems has been recently studied in Ref.ย <cit.>, which also finds a difference frequency albeit with a strong temperature dependence. Similarly, interaction mediated fluctuations have recently been shown to enhance de Haasโvan Alphen QOs in insulatorsย <cit.>. In general, a detailed study on the interplay of interband coupling from impurities and interactions for thermodynamic and transport QOs remains a formidable task for the future.
The unambiguous observation of a new difference frequency in QOs is exciting by itself <cit.>. In addition, because of its temperature stability it can be turned into a versatile tool, for example, for studying the temperature dependence of the Dingle temperature, for quantifying interband scattering strengths or band splitting mechanisms like spin orbit coupling. In this way difference frequency QO measurements may detect temperature dependent changes of material properties which are otherwise impossible to observe with the strongly damped canonical QOs.
In conclusion, difference frequency QOs are a qualitatively new phenomenon beyond the known magnetic breakdown scenarios. Despite the long history of QO research we expect further surprises in the future.
We acknowledge a related experimental collaboration and discussions with N.ย Huber, M.ย Wilde and C.ย Pfleiderer. We thank N.ย R.ย Cooper for helpful discussions. V.ย L. thanks S.ย Birnkammer for helpful discussions.
V.ย L. acknowledges support from the Studienstiftung des deutschen Volkes. J.ย K. acknowledges support from the Imperial-TUM flagship partnership. The research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus.
ยง CODE AVAILABILITY
The symbolic calculations related to this paper are available on Zenodo <cit.> from the corresponding authors upon reasonable request.
ยง EVALUATION OF THE TEMPERATURE CONVOLUTION
In this section we derive the temperature dependence of the conductivity out of the zero temperature conductivity kernel. For a generic conductivity kernel we obtain, in leading order, a temperature dependence of LK type. The calculation below applies to any oscillating conductivity kernel, for quadratic and linear dispersions in any dimension.
Starting from the generic form the conductivity kernel (<ref>), we expand the functions f(E) and g(E) around the Fermi energy ฮผ motivated by the form of the integral in (<ref>). We truncate the expansion of f(E) around E = ฮผ + ฮต at linear order, higher orders can be taken into account systematically. The convolution of the conductivity may then be written as
σ = e^{i f(μ)} ∫_{−∞}^{∞} dε [−n_F'(ε)] e^{i f'(μ) ε} × ∑_{n=0}^{∞} (g^{(n)}(μ/ω_c)/n!) (ε/ω_c)^n
= e^{i f(μ)} ∑_{n=0}^{∞} d^n g/dξ^n|_{μ/ω_c} (T/ω_c)^n I_n(f'(μ) T),
where we have introduced the integral
I_n(a) = (1/n!) ∫_{−∞}^{∞} dx e^{i a x}/(e^{x/2}+e^{−x/2})^2 x^n
which can be evaluated exactly.
ยง.ยง Evaluation of I_n
Obviously, Im I_{2n}(a) = Re I_{2n+1}(a) = 0 due to the antisymmetry of the integrands. In the following we will show that the integrals are given by derivatives of the Lifshitz-Kosevich damping factor,
I_{2n}(a) = ((−1)^n/(2n)!) d^{2n}/dλ^{2n} (λ/a)^{2n} (1/λ) R_LK(πa/λ) |_{λ=1}
I_{2n+1}(a) = ((−1)^n/(2n+1)!) d^{2n}/dλ^{2n} (λ/a)^{2n+1} (1/λ^2) R_LK(πa/λ) × [2n−1 + R_LK(πa/λ) cosh(πa/λ)] |_{λ=1}.
The calculation below shows how the expression for I_{2n} can be obtained; the calculation for I_{2n+1} is analogous. Using the geometric series we rewrite the exponential factors in the denominator of the integral. Strictly speaking, the geometric series only holds for x>0, hence we take the integration boundaries from ε→0 to ∞ and perform the limit in the end to ensure proper convergence. For the following calculation we set n to be even:
n! I_n(a) = 2∫_ε^∞ dx cos(ax)/(e^{x/2}+e^{−x/2})^2 x^n
= −2 ∑_{k=1}^∞ (−1)^k k ∫_ε^∞ dx e^{−kx} cos(ax) x^n
= −2 ∑_{k=1}^∞ (−1)^k k^{1−n} (−1)^n d^n/dλ^n ∫_ε^∞ dx e^{−λkx} cos(ax) |_{λ=1}
= −d^n/dλ^n ∑_{k=−∞}^∞ (−1)^k λ k^{2−n}/(a^2+λ^2k^2) e^{−ελ|k|} |_{λ=1}
= d^n/dλ^n ∑_{z^*=±ia/λ} Res( λ z^{2−n}/(a^2+λ^2z^2) · π/sin(πz) e^{−ελz}, z=z^* ) |_{λ=1}
= (−1)^{n/2} d^n/dλ^n (λ/a)^n (πa/λ^2)/sinh(πa/λ) |_{λ=1}
Note that we have used the Feynman trick in line 3, that the summand for k=0 vanishes for all values of n (line 4) and the residue formula for summation in line 5.
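As a quick sanity check of the n = 0 case (our own illustration, not part of the original appendix), one can compare a direct numerical evaluation of I_0(a) with the closed form πa/sinh(πa) implied by the expressions above:

import numpy as np
from scipy.integrate import quad

def I0_numeric(a):
    # I_0(a) = int dx e^{i a x} / (e^{x/2} + e^{-x/2})^2, real by symmetry of the integrand
    integrand = lambda x: np.cos(a * x) / (np.exp(x / 2) + np.exp(-x / 2)) ** 2
    val, _ = quad(integrand, -60, 60, limit=200)
    return val

for a in (0.1, 1.0, 3.0):
    print(a, I0_numeric(a), np.pi * a / np.sinh(np.pi * a))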
ยง.ยง Interpretation of the temperature dependence
From (<ref>) and (<ref>) it is obvious that the lowest order of the expansion in g (i.e. n=0) leads to a temperature dependence of LK type. Higher order corrections (n>0) to the temperature dependence are typically suppressed by the functional form of g โ for g being e.g. a polynomial the n-th derivative of g is suppressed with ฮพ^-n.
ยง SUPPLEMENT FOR 2D PARABOLIC DISPERSIONS
This section provides several details to the calculations performed in the sec.ย <ref>. First details on the calculation of the self-energy are explained, second the used scheme to expand the conductivity kernel to second or even higher order is demonstrated.
ยง.ยง Calculation of the self-energy
In order to evaluate the self-energy Σ_λ (<ref>) we need to determine the real-space Green's function for r = r' from (<ref>). We use the summation over k_x-momenta to integrate out the y-dependencies and use Poisson summation (<ref>),
G_λ(r,r,E) = (1/L_x) ∑_{l,k_x} φ^*_{λ,l}(y−y_0) φ_{λ,l}(y−y_0)/(E − ε_λ(l) − Σ_λ(E))
= (1/2π) ∑_{l=0}^∞ ∫_{−∞}^{∞} dk_x |φ_{λ,l}(y−y_0)|^2/(E − ε_λ(l) − Σ_λ(E))
= (eB/(2πω_λ)) ∑_{k=−∞}^{∞} (−1)^k ∫_{1/2}^∞ du e^{2πiku}/(ξ_λ^* − u + iΔ_λ).
The integral over u can be evaluated for k ≠ 0 by using ξ_λ^* ≫ 1 and complex contour integration:
∫_{1/2}^∞ du e^{2πiku}/(−ξ_λ^* + u − iΔ_λ(ξ))
≈ e^{2πikξ_λ^*} ∫_{−∞}^{∞} du e^{2πiku}/(u − iΔ_λ(ξ))
= 2πi e^{2πik(ξ_λ^* + iΔ_λ(ξ))} Θ(kΔ_λ(ξ)) sgn(Δ_λ(ξ)).
For k=0 the integral is divergent, due to the divergent sum over l in (<ref>) since we did not assume that our energy spectrum is bounded from above. It is however easy to see that the imaginary part of the integral is convergent
Im ∫_{1/2−ξ_λ^*}^{∞} du/(u − iΔ_λ) → π sgn(Δ_λ).
For the real part of the integral we introduce an upper cut off ฮ_c and use that the integrand is odd
Re ∫_{1/2−ξ_λ^*}^{Λ_c} du/(u − iΔ_λ) = (1/2) log((Λ_c^2 + Δ_λ^2)/((ξ_λ^* − 1/2)^2 + Δ_λ^2)).
Anticipating that ฮฃ_ฮปโผ T_D,ฮป which is of order of a few Kelvin, any physical oscillations of this formally divergent part are suppressed with at least T_D,ฮป/ฮผโช 1 and can be neglected. We absorb the non/weakly-oscillating real part in the chemical potentialย <cit.>, to obtain the oscillating Green's function (<ref>).
The correct prefactors for the interband and intraband contributions are motivated by the fact that the Dingle temperature is a measure of the total interactions in the system, πT_D,λ = |⟨Σ_λ⟩|. We introduce the band-dependent total mass M_λ = α_λ m_λ + β m_λ̄ and the Dingle temperature πT_D,λ = (1/2) n_imp U_0^2 M_λ, leading to (<ref>). To simplify the notation, we also set α̂_λ = 2α_λ m_λ/M_λ and β̂_λ = 2β m_λ̄/M_λ.
We note at this point that several similar integrals need to be evaluated throughout this manuscript, e.g. in the derivation of (<ref>). All of them can be evaluated in a similar fashion as above, the rest is however convergent unless stated differently.
ยง.ยง Expansion of the conductivity kernel
In (<ref>) there are three terms that are oscillating with respect to the magnetic field: ฮพ_ฮป^* taking into account the oscillating real part of the self-energy, the oscillations of ฮฃ_ฮป leading effectively to an oscillating Dingle factor R_ฮป(ฮพ) and to oscillations of the prefactor, and the intrinsic oscillations of the conductivity. Interestingly the oscillations of the Dingle factor and the real part of the self-energy cancel exactly for the difference frequency. We expand the conductivity up to second order in R_D, ฮป = R_ฮป(ฯ T_D,ฮป/ฯ_cฮป). There is no need to expand ฮพ_ฮป^* if it appears outside the arguments of cos or sin, since these second order contributions will be suppressed by |T_D,ฮป/(ฮผ-W_ฮป)|โช 1. Therefore, ฮฃ_ฮป only needs to be expanded up to first order, as it will only appear together with first order terms
Σ^(1)_λ(ξ) = −πT_D,λ (α̂_λ + β̂_λ 𝒜) sin(2πξ_λ) R_D,λ.
The other expanded quantities read
R^(2)_λ(ξ) = R_D,λ [1 + 2πτ_λ (α̂_λ + β̂_λ 𝒜) cos(2πξ_λ) R_D,λ]
|Δ^(2)_λ(ξ)| = τ_λ [1 − (α̂_λ + β̂_λ 𝒜) cos(2πξ^*_λ) R^(2)_λ(ξ) + (α̂_λ + β̂_λ 𝒜) cos(4πξ_λ) R_D,λ^2]
σ_xx^(2)(ξ) = σ_0 ∑_λ ξ_λ Δ^(2)_λ(ξ)/(1 + 4(Δ^(2)_λ(ξ))^2) × (1 − 2cos(2πξ^*_λ) R^(2)_λ(ξ) + 2cos(4πξ_λ) R_D,λ^2)
where τ_λ = πT_D,λ/ω_λ is the dimensionless Dingle temperature.
After an expansion, done with Mathematica <cit.> where we collect terms with the same frequency we find for the conductivity kernel
ฯ_xx(ฮพ)/ฯ_0 = A_0(ฮพ) + โ_ฮปcos(2 ฯฮพ_ฮป)R_D,ฮป A_1Fฮป(ฮพ)
+ โ_ฮปcos(4 ฯฮพ_ฮป) R_D,ฮป^2 A_2Fฮป(ฮพ)
+ cos(2 ฯ[ฮพ_1+ฮพ_2 ]) R_D,1 R_D,2 A_+(ฮพ)
+ cos(2 ฯ[ฮพ_1-ฮพ_2 ]) R_D,1 R_D,2 A_-(ฮพ).
The amplitudes read
A_0(ξ) = ∑_λ τ_λξ_λ/(1+4τ_λ^2) + R_D,λ^2 ξ_λ/(1+4τ_λ^2)^3 (α̂_λ[τ_λ − 16τ_λ^5] + α̂_λ^2[−6τ_λ^3 + 8τ_λ^5]) + R_D,λ^2 ξ_λ̄ β̂_λ̄^2/(1+4τ_λ̄^2)^3 (−6τ_λ̄^3 + 8τ_λ̄^5)
A_1Fλ(ξ) = −2τ_λξ_λ/(4τ_λ^2+1) − α̂_λ( τ_λξ_λ/(4τ_λ^2+1) − 8τ_λ^3ξ_λ/(4τ_λ^2+1)^2 ) − β̂_λ̄( τ_λ̄ξ_λ̄/(4τ_λ̄^2+1) − 8τ_λ̄^3ξ_λ̄/(4τ_λ̄^2+1)^2 )
A_2Fλ(ξ) = ξ_λ/(4τ_λ^2+1)^3 ( 2τ_λ + 16τ_λ^3 + 32τ_λ^5 + α̂_λ[2τ_λ − 4πτ_λ^2 − 32πτ_λ^4 − 32τ_λ^5 − 64πτ_λ^6] + α̂_λ^2[−2πτ_λ^2 − 6τ_λ^3 + 8τ_λ^5 + 32πτ_λ^6] ) + β̂_λ̄ ξ_λ̄/(1+4τ_λ̄^2)^3 ( τ_λ̄ − 16τ_λ̄^5 + α̂_λ[−2πτ_λτ_λ̄ + 32πτ_λτ_λ̄^5] + β̂_λ̄[−6τ_λ̄^3 + 8τ_λ̄^5] )
A_+(ξ) = ∑_λ ξ_λβ̂_λ/(1+4τ_λ^2)^3 ( τ_λ − 4πτ_λ^2 − 32πτ_λ^4 − 16τ_λ^5 − 64πτ_λ^6 + α̂_λ[−2πτ_λ^2 − 12τ_λ^3 + 16τ_λ^5 + 32πτ_λ^6] + β̂_λ̄[−2πτ_λτ_λ̄ + 32πτ_λ^5τ_λ̄] )
A_-(ξ) = ∑_λ ξ_λβ̂_λ/(1+4τ_λ^2)^3 ( τ_λ − 16τ_λ^5 + α̂_λ(−12τ_λ^3 + 16τ_λ^5) )
where A_0 constitutes the non-oscillating contributions which we state here for the sake of completeness, but we will drop it in all other calculations.
ยง CALCULATION FOR 3D PARABOLIC DISPERSIONS
ยง.ยง Evaluation of the integral J_n
In order to evaluate the integral appearing in 3D calculations, see (<ref>), we would like to extend the integration boundaries to ยฑโ. However doing this directly would lead to a divergent integral for n>0 among other problems. We solve this problem by first using Feynman's integral trick and then extending the integration boundaries to ยฑโ
J_n(ξ) = (2πi)^{-n} d^n/dλ^n |_{λ=1} ∫_{−√ξ}^{√ξ} dx e^{2πiλ(ξ − x^2)}
= (2πi)^{-n} d^n/dλ^n |_{λ=1} (1/√(2λ)) e^{i(2πλξ − π/4)}
= (ξ^n/√2) e^{i(2πξ − π/4)}
where we used that ฮพโซ 1 to obtain the last line. To check the correctness of this calculation we compared the result to a numerical evaluation of (<ref>).
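For concreteness, such a comparison can be reproduced with a few lines of Python (an illustrative check of our own; the asymptotic form keeps only the oscillatory stationary-phase contribution, so agreement is up to subleading corrections in 1/ξ):

import numpy as np
from scipy.integrate import quad

def J_numeric(n, xi):
    # J_n(xi) = int_{-sqrt(xi)}^{sqrt(xi)} dx (xi - x^2)^n e^{2 pi i (xi - x^2)}
    re = quad(lambda x: (xi - x**2)**n * np.cos(2*np.pi*(xi - x**2)),
              -np.sqrt(xi), np.sqrt(xi), limit=400)[0]
    im = quad(lambda x: (xi - x**2)**n * np.sin(2*np.pi*(xi - x**2)),
              -np.sqrt(xi), np.sqrt(xi), limit=400)[0]
    return re + 1j * im

def J_asymptotic(n, xi):
    # xi^n / sqrt(2) * exp(i (2 pi xi - pi/4)), oscillatory part only
    return xi**n / np.sqrt(2) * np.exp(1j * (2*np.pi*xi - np.pi/4))

xi = 60.25
for n in (0, 1):
    print(n, J_numeric(n, xi), J_asymptotic(n, xi))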
In practice the k_z-integral will lead to an additional phase of ฯ/4, a suppression of oscillations with 1/โ(ฮพ_ฮป) as already predicted in Ref.ย <cit.> and a suppression of higher harmonics with 1/โ(k). These effects are clearly visible in (<ref>) and (<ref>).
ยง.ยง Conductivity
Starting from (<ref>) and carrying out the integral over k_z-momenta leads to
ฯฬ_xx = ฯ_0/ฯโ_Bโ_ฮปฮพ_ฮป^โ |ฮ_ฮป(ฮพ)|/1+4ฮ_ฮป(ฮพ)^2ร(2โ(2)/3โ(ฮพ_ฮป^โ).
.
+ โ_k=1^โ(-1)^k/โ(k)cos(2ฯ k ฮพ_ฮป^โ-ฯ/4) R_ฮป(ฮพ)^k)
where โ_B = 1/โ(e B) is the magnetic length scale. The integral for the first summand is evaluated exactly. Note that the self-energy does not depend on k_z because we integrate that out.
ยง.ยง Self-energy
We start from (<ref>) to evaluate the integral and introduce the weights
α̂_λ = α_λ m_λ√(2m_λE_λ)/(α_λ m_λ√(2m_λE_λ) + β m_λ̄√(2m_λ̄E_λ̄)) and β̂_λ = 1 − α̂_λ. Since oscillations in ξ_λ^* are suppressed by 1/ξ_λ^* if they appear outside of cos or sin terms, we set ξ_λ^* = ξ_λ in these. Then we obtain the self-consistent equation for the self-energy
Σ_λ = −πT_D,λ ( 1 + [α̂_λ + β̂_λ 𝒜] (1/√(2ξ_λ)) ∑_{k=1}^∞ ((−1)^k/√k) e^{i(2πkξ_λ^* − π/4)} R_λ(ξ)^k ).
(<ref>) is the analog of (<ref>) and (<ref>). However, the QOs in the self-energy come in 3D with an additional small prefactor 1/โ(ฮพ_ฮป).
ยง.ยง QOs in 3D
The expansion is done in the same manner as in 2D. However, in 3D two types of second harmonics appear: the intrinsic second harmonic of Σ_λ and σ̃_xx with phase π/4, and one resulting from interference with a phase π/2. The latter is suppressed by 1/√(ξ_λ) with respect to the former and is hence neglected in the following.
The oscillating part of the conductivity reads
σ_xx/σ_0 = ∑_λ A_1Fλ cos(2π(μ+W_λ)/ω_λ − π/4) R_D,λ R_LK(2π^2 T/ω_λ)
+ ∑_λ A_2Fλ cos(4π(μ+W_λ)/ω_λ − π/4) R_D,λ^2 R_LK(4π^2 T/ω_λ)
+ A_+ cos(2π(μ+W_+)/ω_+ − π/2) R_D,1 R_D,2 R_LK(2π^2 T/ω_+)
+ A_- cos(2π(μ+W_-)/ω_-) R_D,1 R_D,2 R_LK(2π^2 T/ω_-)
with the amplitudes <cit.>
A_1Fλ(ξ) = (1/(πℓ_B)) ( −ξ_λτ_λ/(1+4τ_λ^2) + 16ξ_λτ_λ^3α̂_λ/(3(1+4τ_λ^2)^2) − 2ξ_λτ_λα̂_λ/(3(1+4τ_λ^2)) + 16ξ_λ̄^{3/2}τ_λ̄^3β̂_λ̄/(3√(ξ_λ)(1+4τ_λ̄^2)^2) − 2ξ_λ̄^{3/2}τ_λ̄β̂_λ̄/(3√(ξ_λ)(1+4τ_λ̄^2)) )
A_2Fλ(ξ) = (1/(πℓ_B)) ( ξ_λτ_λ/(√2(1+4τ_λ^2)) − 8√2 ξ_λτ_λ^3α̂_λ/(3(1+4τ_λ^2)^2) + √2 ξ_λτ_λα̂_λ/(3(1+4τ_λ^2)) − 8√2 ξ_λ̄^{3/2}τ_λ̄^3β̂_λ̄/(3√(ξ_λ)(1+4τ_λ̄^2)^2) + √2 ξ_λ̄^{3/2}τ_λ̄β̂_λ̄/(3√(ξ_λ)(1+4τ_λ̄^2)) )
A_+(ξ) = (1/(πℓ_B)) ∑_λ [ ξ_λβ̂_λ/(√(ξ_λ̄)(1+4τ_λ^2)) ( τ_λ/(2√2) − √2 πτ_λ^2 − (2√2/3)πτ_λ^2α̂_λ − (2√2/3)πτ_λτ_λ̄β̂_λ̄ ) + ξ_λβ̂_λ/(√(ξ_λ̄)(1+4τ_λ^2)^2) ( −2√2 τ_λ^3 − 4√2 τ_λ^3α̂_λ + (16√2/3)πτ_λ^4α̂_λ + (16√2/3)πτ_λ^3τ_λ̄β̂_λ̄ ) + 64√2 ξ_λτ_λ^5α̂_λβ̂_λ/(3√(ξ_λ̄)(1+4τ_λ^2)^3) ]
A_-(ξ) = (1/(πℓ_B)) ∑_λ [ ξ_λβ̂_λτ_λ/(2√2 √(ξ_λ̄)(1+4τ_λ^2)) + ξ_λβ̂_λ/(√(ξ_λ̄)(1+4τ_λ^2)^2) ( −2√2 τ_λ^3 − 4√2 τ_λ^3α̂_λ ) + 64√2 ξ_λτ_λ^5α̂_λβ̂_λ/(3√(ξ_λ̄)(1+4τ_λ^2)^3) ]
which are evaluated at the chemical potential A=A(ฮผ). Note that for 3D O(A_ยฑ) = โ(ฮพ) whereas O(A_2Fฮป) = ฮพ. This makes sum and difference frequencies more difficult to observe in 3D systems.
ยง SUPPLEMENT FOR 2D DOUBLE WEYL MODEL
ยง.ยง Self-energy
The Green's function in real space reads
G_{σλ}(r,r',ξ) = (1/ω_λ) ∑_{l,k_x} Ψ_{σλ,l,k_x}(r) Ψ^*_{σλ,l,k_x}(r')/(ξ_λ^* − σ√l + iΔ_λ(ξ))
with the wavefunction given in (<ref>). For r = r' we can sum out the wavefunction such that the real-space Green's function is spatially invariant,
G_{σλ}(r,r,ξ) = (eB/(4π)) ∑_l 1/(E + W_λ − σω_λ√l − Σ_λ) × ∫ dy_0 (|φ_l(y−y_0)|^2 + |φ_{l−1}(y−y_0)|^2)
= −σ (eB/(2πω_λ)) ∑_k ∫_0^∞ dl e^{2πikl}/(√l − σξ^*_λ − iσΔ_λ)
The self-energy can be easily calculated, since the Green's function does not depend on ล anymore and not on ฯ
Σ_λ(ξ) = n_imp U_0^2 L_y L_x ∑_σ ( α_λ G_{σλ}(ξ) + β G_{σλ̄}(ξ) )
The crucial part is the calculation of the term
∑_σ G_{σλ}(ξ) = −(eB/(2πω_λ)) ∑_k ∫_0^∞ dl ∑_σ σ e^{2πikl}/(√l − σξ^*_λ − iσΔ_λ)
= −(eB/(2πω_λ)) ∑_k ∫_0^∞ dl 2e^{2πikl}(ξ^*_λ + iΔ_λ)/(l − (ξ^*_λ + iΔ_λ)^2),
where we have used the Poisson summation formula to transform the sum over LLs to a sum over harmonics times an integral which we can compute with complex contour integration. We use again ξ_λ^* ≫ 1 such that the lower integration boundary can be shifted to −∞. For k=0 the real part of this integral is divergent but the integrand is antisymmetric and will be set to zero. The imaginary part can be calculated explicitly,
Im ∫_0^∞ dl/(l − (ξ^*_λ + iΔ_λ)^2) ≈ ∫_{−∞}^{∞} dl 2ξ^*_λΔ_λ/(l^2 + 4(ξ^*_λΔ_λ)^2) = π sgn(Δ_λ).
For the higher harmonics k ≠ 0 we use complex contour integration,
∑_{k≠0} ∫_0^∞ dl e^{2πikl}/(l − (ξ^*_λ + iΔ_λ)^2) = 2πi sgn(Δ_λ) ∑_{k>0} e^{2πik sgn(Δ_λ)(ξ^*_λ + iΔ_λ)^2},
in order to find in total
∑_σ G_{σλ}(ξ) = −i (eB/ω_λ) sgn(Δ_λ) (ξ^*_λ + iΔ_λ) × [1 + 2∑_{k>0} e^{2πik sgn(Δ_λ)(ξ^*_λ + iΔ_λ)^2}].
At this point it is useful to make several approximations to simplify the remaining calculations. First, we replace |⟨Σ_λ⟩| = ω_cλ |⟨Δ_λ⟩| by the empirical Dingle temperature πT_D,λ. Note that T_D,λ does not depend on the magnetic field, as expected. Since the self-energy is of the order of the Dingle temperature, which is only a few Kelvin, we may use the approximations ξ_λ^* ≈ ξ_λ and ξ_λ^* ≫ Δ_λ. This can however only be used outside the argument of the exponential terms. We then obtain the self-consistent equations (<ref>) and (<ref>).
ยง.ยง Amplitudes
The conductivity kernel is expanded analogously to the parabolic case <cit.>. The amplitudes read
A_1Fλ(ξ) = 2ξ_λ^3τ_λ/(1+16ξ_λ^2τ_λ^2) + α̂_λ( ξ_λ^3τ_λ/(1+16ξ_λ^2τ_λ^2) − 32ξ_λ^5τ_λ^3/(1+16ξ_λ^2τ_λ^2)^2 ) + β̂_λ̄( ξ_λ̄^3τ_λ̄/(1+16ξ_λ̄^2τ_λ̄^2) − 32ξ_λ̄^5τ_λ̄^3/(1+16ξ_λ̄^2τ_λ̄^2)^2 )
A_2Fλ(ξ) = 2ξ_λ^3τ_λ/(1+16ξ_λ^2τ_λ^2) + α̂_λ( 2ξ_λ^3τ_λ/(1+16ξ_λ^2τ_λ^2) − 64ξ_λ^5τ_λ^3/(1+16ξ_λ^2τ_λ^2)^2 − 8πξ_λ^4τ_λ^2/(1+16ξ_λ^2τ_λ^2) − 4πβ̂_λ̄ ξ_λτ_λ ξ_λ̄^3τ_λ̄(1−16ξ_λ̄^2τ_λ̄^2)/(1+16ξ_λ̄^2τ_λ̄^2)^2 ) − 4α̂_λ^2 ξ_λ^4τ_λ^2 (π + 6ξ_λτ_λ − 32ξ_λ^3τ_λ^3 − 256πξ_λ^4τ_λ^4)/(1+16ξ_λ^2τ_λ^2)^3 + β̂_λ̄ ξ_λ̄^3τ_λ̄ (1−16ξ_λ̄^2τ_λ̄^2)/(1+16ξ_λ̄^2τ_λ̄^2)^2 − 8β̂_λ̄^2 ξ_λ̄^5τ_λ̄^3 (3−16ξ_λ̄^2τ_λ̄^2)/(1+16ξ_λ̄^2τ_λ̄^2)^3
A_+(ξ) = ∑_λ β̂_λ[ ξ_λ^3τ_λ/(1+16ξ_λ^2τ_λ^2) − 32ξ_λ^5τ_λ^3/(1+16ξ_λ^2τ_λ^2)^2 − 8πξ_λ^4τ_λ^2/(1+16ξ_λ^2τ_λ^2) + α̂_λ( 1024ξ_λ^7τ_λ^5/(1+16ξ_λ^2τ_λ^2)^3 − 48ξ_λ^5τ_λ^3/(1+16ξ_λ^2τ_λ^2)^2 + 128πξ_λ^6τ_λ^4/(1+16ξ_λ^2τ_λ^2)^2 − 4πξ_λ^4τ_λ^2/(1+16ξ_λ^2τ_λ^2) ) + β̂_λ̄( 128πξ_λ^5ξ_λ̄τ_λ^3τ_λ̄/(1+16ξ_λ^2τ_λ^2)^2 − 4πξ_λ^3ξ_λ̄τ_λτ_λ̄/(1+16ξ_λ^2τ_λ^2) ) ]
A_-(ξ) = ∑_λ β̂_λ[ ξ_λ^3τ_λ/(1+16ξ_λ^2τ_λ^2) − 32ξ_λ^5τ_λ^3/(1+16ξ_λ^2τ_λ^2)^2 + α̂_λ( 1024ξ_λ^7τ_λ^5/(1+16ξ_λ^2τ_λ^2)^3 − 48ξ_λ^5τ_λ^3/(1+16ξ_λ^2τ_λ^2)^2 ) ]
ยง SUPPLEMENT FOR THE DE HAASโVAN ALPHEN EFFECT
ยง.ยง Calculation for 2D parabolic bands
We evaluate the density of states (<ref>) and show how to obtain (<ref>).
ρ(E) = (1/(πL_xL_y)) ∑_{k_x,l,λ} (Δ_λ/ω_λ)/((ξ_λ^* − l − 1/2)^2 + Δ_λ^2)
= ∑_λ (N_Φ Δ_λ/(πL_xL_yω_λ)) ∑_l 1/((ξ_λ^* − l − 1/2)^2 + Δ_λ^2)
= ∑_{λ,k} (m_λΔ_λ/(2π^2)) (−1)^k e^{2πikξ_λ^*} ∫_{−∞}^{∞} dl e^{2πikl}/(l^2 + Δ_λ^2)
= ∑_{λ,k} (m_λ/(2π)) sgn(Δ_λ) (−1)^k e^{2πikξ_λ^*} R_λ^{|k|}(ξ)
The integrals to obtain the zero temperature thermodynamic potential (<ref>) are of the form
Ω̃(μ) = ∫_{−∞}^{∞} dE Θ(μ−E) cos(2π(E+W)/ω_c)
= (ω_c/(2π)) sin(2π(μ+W)/ω_c).
Hence, the amplitudes for (<ref>) read <cit.>
A_1Fλ = −1/(2π^2ℓ_B^2)
A_2Fλ = (1/(2π^2ℓ_B^2)) (1 − 2πα̂_λτ_λ)
A_+ = −(m_1β̂_1τ_1 + m_2β̂_2τ_2)/(πℓ_B^2(m_1+m_2)).
ยง.ยง Calculation for relativistic dispersions in 2D
The density of states reads
ρ(E) = ∑_λ (ξ_λ^*/(√2 π v_λ ℓ_B)) [1 + 2∑_{k>0} cos(2πk(ξ_λ^{*2} − Δ_λ^2)) R_λ^k].
The additional prefactor of ฮพ_ฮป^โ gives rise to the difference frequency. However the oscillating terms of the prefactor are small compared to the other oscillations. We take only the first non-vanishing order into account. After an expansion up to second order in the Dingle factor, we obtain
ρ(E) = ∑_λ A_1Fλ cos(2πξ_λ^2) R_Dλ(ξ)
+ ∑_λ A_2Fλ cos(4πξ_λ^2) R_Dλ(ξ)^2
+ A_+ cos(2π[ξ_1^2 + ξ_2^2]) R_D1(ξ) R_D2(ξ)
+ A_- sin(2π[ξ_1^2 − ξ_2^2]) R_D1(ξ) R_D2(ξ)
for the oscillating part of the density of states. The amplitudes read
A_1Fλ = √2 ξ_λ/(π v_λ ℓ_B)
A_2Fλ = √2 ξ_λ/(π v_λ ℓ_B) − (4√2/(v_λℓ_B)) ξ_λ^2 α̂_λτ_λ
A_+ = −(4√2/ℓ_B) (ξ_1^2β̂_1τ_1/v_1 + ξ_2^2β̂_2τ_2/v_2)
A_- = (1/(√2 πℓ_B)) (β̂_1τ_1/v_1 − β̂_2τ_2/v_2).
Note that the amplitude of the difference frequency is of order A_-/A_+ ∼ 1/ξ^2 and cancels exactly for equal band parameters.
arXiv:2306.04784v1 [cs.RO], 7 June 2023 (http://arxiv.org/abs/2306.04784v1)
A Framework for Designing Anthropomorphic Soft Hands through Interaction
Pragna Mannam1, Kenneth Shaw1, Dominik Bauer, Jean Oh, Deepak Pathak and Nancy Pollard
Robotics Institute, Carnegie Mellon University
1 Equal Contribution
Modeling and simulating soft robot hands can aid in design iteration for complex and high degree-of-freedom (DoF) morphologies. This can be further supplemented by iterating on the design based on its performance in real world manipulation tasks. However, this requires a framework that allows us to iterate quickly at low costs. In this paper, we present a framework that leverages rapid prototyping of the hand using 3D-printing, and utilizes teleoperation to evaluate the hand in real world manipulation tasks. Using this framework, we design a 3D-printed 16-DoF dexterous anthropomorphic soft hand (DASH) and iteratively improve its design over three iterations. Rapid prototyping techniques such as 3D-printing allow us to directly evaluate the fabricated hand without modeling it in simulation. We show that the design is improved at each iteration through the hand's performance in 30 real-world teleoperated manipulation tasks. Testing over 600 demonstrations shows that our final version of DASH can solve 16 of the 30 tasks compared to Allegro, a popular rigid hand in the market, which can only solve 7 tasks. We open-source our CAD models as well as the teleoperated dataset for further study; both are available on our website <https://dash-through-interaction.github.io>.
ยง INTRODUCTION
Soft multi-fingered robotic hands have the dexterity and robustness to manipulate a wide variety of objects, recover from contact with the environment, and handle delicate objects without the fear of damaging the objectsย <cit.>.
However, existing robot hands are still far from the capabilities of a human hand. Furthermore,
there is currently no unified design iteration framework for developing complex soft robotic hands and testing them in real world tasks.
The traditional framework for robot design, shown in Figureย <ref>(a),
utilizes testing in simulation to evaluate design capabilities before fabricationย <cit.>.
However, simulating soft robots is computationally intenseย <cit.>, and the sim-to-real gap is significantย <cit.>, rendering state of the art simulators ineffective for iterative design of soft hands.
Soft material dynamics are difficult and slow to simulate for high degree-of-freedom (DoF) robots, requiring empirical coefficients to be calibrated experimentally to bridge the sim-to-real gapย <cit.>.
This difficulty of simulation makes it hard to design soft hands and test them before manufacturing.
Furthermore, fabricating soft robots can be time-consuming and re-tuning simulations for a new design is non-trivial.
In this paper, we offer a case study exploring a framework, shown in Figureย <ref>(b), that uses a sparse pipeline where our primary analysis tools are fabrication and teleoperation in order to illustrate that these techniques are now sufficiently advanced to support a process of rapid design iterations.
We start with creating the soft robot hand design, fabricate it using 3D-printing for rapid iteration, and then evaluate its performance in real world tasks using teleoperation.
Our framework can leverage customizability and rapid prototyping to help us design multiple iterations of soft robotic hands.
We show that we can significantly improve the performance of the hand in real world tasks showing the potential of this framework to supplement existing design iteration techniques.
We present a 16-DoF tendon-driven 3D-printed soft hand DASH, shown in Figureย <ref>, using our proposed framework.
This hand has a small form factor similar in size to a human hand, 3D-printable parts that are easily replaceable, and a modular customizable design that allows for easy iteration.
Through teleoperation, we explore the capabilities of the soft hand in order to inform our design iterations across the three hand versions.
We follow the 3D-printing procedure described by <cit.> to rapidly prototype these hands.
In order to evaluate the dexterity of the hand, we perform 30 manipulation tasks that are inspired by human hand capabilities, which allow us to test the capabilities of our robot hand.
Our hands show improvements over state of the art and across iterations, and the last two iterations require only 3 and 5 days, respectively, to design and test.
The first, second, and third iterations succeed on 70%, 82%, and 83% of executions across all tasks, respectively. We also performed the 30 tasks on the commercial dexterous robotic hand, Allegro <cit.>, which succeeds on only 60% of the tasks.
The contributions of this paper are
* A framework for designing anthropomorphic soft hands that leverages rapid prototyping techniques and uses teleoperation for evaluation, without using simulation.
* State of the art dexterous anthropomorphic soft hand, designed using our framework, that outperforms a commercial dexterous robotic hand on a set of real world manipulation tasks.
* Open-source CAD models, data corresponding to 600 teleoperated human demonstrations, and code to democratize access to low-cost dexterous hands.
First, we discuss related research on teleoperation of dexterous robot hands, rapid prototyping, and soft hand design in Sectionย <ref>.
In Sectionย <ref>, we will discuss the design of the soft hand, fabrication using 3D-printing, hand evaluation using teleoperation and the manipulation tasks used for evaluation. Our first experimental results include testing an off-the-shelf hand Allegroย <cit.> for a baseline comparison in Sectionย <ref>.
Sectionย <ref> presents our case study and describes the design changes in DASH across iterations along with the experiments conducted to evaluate each soft hand iteration's capabilities.
Finally, Sectionย <ref> and Sectionย <ref> will discuss the conclusions of our work and future areas of research.
ยง RELATED WORK
ยง.ยง Teleoperation of Dexterous Robot Hands
Learning a control policy for dexterous manipulation is particularly difficult and time-consuming due to the large number of DoFs, and the complexity of contact-rich interactionsย <cit.>. In contrast, teleoperation of robot hands provides a swift, natural way to control robots, beyond pick-and-place scenarios. This has been used to collect human demonstrations for imitation learningย <cit.>. Beyond learning control policies, teleoperation can especially be helpful during the design process of dexterous robot hands, where the ability to quickly evaluate complex and nuanced capabilities is imperative. In order to work intuitively, teleoperation requires reliable hand pose tracking and a mapping between human and robot hand morphology.
Traditionally, hand tracking has predominantly been implemented using glove-based systems such as the CyberGloveย <cit.> or hand markers with gloves for motion captureย <cit.>. Improvements in hand tracking and pose estimationย <cit.> have led to the development of many vision-based teleoperation approaches. Li et. al.ย <cit.> developed a glove-free approach that relies on depth images and a paired human-robot dataset for hand tracking. Recently, <cit.> and <cit.> used a single uncalibrated RGB camera to track the human hand and body in real-time.
Existing approaches to mapping from human to robot hand morphology can generally be categorized into Joint-to-Joint, Point-to-Point, or Pose-basedย <cit.>.
An example of direct joint-to-joint mapping can be found in <cit.>, who directly map joint angles from a CyberGlove to the corresponding joint angles in a DLR/HIT II hand.
We use a similar technique to map joint angles from a Manus gloveย <cit.> to our soft hand.
To map between different amounts of fingers, <cit.> only use the human thumb, index, and ring finger to teleoperate a three-fingered Barret Hand. We adopt a similar strategy for our four-fingered hand.
ยง.ยง Rapid Prototyping and Soft Hand Design
Iterating for dexterous soft hand designs is a laborious process. The complex design space and the infinite degrees of freedom make it difficult to predict the effects of incremental design changes. Unlike for rigid robot hands, state-of-the-art soft body simulators are not able to provide effective, efficient, and robust evaluation of soft designsย <cit.>.
Hands such as the BRL/Pisa/IIT SoftHandย <cit.> or the RBO handย <cit.> have evolved over years to incorporate more adaptive synergies and dexterity.
To speed up development times and reduce fabrication overhead many works have recently turned towards 3D-printing to either directly print soft handsย <cit.> or to quickly create complex moldsย <cit.>.
While this has significantly reduced the cycle time for fabrication, designing dexterous soft hands still requires a lot of expertise, and trial and error due to the continuously deformable nature of soft robots. The lack of appropriate simulators means the evaluation of soft hand designs has to be done on the real prototype by using hand-crafted policiesย <cit.> or sequential keyframed open-loop posesย <cit.>.
Our work builds on the design and fabrication methodology introduced by <cit.>. They reduce the design complexity of soft hand designs through the introduction of geometric features such as bumps, creases, or the combination of different materials, to achieve segmented
โjoint-likeโ deformations.
We created a novel design by adding three more tendons to all of our fingers to add adduction and abduction as well as folding fingers forward towards the palm for more dexterity.
ยง EXPERIMENT SETUP
ยง.ยง Robot Hand design
ยง.ยง.ยง Finger Joints
All three iterations of consist of four fingers: the thumb, index, middle, and ring fingers (see Figureย <ref>).
In order to achieve modularity, each finger, including the thumb, is designed identically.
This means that any finger could be swapped with another finger without affecting the overall design.
Each finger has three joints (from the base of the finger to the fingertip): the metacarpophalangeal (MCP) joint, interphalangeal proximal (PIP) joint, and the interphalangeal distal (DIP) joint. The joints for each finger are shown in Figureย <ref>.
ยง.ยง.ยง Tendons
Each finger is controlled by four tendons.
Two tendons run along the sides of the MCP joint, closest to the palm, for abduction and adduction, which allows the fingers to move closer together and farther apart. These two tendons are controlled by a single motor, so we refer to them as tendon 0 in Figureย <ref>.
A single tendon, tendon 1 in Figureย <ref>, is used to flex the finger forward at the MCP joint, orthogonally to the axis of motion of the abduction-adduction tendons.
The last tendon, tendon 2, runs through the entire length of the finger to enable completely curling into itself.
The hand CAD designs are shown in Figureย <ref>.
ยง.ยง.ยง Motor Housing
The tendon-driven hand is connected to twelve motors, one per tendon except for the abduction and adduction tendons which are both operated antagonistically by a single motor.
We use Dynamixel XC330-M288-T motorsย <cit.> with 5.6V power supply with 3D-printed pulleys on the motor horns.
The pulleys are anchored to the tendons in order to retract and contract the tendons.
The bidirectional pulleys have two anchor points for the two tendons to move the finger side-to-side.
The palm's size is roughly the size of the average human hand and is constrained by the motor housing and tendon channeling behind the palm.
ยง.ยง Fabrication using 3D-printing
The CAD of the hand assembly for the final hand iteration, shown in Figureย <ref>, includes 4 identical fingers with the thumb attached to the palm, a palm, a top plate to support the palm against the motors, 12 motors, a bottom plate which also houses the Dynamixel U2D2 motor controller, and a xArm6 mount that bolts into the bottom plate. These 3D-printed parts are consistent across all iterations of the soft hand.
The soft dexterous hand's mounts, motor housing, and motor pulleys are all 3D-printed from PLA (rigid material) while the soft hand was printed with Ninjaflex Edge (83A shore hardness)ย <cit.> using an Ender 3 S1 Plus.
For all the robot experiments, the hand was mounted onto a xArm6ย <cit.> robot arm. DASH costs approximately $1500 to build with the majority of the cost consisting of 3D-printer ($500) and the twelve motors ($1000) required. For comparison, the Allegro hand is also 16DoF and costs around $15000 <cit.>.
ยง.ยง Hand Evaluation using Teleoperation
ยง.ยง.ยง Tendon Tension
One of the challenges in designing tendon-driven soft hand systems is maintaining consistent tendon tension and proper actuation force. This is because of the potential wear of soft materials over time which changes the motor angles that must be commanded.
To maintain consistent tendon tension, we record the motor angles by systematically commanding each motor to apply 5mA of tension for a relaxed finger position and apply 450mA of tension for the finger to curl completely.
These motor angles are mapped linearly to [0, 1].
We repeat this for each motor to record the full range of motion of the finger joints with respect to the motor angles.
This tensioning method is completely automatic and is completed in less than one minute. Positioning errors due to slack in the tendons can be resolved by this tensioning procedure.
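A minimal sketch of this normalization step is given below (illustrative only; the reference angles would come from the 5 mA relaxed and 450 mA fully curled positions recorded by the tensioning routine, and the function name is ours):

def normalize_motor_angle(theta, theta_relaxed, theta_curled):
    """Map a raw motor angle to [0, 1] using the two recorded reference angles
    (relaxed finger vs. fully curled finger)."""
    return (theta - theta_relaxed) / (theta_curled - theta_relaxed)

# example: a reading of 135 deg between references at 120 deg and 180 deg -> 0.25
print(normalize_motor_angle(135.0, 120.0, 180.0))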
ยง.ยง.ยง Learning Kinematics
The next challenge is to approximate the kinematics of the finger joints. Since we do not have angle sensors at the joints for real-time feedback, we learn a model using external, offline collected data.
First, we collect training data for each hand to learn a kinematic model.
Since all of the fingers are identical, we learn a model using data from a single finger.
We control the finger through small increments of 3 degrees of actuation, resulting in 1000 finger configurations.
Using AR tags on the sides and back of the finger as shown in Figureย <ref>, we track the angles of the soft ridges from two RGB cameras.
We simplify the model of the finger by assuming fixed joint lengths and that they can only bend at the ridges.
The resulting data is a collection of tuples (joint angles, motor angles) where the motor angles are normalized to [0, 1], as described in Sectionย <ref>. By doing this, we make the kinematic calibration independent of any changes during tendon re-tensioning.
A linear model is learned using the collected data to map finger joint angles to motor angle outputs. We refer to the motors controlling tendons 0, 1, and 2, as shown in Figure <ref>, as motors 0, 1, and 2, respectively. The equations for the MCP, PIP, and DIP joint angles are shown below. In equation <ref>, we learn the MCP joint angles θ_MCP_side, θ_MCP_fwd jointly since the amount of side-to-side angle at the MCP joint can restrict the forward folding motion of the finger. In equation <ref>, the motor 2 angle θ_motor_2 is an average measure of the motor angle for the desired PIP and DIP joint angles θ_PIP, θ_DIP since the same tendon controls both the PIP and DIP joints. The weights in equations <ref> and <ref> are found by fitting our data using linear functions. We collect training data for almost two hours for each iteration of the hand to calibrate new models, but our models for the second and third iterations resulted in the same weights, as shown in Table <ref>.
The finger length and tendon routing changed from the first to the second iteration, but no changes were made from the second to the third iteration that would impact finger kinematics, which explains the change in calibration weights between the first and second iterations.
[ θ_motor_0  θ_motor_1 ] = [ θ_MCP_side  θ_MCP_fwd ] · [ w_1 w_2; w_3 w_4 ] + [ b_1 b_2 ]
θ_motor_2 = (θ_PIP w_5 + b_3)/2 + (θ_DIP w_6 + b_4)/2
[ θ_motor_0  θ_motor_1  θ_motor_2 ] = [ θ_MCP_side  θ_MCP_fwd  θ_PIP  θ_DIP ] · [ w_1 w_2 0; w_3 w_4 0; 0 0 w_5; 0 0 w_6 ] + [ b_1 b_2 b_3 ]
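As a sketch of how such weights and biases can be fitted from the AR-tag calibration data (our illustration, using an ordinary least-squares fit without the block-sparsity of the equations above; the array contents are placeholders, not the paper's data):

import numpy as np

# joint_angles: N x 4 (theta_MCP_side, theta_MCP_fwd, theta_PIP, theta_DIP)
# motor_angles: N x 3 normalized motor commands in [0, 1]
rng = np.random.default_rng(0)
joint_angles = rng.uniform(0.0, 1.0, size=(1000, 4))   # placeholder calibration data
motor_angles = rng.uniform(0.0, 1.0, size=(1000, 3))

X = np.hstack([joint_angles, np.ones((len(joint_angles), 1))])  # append bias column
W, *_ = np.linalg.lstsq(X, motor_angles, rcond=None)            # 5 x 3 solution
weights, biases = W[:4], W[4]                                   # w's and b's
pred_motor = X @ W                                              # predicted motor angles
print(weights.shape, biases.shape, pred_motor.shape)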
ยง.ยง.ยง Teleoperation System
We use Manus Meta Quantum Metaglovesย <cit.> designed for VR tracking and Mocap Use, as shown in Figureย <ref> (costs approximately $8000). The Manus glove is worn on the operator's hand and tracks fingertip positions within a 0.1-degree accuracy using hall effect sensors. Each finger returns 4 angles ฮธ_๐ฌ๐ข๐ฏ_๐๐๐ฝ๐พ, ฮธ_๐ฌ๐ข๐ฏ_๐ฟ๐๐ฝ, ฮธ_๐ฏ๐จ๐ฏ, ฮธ_๐ฃ๐จ๐ฏ in real time. These angles are mapped directly one-to-one to the robot hand embodiment. Once in the robot hand embodiment, we convert to motor joint angles by using the models learned fromย <ref>.
To control the robot arm in the workspace, we must track the human's wrist. We use wearable SteamVRย <cit.> trackers which use time of flight from lasers transmitted from SteamVR Lighthouses that are placed around and above the operator. One tracker is worn on the glove, and another is around the waist. The waist tracker is used as a home coordinate frame, and the wrist moves relative to it. The transformation between these two trackers is mapped to the robot frame. We do this by rotating the waist tracker to be the same rotational pose as the robot's base frame and rotating the end-effector pose to ensure it's in the same rotation as the human wrist. For example, if you hold your hand straight out and towards the ground, the robot hand should also be straight out and towards the ground. For translation, we scale the human wrist poses up slightly to cover the robot's larger workspace and adjust its translational offset to be comfortable for most users. Real-time inverse kinematics is then used to map the end effector position to robot joint angles which command the xArm6. Safety checks, including one that uses dynamic force feedback on the arm, are conducted to ensure that no accidental damage is caused to the robot or the environment around it.
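A minimal sketch of the wrist-to-robot mapping is shown below; the alignment transform and the translation scale are illustrative assumptions, not the calibrated values used in our system.

import numpy as np

# Homogeneous 4x4 transforms. T_ROBOT_FROM_WAIST and TRANSLATION_SCALE are
# illustrative placeholders for the calibrated alignment and workspace scaling.
T_ROBOT_FROM_WAIST = np.eye(4)      # waist tracker frame aligned to the robot base frame
TRANSLATION_SCALE = 1.2             # scale human wrist motion up to the larger robot workspace

def wrist_target_in_robot_frame(T_waist_from_wrist):
    """Map the wrist pose (relative to the waist tracker) into the robot base frame."""
    T = (T_ROBOT_FROM_WAIST @ T_waist_from_wrist).copy()
    T[:3, 3] *= TRANSLATION_SCALE   # scale translation only; rotation is passed through
    return T                        # handed to real-time inverse kinematics for the xArm6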
ยง.ยง Manipulation Tasks
Each iteration is tested on five repetitions of each of the 30 tasks listed below, which were inspired by the grasp types defined in <cit.> (see Table <ref>) and by tasks from previous teleoperation works <cit.>. These tasks are categorized by the type of grasp or force necessary.
Additionally, some tasks were hand-picked as tasks where compliance of the hand may be advantageous.
ยง BASELINE STUDY: ALLEGRO DEXTEROUS HAND
ยง.ยง Design & Fabrication
Allegro <cit.> is an off-the-shelf gripper that we use as a baseline against all three hand iterations. Allegro is a dexterous robotic hand that has four fingers with motors at the joints, a rigid structure, and large rubber spherical fingertips. We perform the same 30 manipulation tasks with Allegro to compare its performance against our three iterations of DASH.
ยง.ยง Evaluation
As shown in Tableย <ref>, Allegro succeeded on all 5 repetitions on 7 out of 30 tasks. These tasks included manipulating the Drill, Dry-Erase board eraser, Plush broccoli, Plush Dinosaur, Spam box, Wine glass, and Stacking cubes.
Allegro performed best on deformable objects and large objects. This was expected for the gripper because of its large size and rigidity.
From Tableย <ref>, we see that the Allegro hand and fingertips were too large to perform some tasks such as picking up the Pen, Card, Plastic bag, Scissors, and Hammer which required precise motion of the fingers. Additionally, the Cup pouring grasp was unstable due to the spherical fingertips rotating the cup in-hand during the task. The side-to-side motion (or abduction-adduction) of the fingers was limited, making Dice Rotation coarse and unpredictable. However, the larger hand, compared to human hands, had stable grasps for picking up large or heavy objects such as the Drill, Softball, Plush Dinosaur, Plush Broccoli, and Wine Glass (see Figureย <ref>).
ยง DASH ITERATIVE DESIGN STUDIES
ยง.ยง Iteration
ยง.ยง.ยง Design & Fabrication
Iteration focuses on creating a robotic finger as the first step toward making DASH.
We build on the hand design by <cit.>, which was designed iteratively through testing in simulation.
Improving their hand design with joint-like creases in the 3D-print, we add three more tendons to increase dexterity and the workspace of the finger.
The three tendons correspond to adduction, abduction, and folding the finger towards the palm, and require designing the MCP joint as a multi-axis flexure to support these motions.
Furthermore, a unique feature of is that the finger can curl completely into itself.
Our earlier CAD designs of this finger could not fold inwards without self-collisions or high forces.
To achieve this, we change the proportions of the finger and decrease the thickness of the middle portion of the finger (in between the PIP and DIP joints) in order to eliminate self-collisions.
Thus, our iteration involves redesigning the shape of the finger to enable curling fully into itself, incorporating four total tendons per finger, and redesigning the MCP joint as a multi-axis flexure for increased dexterity in our desired hand.
We iterated on the finger design for approximately four months and use this design for all four fingers of the hand.
ยง.ยง.ยง Evaluation
Evaluation for uses keyframed poses on testbench setups with 2 fingers and 6 motors.
Initial tests to analyze the force applied by each finger showed that the grasp force was too low to lift objects.
Accordingly, we increased the stiffness of the MCP joints and changed materials from NinjaFlex Chinchilla (Shore Hardness 75A) to NinjaFlex Edge (Shore Hardness 83A) <cit.>.
NinjaFlex Edge provides higher stiffness but a smoother finish compared to the tackier surface of NinjaFlex Chinchilla.
Hence, we add 3M grip tape to the fingertips to add friction to the surface.
Each finger consists of a single print of soft material (NinjaFlex Edge) to create the structure, 4 tendons made of fishing line, and 4 tendon anchors printed from rigid material (PLA).
In the next section, we describe the design of a four-fingered hand structure using the above finger design.
ยง.ยง Iteration
ยง.ยง.ยง Design & Fabrication
Moving from the finger design to the hand design involved minimizing the footprint of the motor arrangement below the palm, which dictates the palm size and thumb placement. We route the tendons from the finger through the palm and motor housing into the same plane as the motor pulleys to avoid tendons falling off the rails while the hand is in motion. To minimize friction from long and winding tendon routings, we place each finger so that its tendon is aligned with its motor pulley. The design also included other considerations such as tendon anchors, 3D-printing settings, and material stiffness. For example, printing the hand with more infill makes it stiffer but requires more torque than our motors can supply to pull the tendon through the joint's full range of motion. To better understand the stiffness of the fingers, we test finger strength by curling the finger completely and using a force gauge to pull on the finger until it uncurls (see Table <ref> for results). The fully assembled hand is shown on the left in Figure <ref>. We designed the full hand assembly in one month.
ยง.ยง.ยง Evaluation
We test on the 30 manipulation tasks from Sectionย <ref>, repeating each task five times. Over 150 repetitions, succeeded on all 5 repetitions for 10 of the 30 tasks (see Tableย <ref>). For , these tasks were Scissors, Hammer, Wooden cylinder, Cloth, Plush Broccoli, Plush Dinosaur, Pringles can, Mustard bottle, Wine glass, and Cup stacking. As shown in Tableย <ref> and Figureย <ref>, struggled to grasp small objects such as the M&M, Pen, and Chip since the fingers were not able to reach and properly oppose each other. For tasks involving precise motions such as picking the grape off a stem and opening the plastic bag, uses the abduction-adduction capability to complete these tasks in more repetitions compared to Allegro. The abduction-adduction grip strength of is high and enables picking up objects such as the wooden cylinder with ease, unlike Allegro. Tasks involving big objects such as the heavy drill and plush broccoli that were successful with Allegro, were also successfully grasped with . Across all repetitions, successfully completed 70% (105 of 150) of the repetitions while Allegro only completed 60% (90 of 150).
This hand iteration outperforms the Allegro baseline but leaves room for improvement. Grasps that require all four fingers, such as picking up a tennis ball, would be more successful if the thumb could reach and oppose the rest of the fingers.
The best opposability to the thumb was to the ring finger, hence pinch (or precision) grasps were easiest to execute with those two fingers. Improving the reachability and opposability of the fingertips requires a smaller palm or longer fingers. We explore these design options in in order to have more overlap in the workspace of the fingers.
ยง.ยง Iteration
ยง.ยง.ยง Design & Fabrication
This second hand iteration consists of changes to the size of the hand and the MCP joint of the finger. To allow for more reachability among the fingers as well as opposability, the fingers were made longer and the palm was made smaller, as shown in Table <ref>. The MCP joint was improved to achieve a larger range of motion.
The underlying structure of the MCP joint is a cylinder to act as a multi-axis flexure, thus we increase the height of the cylinder to increase the joint angle range for the side-to-side and forward motion of the fingers. The measurement changes also resulted in a higher maximum load of a single finger as shown under finger strength in Tableย <ref>.
For comparison, is similar in size to the average male hand which is 88.9mm wide and 193mm long (wrist to fingertip)ย <cit.>ย . Compared to , there is more than a 25% reduction in area of the palm and the finger length increased by 10% in which is shown in the middle in Figureย <ref>.
Thus, has a larger overlap in the workspace of the fingers solving the reachability and opposability issues in , and also achieves increased range of side-to-side and forward motion for the fingers by redesigning the MCP joint.
These changes in the hand result in changes for the tendon routing and motor placements. The motors are now arranged efficiently behind the palm with the tradeoff that the fingers are no longer in line with the motor pulleys requiring us to re-route the tendons.
Re-routing tendons out of plane, however, increases the amount of friction the motor needs to overcome to actuate the finger joints. This added friction is not a problem for our chosen motors, which allows us to pursue a tighter motor arrangement compared to , resulting in a smaller palm size.
Designing, printing, and assembling took 5, 83, and 6.5 hours, respectively.
Printing required us to re-print not only the soft hand, but also the rigid motor housing, as the motor arrangement differs from that of the previous iteration.
In total, making this iteration from the previous one took approximately 3 days.
ยง.ยง.ยง Evaluation
With larger range of motion at the MCP joint and better reachability, we expect to achieve better performance on tasks involving smaller objects like M&M, Pen, and Chip. As shown in Tableย <ref>, did improve performance on Pen and Chip but not on M&M. Card pickup was another task that did not improve from . Both of these tasks require fine manipulation which is still a limitation in . Instead, our main improvement from to is in achieving better power grasps. Tripod grasps or using more than two fingers was necessary to have stable grasps, especially for the holding tasks such as hammer, screwdriver, and chopstick.
Small objects such as the 1 inch cubes for stacking, chips, and M&Ms are difficult as they require stable precision grasps which can be improved by repositioning the thumb.
Figureย <ref> shows the tasks where and differed in task success with the dashed line separating tasks where one hand performs better than the other. performs better than in 14 tasks, including tasks involving Soft ball, Screw driver, Tennis ball, Dry-erase board eraser, and Spam box that all require power grasps. The most significant improvements are seen for Pen, Chip (see the inset in Figureย <ref>,) and Tennis ball. Pen and Chip are among the smaller task objects, which are easier to grasp with better reachability and opposability of the thumb with the rest of the fingertips.
Having more space between the fingers made abduction-adduction tasks such as picking grape off of a stem and Wooden cylinder easier for compared to , but still performs reasonably well. Out of the 150 repetitions, is successful in 123 repetitions, which is 18 more when compared to (as shown in Tableย <ref>). Additionally, the number of tasks where all five repetitions were successful increased from 10 tasks using to 14 tasks using .
Thus, has a design that improves over 's design by differing in palm and finger size resulting in better overall performance in our evaluation.
In order to improve on finer manipulations for card pickup and small objects, we choose to focus on the fingertip shape and thumb position for .
ยง.ยง Iteration
ยง.ยง.ยง Design & Fabrication
The changes from to involve changing the thumb placement and fingertip shape. In order to make grasps with only two or three fingers more stable, the thumb has to be directly opposable to the rest of the fingers, most importantly the index finger. In , the thumb has a 45-degree angle to the palm which we change to be parallel to the index finger in as shown on the right in Figureย <ref> and in Tableย <ref>. This requires rerouting the tendons only for the thumb through the palm to the motor and rotating the position of the abduction-adduction motor for the thumb. In addition to the thumb placement, the fingertip shape was changed from a rounded surface to a wedge-like surface (see Figureย <ref>). The rounded surface in presented a point contact when interacting with objects. In contrast, the wedge-like surface will have a larger contact area and thinner fingertip (almost nail-like) in order to get under objects to grasp. This results in a thin fingertip edge, almost half the size of and 's fingertip edges (see Tableย <ref>).
We also move the tendon routing farther away from the center axis of the MCP joint so that we can exert more torque when folding the finger forward about the MCP joint.
Designing, printing, and assembling took 67.25, 4, and 4.25 hours, respectively. As with the previous iteration, we reprinted the motor housing due to the new thumb placement. In total, making this iteration from the previous one took almost 5 days.
ยง.ยง.ยง Evaluation
has more successful tasks than the previous hand iterations and our baseline, completing 16 tasks successfully in all repetitions as opposed to the 14 tasks successfully executed (see Tableย <ref>). In total, succeeded on 124 repetitions which is 1 more than the number of repetitions is successful at. However, the trend in Tableย <ref> shows that is able to master more tasks (successful at least in 4 repetitions) when compared to . We also observe this through higher grasp stability during power grasps and handling of delicate objects, during teleoperation.
Additionally, we find that the fingertip shape makes a large difference for specific tasks. In Figureย <ref>, we show the tasks that and differ in task success. Note that we have more tasks where outperformed when compared to the tasks where performs better with the most significant differences occurring in Cube flip and Card pickup. The flat fingertips of are ideal for thin delicate card pickup but not for the cube flip. Reorienting the cube in-hand in Cube flip is better suited to the rounded fingertip on , keeping a stable point contact while the object rotates on the table.
For opening Plastic bag and picking Grape off stem task, we attempt using both abduction-adduction (similar to and ) and pinch grasp with the thinner fingertips of . We observe that the pinch grasp is unsuccessful while the abduction-adduction grasp is successful, leading us to hypothesize that would have the same performance as in these tasks if we use only abduction-adduction grasps.
When creating , our goal is to improve on M&M and card pickup tasks that performed poorly on. In Tableย <ref>, we see that successfully completed 5 repetitions of card pickup but only 1 repetition of holding an M&M. It is important to note that M&M is our most difficult task that none of the hands succeed more than two times at (see Figureย <ref>). Furthermore, as shown in Figureย <ref>, the card is able to be picked up with a pinch grasp which was not possible with and (see supplemental video). Thus, we improve the pinch grasps from to by changing thumb placement and using flatter fingertips that are well-suited for this delicate card pickup task.
ยง DISCUSSION
Across the 30 tasks, we observe that has the best performance solving 16 tasks successfully completing all repetitions, while , , and Allegro solve 14, 10, and 7 tasks, respectively. Note that all iterations of outperformed the Allegro baseline as shown in Figureย <ref>. Thus, we see a consistent improvement in performance on our tasks across iterations. This validates our framework's ability to use real world evaluation to iteratively design soft robot hands through rapid prototyping and teleoperation.
From our case study in Sectionย <ref>, we draw three crucial observations regarding our proposed framework. Firstly, the direct feedback from performing real world manipulation tasks with was crucial for us in informing the design changes required to improve performance across iterations. In contrast, testing in simulation can result in design changes that do not necessarily translate to performance improvement in the real world. Secondly, using teleoperation removed the necessity of designing different control policies for 30 various tasks across four robot hand morphologies in our case study, and allowed us to adjust grasps in real time during task execution, which is often not feasible in simulation or by using keyframed poses. Lastly, despite using real robot hands in the design iteration process, our framework has a short iteration time, consisting mostly of printing time, by leveraging 3D-printing and the use of teleoperation to evaluate the design in the real world.
Observing that our robot hand has a similar structure and size to human hands, we note a crucial limitation of our framework, shown in Figureย <ref>(b), for robot hand morphologies that diverge from human hand morphology as teleoperation might not be feasible in such cases. Additionally, calibration or mapping of the teleoperator's hand to the robot hand can have a significant impact on the robot hand's performance in real world manipulation tasks. For example, an inaccurate mapping from the teleoperator's hand to the robot hand can incorrectly evaluate the robot hand to be incapable of some tasks. Another limitation for this framework is that it can result in slower turnover times for designs that cannot be made with rapid prototyping techniques such as 3D-printing.
ยง CONCLUSION AND FUTURE WORK
This paper presents a framework that can supplement existing robot hand design iteration techniques by leveraging 3D-printing and teleoperation.
We exhibit the potential of this framework through a case study of designing a 16DoF 3D-printed dexterous anthropomorphic soft hand . By 3D-printing the new design at each iteration, and evaluating it on real world manipulation tasks using teleoperation to inform future hand designs, we consistently improve its performance over the baseline Allegro hand and across successive iterations of . CAD models of all versions and teleoperated demonstration data for all experiments are open-sourced at <https://dash-through-interaction.github.io>
Future directions include automatic design iteration by singling out features of the CAD design and correlating them with capabilities of the hand. Further study would be required to automate this process and use collected data to learn what properties of the hand should be improved for better task performance. Currently, the process of design iteration in our case study was manual in that we chose parameters to change based on task performance and observations from real-world manipulation experiments. Furthermore, trying out downstream tasks, e.g., hammering a nail with a hammer, would show more strengths and weaknesses of the robot hands that can better inform design choices. Finally, we can improve on by allowing for sensor integration within or on the 3D-printed hand that can provide feedback to the robot. This could be useful for learning closed-loop policies to execute dexterous manipulation tasks.
Randomized Robust Price Optimization
Xinyi Guan, Velibor V. Mišić
====================================
ยง INTRODUCTION
Price optimization is a key problem in modern business. The price optimization problem can be stated as follows: we are given a collection of products. We are given a demand model which tells us, for each product, what the expected demand for that product will be as a function of the price of that product as well as the price of the other products. Given this demand model, the price optimization problem is to decide a price vector โ i.e., what price to set for each product โ so as to maximize the total expected revenue arising from the collection of products.
The primary input to a price optimization approach is a demand model, which maps the price vector to the vector of expected demands of a product. However, in practice, the demand model is never known exactly, and must be estimated from data. This poses a challenge because data is typically limited, and thus a firm often faces uncertainty as to what the demand model is. This is problematic because a mismatch between the demand model used for price optimization โ the nominal demand model โ and the demand model that materializes in reality can lead to suboptimal revenues.
As a result, there has been much research in how to address demand model uncertainty in pricing. In the operations research community, a general framework for dealing with uncertainty is robust optimization. The idea of robust optimization is to select an uncertainty set, which is a set of values for the uncertain parameter that we believe could plausibly occur, and to optimize the worst-case value of the objective function, where the worst-case is taken over the uncertainty set. In the price optimization context, one would construct an uncertainty set of potential demand models and determine the prices that maximize the worst-case expected revenue, where the worst-case is the minimum revenue over all of the demand models in the uncertainty set. In applying such a procedure, one can ensure that the performance of the chosen price vector is good under a multitude of demand models, and that one does not experience the deterioration of a price vector optimized from a single nominal demand model.
Typically in robust optimization, the robust optimization problem is to find the single best decision that optimizes the worst-case value of the objective function. Stated in a slightly different way, one deterministically implements a single decision. However, a recent line of research <cit.> has revealed that with regard to the worst-case objective, it is possible to obtain better performance than the traditional deterministic robust optimization approach by randomizing over multiple solutions. Specifically, instead of optimizing over a single decision in some feasible set that optimizes the worst-case objective, one optimizes over a distribution supported on the feasible set that informs the decision maker how to randomize.
In this work, we propose a methodology for robust price optimization that is based on randomization. In particular, we propose solving a randomized robust price optimization (RRPO) problem, which outputs a probability distribution that specifies the frequency with which the firm should use different price vectors. From a practical perspective, such a randomization scheme has the potential to be implemented in modern retailing as a strategy for mitigating demand uncertainty. In particular, in an ecommerce setting, randomization is already used for A/B testing, which involves randomly assigning some customers to one experimental condition and other customers to a different experimental condition. Thus, it is plausible that the same form of randomization could be used to display different price vectors with certain frequencies. In the brick-and-mortar setting, one can potentially implement randomization by varying prices geographically or temporally. For example, if the RRPO solution is the three price vectors p_1, p_2, p_3 with probabilities 0.2, 0.3, 0.5, then for a set of 50 regions, we would choose 10 (= 50 × 0.2) to assign to price vector p_1, 15 (= 50 × 0.3) to assign to p_2, and 25 (= 50 × 0.5) to assign to p_3. Similarly, if one were to implement the same randomization scheme temporally, then for a selling horizon of 20 weeks, one would use the price vector p_1 for 4 (= 20 × 0.2) weeks, p_2 for 6 (= 20 × 0.3) weeks and p_3 for 10 (= 20 × 0.5) weeks.
We make the following specific contributions:
* Benefits of randomization. We formally define the RRPO problem and analyze it under different conditions to determine when the underlying robust price optimization problem is randomization-receptive โ there is a benefit from implementing a randomized decision over a deterministic decision โ versus when it is randomization-proof โ the optimal randomized and deterministic decisions perform equally well. We show that the robust price optimization problem is randomization-proof in several interesting settings, which can be described roughly as follows: (1) when the set of feasible price vectors is convex and the set of uncertain revenue functions is concave; (2) when the set of feasible price vectors and the set of uncertain demand parameters are convex, and the revenue function obeys certain quasiconvexity and quasiconcavity properties with respect to the price vector and the uncertain parameter; and (3) when the set of feasible price vectors is finite, and a certain minimax property holds. We showcase a number of examples of special cases that satisfy the hypotheses of these results and consequently are randomization-proof. We also present several examples showing how these results can fail to hold when certain assumptions are relaxed and the problem thus becomes randomization-receptive.
* Tractable solution algorithms. We propose algorithms for solving the RRPO problem in two different settings:
* In the first setting, we assume that the set of possible price vectors is a finite set and that the uncertainty set of demand function parameters is a convex set. In this setting, when the revenue function is quasiconvex in the uncertain parameters, we show that the RRPO problem can be solved via delayed constraint generation. The separation problem that is solved to determine which constraint to add is exactly the nominal pricing problem for a fixed uncertain parameter vector of the demand function. For the log-log and semi-log demand models, we show that this nominal pricing problem can be reformulated and solved to global optimality as a mixed-integer exponential cone program. We believe these reformulations are of independent interest as to the best of our knowledge, these are the first exact mixed-integer convex formulations for these problems in either the marketing or operations literatures, and they leverage recent advances in solution technology for mixed-integer conic programs (as exemplified in the conic solver Mosek).
* In the second setting, we assume that both the price set and the uncertainty set are finite sets. In this setting, we show how the RRPO problem can be solved using a double column generation method, which involves iteratively generating new uncertainty realizations and price vectors by solving primal and dual separation problems, respectively. We show how the primal and dual separation problems can be solved exactly for the linear, semi-log and log-log demand models.
* Numerical evaluation with synthetic and real data. We evaluate the effectiveness of randomized pricing on different problem instances generated synthetically and problem instances calibrated with real data. Using synthetic data instances, we show that randomized pricing can improve worst-case revenues by as much as 1300% over deterministic pricing, while in our real data instances, the benefit can be as high as 92%. Additionally, we show that for instances of realistic size (up to 20 products), our algorithm can solve the RRPO problem in a reasonable amount of time (no more than four minutes on average).
The rest of this paper is organized as follows. Sectionย <ref> reviews the related literature on pricing, robust optimization and randomized robust optimization. Sectionย <ref> formally defines the nominal price optimization problem, the deterministic robust price optimization problem and the randomized robust price optimization problem. Sectionย <ref> analyzes the robust price optimization problem and provides conditions under which the price optimization problem is randomization-receptive and randomization-proof. Sectionย <ref> presents our constraint generation approach for solving the RRPO problem when the price set is a finite set and the uncertainty set is a convex set, and discusses how this approach can be adapted for different families of demand models. Sectionย <ref> provides a brief overview of our methodology for solving the RRPO problem when the price set and uncertainty sets are finite sets, with the details provided in Sectionย <ref> of the companion. Sectionย <ref> presents our numerical experiments. Lastly, in Sectionย <ref>, we conclude and highlight some directions for future research.
ยง LITERATURE REVIEW
Our paper is closely related to three streams of research: pricing, robust optimization and the use of randomized strategies in optimization. We discuss each of these three streams below.
Pricing optimization and demand models. Optimal pricing has been extensively studied in many fields such as revenue management and marketing research; for a general overview of this research area, we refer readers to <cit.> and <cit.>. An important stream of pricing literature is on static pricing, which involves setting a fixed price for a product. The most commonly considered demand models in the static pricing literature are the linear and log-log models. For example, the papers of <cit.> and <cit.> assume linear demand functions in the study of pricing strategies. The papers of <cit.>, and <cit.> use log-log (multiplicative) demand functions to represent aggregate demand. The paper of <cit.> considers a semi-log demand model. Besides linear, semi-log and log-log demand functions, another type of demand form that is extensively discussed in pricing literature is based on an underlying discrete choice model. For example, <cit.>, <cit.>, and <cit.> consider the product line pricing problem under the multinomial logit (MNL) model. <cit.> consider attraction demand models which subsume MNL models. <cit.> study the pricing problem with the nested logit (NL) models, and show the concavity of the profit function with respect to market share holds. <cit.> characterize the optimal pricing structure under the general nested logit model with product-differentiated price sensitivities and arbitrary nest coefficients. The papers of <cit.> and <cit.> study the multiproduct pricing problem under the family of generalized extreme value (GEV) models which includes MNL and NL models as special cases. Our work differs from this prior work on multiproduct pricing in that the demand model is not assumed to be known, and that there is an uncertainty set of plausible demand models that could actually materialize. Correspondingly, the firm is concerned not with expected revenue under a single, nominal demand model, but with the worst-case revenue with respect to this uncertainty set of demand models.
Another significant stream of pricing literature is to consider multiple-period pricing decisions where the prices of products change over time and there is a fixed inventory of each product; we refer readers to <cit.>, <cit.>, <cit.>, and <cit.> for a comprehensive review on dynamic pricing strategies with inventory considerations. As in the static pricing literature, studies in dynamic pricing also vary in the types of demand models. <cit.> assume linear demand in a multiperiod single product pricing problem, and show that the corresponding pricing policy can perform well even under model misspecification. <cit.> consider multiplicative models where the demand rate and price discount have the logarithmic relationship, and consider a multiproduct clearance pricing optimization problem for the fast-fashion retailer Zara. <cit.> consider dynamic pricing under MNL models for horizontally differentiated products and show that the profit function is unimodal in prices, while <cit.> and <cit.> reformulate the MNL profit as a concave function of its market share rather than prices. Our work focuses on the static, single-period setting where there is no inventory consideration, and is thus not directly related to this stream of the pricing literature.
Robust Optimization. Within the literature mentioned above, either the probability distributions of demand are assumed to be known exactly or the demand models are estimated from historical data. However, in practice, the decision maker often has no access to the complete information of demand distributions. Also, in many real applications, the lack of sales data makes it hard to obtain a good estimation of demand models, which leads to model misspecification and thus suboptimal pricing decisions. In operations research, this type of challenge is most commonly addressed using the framework of robust optimization, where uncertain parameters are assumed to belong to some uncertainty set and one optimizes for the worst-case objective of parameters within the set. We refer readers to <cit.>, <cit.>, <cit.> and <cit.> for a detailed overview of this approach. Robust optimization has been widely applied in various problem settings such as assortment optimization <cit.>, inventory management <cit.> and financial option pricing <cit.>.
Within this literature, our paper contributes to the substream that considers robust optimization for pricing. <cit.> consider tractable robust counterparts to the deterministic multiproduct pricing problem with the budget-of-resource-consumption constraint in the case of additive demand uncertainty, and investigate the impact of uncertainty on the optimal prices of multiple products sharing capacitated resources. <cit.> consider robust multiproduct pricing optimization under the generalized extreme value (GEV) choice model and characterize the robust optimal solutions for unconstrained and constrained pricing problems. <cit.> study the robust pricing problem with interval uncertainty of the price sensitivity parameters under the multi-product linear demand model. For robust dynamic pricing problems, <cit.> and <cit.> use relative entropy to represent uncertainty in the demand rate and thus the demand uncertainty can be expressed through a constraint on relative entropy. <cit.>, <cit.> and <cit.> all study robust dynamic pricing problems with demand uncertainty modeled by intervals.
<cit.> study robust dynamic price optimization on an omnichannel network with cross-channel interactions in demand and supply where demand uncertainty is modeled through budget constraints.
From a different perspective, <cit.> develop a data-driven framework for solving the robust dynamic pricing problem by directly using samples in the optimization. Specifically, the paper considers three types of robust objective (max-min, min-max regret and max-min ratio), and uses the given sampled scenarios to approximate the uncertainty set by a finite number of constraints.
Our work differs from the majority of this body of work that takes a robust optimization approach to pricing in that the decision we seek to make is no longer a deterministic decision, but a randomized one. Within this body of work, the papers closest to our work are <cit.> and <cit.>. The paper of <cit.> considers a pricing problem under a valuation-based model of demand, where each customer has a valuation drawn from an unknown cumulative distribution function on the positive real line, and the firm only has one historical data point. The paper considers the pricing problem from a max-min ratio standpoint, where the firm seeks to find a pricing mechanism that maximizes the percentage attained of the true maximum revenue under the worst-case (minimum) valuation distribution consistent with the data point. The paper shows the approximation rate that is attainable given knowledge of different quantiles of the valuation distribution when the valuation distribution is a regular distribution or a monotone non-decreasing hazard rate distribution. The mechanisms that are proposed in the paper, which are mappings from the data point to a price, include deterministic ones that offer a fixed price, as well as randomized ones that offer different prices probabilistically. The paper of <cit.> considers a similar setting, where instead of knowing a point on the valuation CDF exactly or to within an interval, one has access to an IID sample of valuations drawn from the unknown valuation CDF, and similarly proposes deterministic and randomized mechanisms for this setting.
With regard to <cit.> and <cit.>, our setup differs in a number of ways. First, our methodology focuses on a max-min revenue objective, as opposed to a max-min ratio objective that considers performance relative to oracle-optimal revenue. Our methodology also does not start from a valuation model, but instead starts from an aggregate demand model, and additionally considers the multi-product case in the general setting. Additionally, we do not take data as a starting point, but instead assume that the aggregate demand model is uncertain. Lastly, the overarching goals are different: while <cit.> and <cit.> seek to understand the value of data, and how well one can do with limited data, our goal is to demonstrate that from the perspective of worst-case revenue performance, a randomized pricing strategy can be preferable over a deterministic fixed price, and to develop tractable computational methods for computing such strategies under commonly used demand models in the multi-product setting.
Randomized strategies in optimization under uncertainty. The conventional robust optimization problems mentioned above only consider deterministic solutions. In recent years, the benefit of using randomized strategies has received increasing attention in the literature on decision making under uncertainty and robust optimization. <cit.> study randomized strategies for min-max regret combinatorial optimization problems in the cases of interval uncertainty and uncertainty representable by discrete scenarios, and provide bounds on the gains from randomization for these two cases. <cit.> consider randomness in a network interdiction min-max problem where the interdictor can benefit from using a randomized strategy to select arcs to be removed.
The paper of <cit.> considers the problem of making a decision whose payoff is uncertain and minimizing a risk measure of this payoff, and studies under what circumstances a randomized decision leads to lower risk than a deterministic decision. The paper characterizes the classes of randomization-receptive and randomization-proof risk measures in the absence of distributional ambiguity (i.e., classical stochastic programs), and discusses conditions under which problems with distributional ambiguity (i.e., distributionally robust problems) can benefit from randomized decisions.
Subsequently, the paper of <cit.> studies the value of randomized solutions for mixed-integer distributionally robust optimization problems. The paper develops bounds on the magnitude of improvement given by randomized solutions over deterministic solutions, and proposes a two-layer column generation method for solving single-stage and two-stage linear DRO problems with randomization. Our paper relates to <cit.> in that we apply a similar two-layer column generation approach for solving the randomized robust pricing problem when the price set and the uncertainty set are both finite; we discuss this connection in more detail in our discussion of the paper of <cit.> below.
The paper of <cit.> develops a randomization approach for solving a distributionally robust maximum flow network interdiction problem with a conditional-value-at-risk objective, which is also solved using a column generation approach.
Our work is most closely related to the excellent paper of <cit.>. The paper of <cit.> introduces randomization into the robust assortment optimization and characterizes the conditions under which a randomized strategy strictly improves worst-case expected revenues over a deterministic strategy. The paper proposes several different solution methods for finding an optimal distribution over assortments for the MNL, Markov chain and ranking-based models. For the MNL model in particular, the paper adapts the two-layer column generation method of <cit.> to solve the randomized robust assortment optimization problem when the uncertainty set is discrete.
Our paper shares a high-level viewpoint with the paper of <cit.> in that revenue management decisions, such as assortment decisions and pricing decisions, are subject to uncertainty and from an operational point of view, have the potential to be randomized and to benefit from randomization. From a technical standpoint, several of our results on the benefit of randomization when the price set is discrete that are stated in Sectionย <ref> are generalizations of results in <cit.> to the pricing setting that we study. In terms of methodology, the solution approach we apply when the price set and uncertainty set are discrete in Sectionย <ref> (described more fully in Sectionย <ref>) is related to the approach in <cit.> for the MNL model, as we also use the two-layer column generation scheme of <cit.>. The main difference between the method in <cit.> and our method lies in the nature of the subproblems. In the paper of <cit.>, the primal subproblem is a binary sum of linear fractional functions problem, and the dual subproblem is essentially a mixture of multinomial logits assortment problem that can be reformulated as a mixed-integer linear program. In our paper, the primal and dual subproblems that are used to generate new price vectors and uncertainty realizations comes from the underlying pricing problem and the structure of different demand models (linear, semi-log and log-log), which lead to different subproblems than in the assortment setting. In particular, in the semi-log and log-log cases, both the primal and dual subproblems can be formulated as mixed-integer exponential cone programs. In the semi-log and log-log cases specifically, the formulations of the dual subproblems, which are used to identify new price vectors to add, require a logarithmic transformation together with a biconjugate representation of the log-sum-exp function. This technique is also used to develop a constraint generation scheme for the randomized robust pricing problem when the price set is discrete and the uncertainty set is convex (Sectionย <ref>); the subproblem in this case involves solving a nominal pricing problem under the semi-log or log-log model, which we are also able to reformulate exactly as a mixed-integer exponential cone program. As noted in the introduction, we believe these are the first exact formulations of these problems using mixed-integer conic programming.
Other work on randomization. Lastly, we comment on several streams of work that use randomization but are unrelated to our paper.
Within the revenue management community, there are instances where randomization is an operational aspect of the algorithm. For example, in network revenue management, the heuristic of probabilistic allocation control involves using the primal variable values in the deterministic linear program (DLP) to decide how frequently requests should be accepted or rejected <cit.>. Here, randomization is used to ensure that the long run frequency with which different requests are accepted or rejected is as close as possible to the DLP solution, which corresponds to an idealized upper bound on expected revenue. As another example, randomization is often also a part of methods for problems that involve learning. For example, <cit.> propose a method for network revenue management where there is uncertainty in demand rates based on Thompson sampling, which is a method from the bandit literature that involves taking a random sample from the posterior distribution of an uncertain parameter and taking the action that is optimal with respect to that sample. Here, randomization is a way of ensuring that the decision maker explores possibly suboptimal actions. In our work, the focus is not on using randomization to achieve better expected performance or using randomization to achieve a balance between exploration and exploitation, but rather to operationalize randomization to achieve better worst-case performance.
ยง PROBLEM DEFINITION
In this section, we begin by defining the nominal price optimization problem in Sectionย <ref>. We subsequently define the deterministic robust price optimization problem in Sectionย <ref>. Lastly, we define the randomized robust price optimization problem in Sectionย <ref>.
ยง.ยง Nominal price optimization problem
We assume that the firm offers I products, indexed from 1 to I. We let p_i denote the price of product i ∈ [I], where we use the notation [n] = {1,…,n} for any positive integer n. We use p = (p_1,…, p_I) to denote the vector of prices. We assume that the price vector p is constrained to lie in the set 𝒫 ⊆ ℝ_+^I, where ℝ_+ is the set of nonnegative real numbers.
We let d_i denote the demand function of product i, so that d_i(p) denotes the demand of product i when the price vector p is chosen. The revenue function R(·) can then be written as R(p) = ∑_{i=1}^I p_i · d_i(p).
The nominal price optimization (NPO) problem can be written simply as
NPO: max_{p ∈ 𝒫} R(p).
There are numerous demand models that can be used in practice, which lead to different price optimization problems; we briefly review some of the more popular ones here.
* Linear demand model: A linear demand model is defined by parameters α ∈ ℝ^I, β ∈ ℝ^I, Γ = (γ_i,j)_{i, j ∈ [I], i ≠ j} ∈ ℝ^{I·(I-1)}. The demand function d_i(·) of each product i ∈ [I] has the form
d_i(p) = α_i - β_i p_i + ∑_{j ≠ i} γ_i,j p_j,
where β_i ≥ 0 is the own-price elasticity parameter of product i, which indicates how much demand for product i is affected by the price of product i, whereas γ_i,j is a cross-price elasticity parameter that describes how much demand for product i is affected by the price of a different product j. Note that γ_i,j can be positive, which generally corresponds to products i and j being substitutes (i.e., when the price of product j increases, customers tend to switch to product i), or negative, which corresponds to products i and j being complements (i.e., products i and j tend to be purchased together, so when the price of product j increases, this causes a decrease in demand for product i).
The corresponding revenue function R(·) is then
R(p) = ∑_{i=1}^I p_i · (α_i - β_i p_i + ∑_{j ≠ i} γ_i,j p_j).
* Semi-log demand model: A semi-log demand model is defined by parameters α ∈ ℝ^I, β ∈ ℝ^I, Γ = (γ_i,j)_{i, j ∈ [I], i ≠ j} ∈ ℝ^{I·(I-1)}. In a semi-log demand model, the logarithm of the demand function d_i(·) of each product i ∈ [I] has a linear form in the prices p_1,…, p_I:
log d_i(p) = α_i - β_i p_i + ∑_{j ≠ i} γ_i,j p_j.
This implies that the demand function is
d_i(p) = e^(α_i - β_i p_i + ∑_{j ≠ i} γ_i,j p_j).
The corresponding revenue function R(·) is then
R(p) = ∑_{i=1}^I p_i · e^(α_i - β_i p_i + ∑_{j ≠ i} γ_i,j p_j).
* Log-log demand model: A log-log demand model is defined by parameters α ∈ ℝ^I, β ∈ ℝ^I, Γ = (γ_i,j)_{i, j ∈ [I], i ≠ j} ∈ ℝ^{I·(I-1)}. In a log-log demand model, the logarithm of the demand function d_i(·) of each product i ∈ [I] has a linear form in the log-transformed prices log p_1, …, log p_I:
log d_i(p) = α_i - β_i log p_i + ∑_{j ≠ i} γ_i,j log p_j.
This implies that the demand function for product i is
d_i(p) = e^(α_i - β_i log p_i + ∑_{j ≠ i} γ_i,j log p_j)
= e^(α_i) p_i^(-β_i) · ∏_{j ≠ i} p_j^(γ_i,j),
and that the revenue function is therefore
R(p) = ∑_{i=1}^I p_i · e^(α_i) p_i^(-β_i) · ∏_{j ≠ i} p_j^(γ_i,j)
= ∑_{i=1}^I e^(α_i) · p_i^(1 - β_i) · ∏_{j ≠ i} p_j^(γ_i,j).
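For concreteness, the following sketch evaluates R(p) under each of the three demand families; the parameter values are illustrative placeholders, not estimates from data.

import numpy as np

def revenue_linear(p, alpha, beta, Gamma):
    """R(p) under linear demand; Gamma has zero diagonal and entries gamma_ij off-diagonal."""
    d = alpha - beta * p + Gamma @ p
    return float(p @ d)

def revenue_semilog(p, alpha, beta, Gamma):
    """R(p) under semi-log demand: d_i(p) = exp(alpha_i - beta_i p_i + sum_{j != i} gamma_ij p_j)."""
    d = np.exp(alpha - beta * p + Gamma @ p)
    return float(p @ d)

def revenue_loglog(p, alpha, beta, Gamma):
    """R(p) under log-log demand: log d_i is linear in the log prices."""
    logp = np.log(p)
    d = np.exp(alpha - beta * logp + Gamma @ logp)
    return float(p @ d)

# Illustrative two-product example (values are not taken from the paper):
alpha = np.array([1.0, 1.2])
beta = np.array([0.8, 0.9])
Gamma = np.array([[0.0, 0.1], [0.2, 0.0]])   # zero diagonal by construction
p = np.array([2.0, 3.0])
print(revenue_linear(p, alpha, beta, Gamma),
      revenue_semilog(p, alpha, beta, Gamma),
      revenue_loglog(p, alpha, beta, Gamma))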
ยง.ยง Deterministic robust price optimization problem
We now define the deterministic robust price optimization (DRPO) problem. To define this problem abstractly, we let ℛ denote an uncertainty set of possible revenue functions. The DRPO problem is to then maximize the worst-case revenue, where the worst-case is the minimum revenue of a given price vector taken over all revenue functions in ℛ. Mathematically, this problem can be written as
DRPO: max_{p ∈ 𝒫} min_{R ∈ ℛ} R(p).
We use Z^*_D to denote the optimal objective value of the DRPO problem.
Although ℛ can be defined in many different ways, we will now focus on one general case that we will assume for most of our subsequent results in Sections <ref>, <ref> and <ref>. Suppose that we fix the demand model to a specific parametric family, such as a log-log demand model. Let θ denote the vector of demand model parameters. For example, for log-log, θ would be the tuple θ = (α, β, Γ). Let 𝒰 denote a set of possible values of θ; 𝒰 is then an uncertainty set of model parameters. With a slight abuse of notation, let d_i(·, θ) denote the demand for product i when the demand model parameters are specified by θ. Then ℛ can be defined as:
ℛ = { R(·) ≡ ∑_{i=1}^I p_i · d_i(·, θ) | θ ∈ 𝒰 },
i.e., it is all of the possible revenue functions spanned by the uncertain parameter vector θ in 𝒰.
To express the DRPO problem in this setting more conveniently, we will abuse our notation slightly and use R(p, θ) to denote the revenue function evaluated at a price vector p with a particular parameter vector θ specified. With this abuse of notation, the DRPO problem can be written as
DRPO: max_{p ∈ 𝒫} min_{θ ∈ 𝒰} R(p, θ).
ยง.ยง Randomized robust price optimization problem
In the DRPO problem, we assume that the decision maker will deterministically implement a single price vector in the face of uncertainty in the revenue function. In the RRPO problem, we instead assume that the decision maker will randomly select a price vector according to some distribution F over the feasible price set 𝒫. Under this assumption, we can write the RRPO problem as
RRPO: max_{F ∈ 𝒟} min_{R ∈ ℛ} ∫_𝒫 R(p) dF(p),
where 𝒟 is the set of all distributions supported on 𝒫. We use Z^*_R to denote the optimal objective value of the RRPO problem. Note that Z^*_R ≥ Z^*_D. This is because for every p' ∈ 𝒫, the distribution F(·) = δ_{p'}(·), where δ_{p'}(·) is the Dirac delta function at p', is contained in 𝒟. For this distribution, min_{R ∈ ℛ} ∫ R(p) dF(p) = min_{R ∈ ℛ} R(p'), which is exactly the worst-case revenue of deterministically selecting p'.
A special instance of this problem arises when 𝒫 is a discrete, finite set. In this case, F is a discrete probability distribution, and one can re-write the problem as an optimization problem over a discrete probability distribution π = (π_p)_{p ∈ 𝒫} over 𝒫:
RRPO-D: max_{π ∈ Δ_𝒫} min_{R ∈ ℛ} ∑_{p ∈ 𝒫} R(p) π_p,
where we use Δ_S to denote the (|S|-1)-dimensional unit simplex, i.e., Δ_S = { π ∈ ℝ^S | ∑_{i ∈ S} π_i = 1, π_i ≥ 0 ∀ i ∈ S }.
Lastly, under the assumption that ℛ is the set of revenue functions of a fixed demand model family whose parameter vector θ belongs to a parameter uncertainty set 𝒰, we can restate the RRPO problem when 𝒫 is a generic set and when 𝒫 is finite as
RRPO: max_{F ∈ 𝒟} min_{θ ∈ 𝒰} ∫_𝒫 R(p, θ) dF(p),
RRPO-D: max_{π ∈ Δ_𝒫} min_{θ ∈ 𝒰} ∑_{p ∈ 𝒫} R(p, θ) π_p.
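When both 𝒫 and the uncertainty set are finite (and small enough to enumerate), RRPO-D reduces to a linear program in (π, η). The sketch below illustrates this directly with an off-the-shelf LP solver; it is a minimal illustration only, not the solution methods developed later in the paper.

import numpy as np
from scipy.optimize import linprog

def solve_rrpo_discrete(revenue_matrix):
    """Solve RRPO-D when both the price set and the uncertainty set are finite.

    revenue_matrix[k, j] = R(p_j, theta_k): revenue of price vector j under
    uncertainty realization k. Variables are (pi_1, ..., pi_J, eta); we
    maximize eta subject to eta <= sum_j pi_j R(p_j, theta_k) for every k.
    """
    K, J = revenue_matrix.shape
    c = np.zeros(J + 1)
    c[-1] = -1.0                                           # linprog minimizes, so minimize -eta
    A_ub = np.hstack([-revenue_matrix, np.ones((K, 1))])   # eta - sum_j pi_j R_kj <= 0
    b_ub = np.zeros(K)
    A_eq = np.zeros((1, J + 1)); A_eq[0, :J] = 1.0         # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, None)] * J + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:J], res.x[-1]                            # optimal distribution, worst-case revenue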
ยง BENEFITS OF RANDOMIZATION
In this section, we analyze when randomization can be beneficial. To aid us, we introduce some additional nomenclature in this section, which follows the terminology established in the prior literature on randomized robust optimization <cit.>. We say that a robust price optimization (RPO) problem is randomization-receptive if Z^*_R > Z^*_D, that is, randomizing over price vectors leads to a higher worst-case revenue than deterministically selecting a single price vector. Otherwise, we say that a RPO problem is randomization-proof if Z^*_R = Z^*_D, that is, there is no benefit from randomizing over price vectors.
In the following three sections, we derive three classes of results that establish when the RPO problem is randomization-proof. The first condition (Section <ref>) for randomization-proofness applies in the case when 𝒫 is a convex set and ℛ is an arbitrary set of revenue functions, each of which is concave in p. The second and third conditions apply to the case where ℛ arises out of a single demand model, where the parameter vector belongs to an uncertainty set. The second condition (Section <ref>) applies in the case when 𝒫 and 𝒰 are compact, convex sets and R(p, θ) obeys certain quasiconvexity and quasiconcavity properties with respect to the price vector and the uncertain parameter. The third condition (Section <ref>) is for the case when 𝒫 is finite and involves a certain minimax condition being met; as corollaries, we show that randomization-proofness occurs if the DRPO problem satisfies a strong duality property, and that randomization-receptiveness is essentially equivalent to the DRPO solution being different from the nominal price optimization problem solution at the worst-case θ. Along the way, we also give a number of examples where our results can be used to establish that a particular family of RPO problems is randomization-proof, and also highlight how the results fail to hold when certain hypotheses are relaxed.
The main takeaway from these results is that the set of RPO problems that are randomization-proof is small. As we will see, the conditions under which a RPO problem will be randomization-proof are delicate and quite restrictive, and are satisfied only in certain very special cases; in most other realistic cases, the RPO problem will be randomization-receptive. Consequently, in Sections <ref> and <ref>, we develop algorithms for solving the randomized robust price optimization problem when the candidate price vector set is finite, and in Section <ref> we show a wide range of both synthetic and real data instances in which the RPO problem is randomization-receptive.
ยง.ยง Concave revenue function uncertainty sets
Our first major result is for the case where ℛ consists of concave revenue functions.
Suppose that 𝒫 is a convex set and that ℛ is such that every R ∈ ℛ is a concave function of p. Then the RPO problem is randomization-proof, that is, Z^*_R = Z^*_D.
The proof of this result (see Section <ref> of the e-companion) follows from a simple application of Jensen's inequality. We pause to make a few important comments about this result. First, one aspect of this result that is special is that ℛ can be a very general set: it could be countable or uncountable, and it could consist of revenue functions corresponding to different families of demand models. This will not be the case for our later results in Sections <ref> and <ref>, which require that ℛ is defined based on a single demand model family, and that the uncertainty set of parameter vectors for that family be a convex compact set.
Second, we remark that the condition that all functions in ℛ be concave cannot be relaxed in general. We illustrate this in the following example, where ℛ consists of two functions and one of the two is non-concave.
Consider a single-product RPO problem, and suppose that the revenue function uncertainty set ℛ = {R_1, R_2}, where R_1(·) and R_2(·) are defined as
R_1(p) = p (10 - 2p),
R_2(p) = p · 10p^(-2).
Note that R_1(p) is the revenue function corresponding to the linear demand function d_1(p) = 10 - 2p, while R_2(p) is the revenue function corresponding to the log-log demand function d_2(p) = exp( log(10) - 2 log(p)) = 10p^(-2). Note also that R_1(·) is concave, while R_2(·) is convex. Suppose additionally that 𝒫 = [1,4].
We first calculate the optimal value of the DRPO problem. Observe that in the interval [1,4], the only root of the equation 10 - 2p = 10p^(-2) is p' = 1.137805…. For p < p', d_2(p) > d_1(p), and for p > p', d_1(p) > d_2(p). Therefore, the optimal value of the DRPO problem can be calculated as
max_{p ∈ [1,4]} min_{R ∈ ℛ} R(p)
= max{ max_{p ∈ [1, p']} min_{R ∈ ℛ} R(p), max_{p ∈ [p', 4]} min_{R ∈ ℛ} R(p) }
= max{ max_{p ∈ [1, p']} p · (10 - 2p), max_{p ∈ [p', 4]} p · 10 p^(-2) }
= 10 p' - 2 p'^2
= 8.78885.
Now, let us lower bound the optimal value of the RRPO problem. Consider a distribution F that randomizes over prices in the following way:
p = 1 with probability 17/21, and p = 2.5 with probability 4/21.
The worst-case revenue for this distribution is
min_{R ∈ ℛ} ∫_1^4 R(p) dF(p)
= min{ 17/21 · R_1(1) + 4/21 · R_1(2.5), 17/21 · R_2(1) + 4/21 · R_2(2.5) }
= min{ 17/21 · 8 + 4/21 · 12.5, 17/21 · 10 + 4/21 · 4 }
= min{ 62/7, 62/7 }
= 62/7
= 8.857143.
This implies that Z^*_RRPO ≥ 8.857143, whereas Z^*_DRPO = 8.78885, and thus Z^*_RRPO > Z^*_DRPO.
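To make this example easy to check, the following small Julia snippet (our own illustration, not part of the paper's code) evaluates the worst-case revenue of every fixed price on a fine grid and the worst-case expected revenue of the two-point distribution above.

R1(p) = p * (10 - 2p)        # concave revenue function (linear demand)
R2(p) = p * 10 * p^(-2)      # convex revenue function (log-log demand); equals 10/p
worst(p) = min(R1(p), R2(p))

grid = 1.0:1e-4:4.0
Z_det = maximum(worst, grid)                        # ≈ 8.7888, the deterministic robust value
Z_rand = min(17/21 * R1(1) + 4/21 * R1(2.5),
             17/21 * R2(1) + 4/21 * R2(2.5))        # = 62/7 ≈ 8.8571
println((Z_det, Z_rand))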
Third, we note that the requirement that the price set be a convex set also cannot be relaxed in general. The following example illustrates how Theorem <ref> can fail to hold when the price set is not convex.
Consider again a single-product RPO problem. Suppose that the revenue function uncertainty set is {R_1, R_2}, where R_1(p) = p (10 - p), R_2(p) = p (4 - 0.2p); R_1 and R_2 correspond to the linear demand functions d_1(p) = 10 - p, d_2(p) = 4 - 0.2p. Suppose that the price set is {p_1, p_2}, where p_1 = 5, p_2 = 10. From this data, observe that:
R_1(p_1) = 5 (10 - 5) = 25,
R_1(p_2) = 10 (10 - 10) = 0,
R_2(p_1) =5 (4 - 0.2 (5) ) = 15,
R_2(p_2) = 10 (4 - 0.2 (10)) = 20.
We first calculate the optimal value of the DRPO problem:
Z^*_DRPO = max_{p ∈ {p_1, p_2}} min{ R_1(p), R_2(p) }
= max{ min{ 25, 15 }, min{ 0, 20 } }
= max{ 15, 0 }
= 15.
For the RRPO problem, the optimal value is given by the following LP:
maximize_{σ, η} η
subject to η ≤ σ_{p_1} · p_1 (10 - p_1) + σ_{p_2} · p_2 (10 - p_2),
η ≤ σ_{p_1} · p_1 (4 - 0.2 p_1) + σ_{p_2} · p_2 (4 - 0.2 p_2),
σ_{p_1} + σ_{p_2} = 1,
σ_{p_1}, σ_{p_2} ≥ 0,
where σ_{p_1} and σ_{p_2} denote the probabilities placed on the two prices. The optimal distribution over {p_1, p_2} is given by σ_{p_1} = 2/3, σ_{p_2} = 1/3, which leads to Z^*_RRPO = 50/3 = 16.6667. Since this is higher than Z^*_DRPO, we conclude that this particular instance is randomization-receptive.
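For completeness, the LP above can be checked with a few lines of JuMP; this is a sketch that assumes the HiGHS solver is available (any LP solver would do).

using JuMP, HiGHS

prices = [5.0, 10.0]
rev1 = [p * (10 - p) for p in prices]      # revenues under d_1: [25, 0]
rev2 = [p * (4 - 0.2p) for p in prices]    # revenues under d_2: [15, 20]

m = Model(HiGHS.Optimizer)
@variable(m, σ[1:2] >= 0)
@variable(m, η)
@constraint(m, sum(σ) == 1)
@constraint(m, η <= sum(σ[k] * rev1[k] for k in 1:2))
@constraint(m, η <= sum(σ[k] * rev2[k] for k in 1:2))
@objective(m, Max, η)
optimize!(m)
println((value.(σ), objective_value(m)))   # ≈ ([2/3, 1/3], 16.67)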
Lastly, Theoremย <ref> has a number of implications for different classes of demand models.
(Single-product pricing under linear demand). Suppose that I = 1, which corresponds to a single-product pricing problem. Let = (ฮฑ,ฮฒ) โ^2 denote the vector of linear demand model parameters, and let โ^2 be an uncertainty set of possible values of (ฮฑ,ฮฒ). Let = { R(ยท, ) |โ} be the set of revenue functions that arise from the uncertainty set . Note that each revenue function is of the form R(p) = ฮฑ p - ฮฒ p^2. Therefore, the condition that each R โ is concave implies that Rโ(p) = - 2 ฮฒโค 0. Thus, if is such that inf{ฮฒ| (ฮฑ, ฮฒ) โ}โฅ 0, then the robust price optimization problem is randomization-proof.
(Multi-product pricing under linear demand). In the more general multi-product pricing problem, let = (, , ) โ^Iร^Iร^I(I-1) denote the vector of linear demand model parameters, and let be an arbitrary uncertainty set of these model parameter vectors. Let = { R(ยท, ) |โ} be the set of revenue functions that arise from the uncertainty set . Observe that each revenue function R(ยท, ) is of the form
R() = โ_i=1^I p_i (ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j )
= ^T - ^T _,,
where _, is the matrix
[ -β_1, γ_{1,2}, γ_{1,3}, ⋯, γ_{1,I-1}, γ_{1,I} ; γ_{2,1}, -β_2, γ_{2,3}, ⋯, γ_{2,I-1}, γ_{2,I} ; ⋮, ⋮, ⋱, ⋮, ⋮ ; γ_{I,1}, γ_{I,2}, γ_{I,3}, ⋯, γ_{I,I-1}, -β_I ].
This implies that
โ^2 R() = - 2 _,.
The function R is therefore concave if the matrix _, is positive semidefinite. Therefore, if is such that inf{ฮป_min( _,) | (, , ) โ}โฅ 0, where ฮป_min() denotes the minimum eigenvalue of a symmetric matrix , then the robust price optimization problem is randomization proof.
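The condition in this example can also be checked numerically. The sketch below (our own illustration, with made-up parameter values) builds the Hessian of the linear-demand revenue function directly from (β, γ) and verifies negative semidefiniteness, which matches the minimum-eigenvalue condition above up to the sign convention used in defining the matrix.

using LinearAlgebra

# Hessian of R(p) = Σ_i p_i (α_i - β_i p_i + Σ_{j≠i} γ_{ij} p_j):
# H[i,i] = -2 β_i and H[i,j] = γ_{ij} + γ_{ji} for i ≠ j. R is concave iff H is negative semidefinite.
function revenue_hessian(β, γ)
    n = length(β)
    return Symmetric([i == j ? -2.0 * β[i] : γ[i, j] + γ[j, i] for i in 1:n, j in 1:n])
end

is_concave(β, γ) = eigmax(revenue_hessian(β, γ)) <= 1e-9

# Check concavity for parameters sampled from a hypothetical box uncertainty set.
γ0 = 0.05 .* randn(3, 3)
println(all(is_concave(10.0 .* (0.9 .+ 0.2 .* rand(3)), γ0) for _ in 1:1000))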
(Single-product pricing under semi-log demand). For the single-product pricing problem under semi-log demand, d(p) = exp(α - β p) is the demand function given the parameter vector (α, β). Let the uncertainty set be a set of possible values of (α, β), and assume that β is nonnegative over this set, that is, inf{β | (α,β) in the uncertainty set} ≥ 0. Let the revenue function uncertainty set consist of the functions R(·, (α,β)) over all (α,β) in the uncertainty set. For a given such R, its second derivative is R''(p) = β (β p - 2) e^{α - β p}.
Thus, for R''(p) to be nonpositive, we need β (β p - 2) ≤ 0, or equivalently β p ≤ 2 (since β is assumed to be nonnegative), for all p in the price set in order for R to be concave on the price set. Thus, if sup_{p} sup_{(α,β)} {β p} ≤ 2 and inf{β | (α,β) in the uncertainty set} ≥ 0, then the RPO problem is randomization-proof.
(Single-product pricing under log-log demand). For the single-product pricing problem under log-log demand, d(p) = exp(α - β log p) = e^α · p^{-β} is the demand function and (α, β) is the vector of uncertain demand model parameters. Let the uncertainty set be a set of possible values of (α, β), and assume that β is nonnegative over this set, that is, inf{β | (α,β) in the uncertainty set} ≥ 0. Let the revenue function uncertainty set consist of the functions R(·, (α,β)) over all (α,β) in the uncertainty set. For a given such R, its second derivative is R''(p) = e^α · β (β - 1) · p^{-β - 1}.
Thus, for R''(p) to be nonpositive, we need β - 1 ≤ 0 (given β ≥ 0), or equivalently β ≤ 1. Thus, if sup_{(α, β)} β ≤ 1 and inf_{(α,β)} β ≥ 0, then the RPO problem is randomization-proof.
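As a quick numerical sanity check on the last two examples (illustrative parameter values only, not from the paper), one can evaluate the sign of the second derivatives derived above over a grid of prices:

# Second derivatives of the single-product semi-log and log-log revenue functions.
Rpp_semilog(p, α, β) = β * (β * p - 2) * exp(α - β * p)
Rpp_loglog(p, α, β)  = exp(α) * β * (β - 1) * p^(-β - 1)

ps = 0.5:0.5:5.0
println(all(Rpp_semilog(p, 1.0, 0.3) <= 0 for p in ps))  # true: β p ≤ 1.5 ≤ 2 on this range
println(all(Rpp_loglog(p, 1.0, 0.8) <= 0 for p in ps))   # true: 0 ≤ β ≤ 1
println(all(Rpp_loglog(p, 1.0, 2.0) <= 0 for p in ps))   # false: β > 1, so R is convex here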
ยง.ยง Quasiconcavity in and quasiconvexity in
The second result we establish concerns the RRPO problem when there is a demand parameter uncertainty set . In this case, the RRPO and DRPO problems are
RRPO: max_F โmin_โโซ_ R(, ) dF(),
DRPO: max_โmin_โ R(, ).
We make the following assumption about R.
R is a continuous function of (,).
Under these assumptions, we obtain the following result.
Suppose that Assumption <ref> holds. Suppose that the price set (a subset of ℝ^I) and the uncertainty set (a subset of ℝ^d) are compact convex sets. Suppose that the expected revenue ∫ R(p, ·) dF(p), viewed as a function of the parameter vector, is quasiconvex on the uncertainty set for any distribution F over the price set. Suppose that R(·, ·) is quasiconcave in the price vector for any fixed parameter vector and quasiconvex in the parameter vector for any fixed price vector. Then, the robust price optimization problem is randomization-proof, that is, Z^*_RRPO = Z^*_DRPO.
The proof of Theoremย <ref> (see Sectionย <ref> of the ecompanion) follows from applying Sion's minimax theorem twice. This result allows us to show that a larger number of RPO problems are randomization-proof. We provide a few examples below.
Consider a single-product price optimization problem where the demand follows a semi-log model. The uncertain parameter is therefore = (ฮฑ, ฮฒ).
Observe that R(p,) = p e^ฮฑ - ฮฒ p is convex in . Thus, it is also quasiconvex in for a fixed p. Additionally, for any distribution F over , we have that the function โซ_ R(p, ) dF(p) is convex in (it is a nonnegative weighted combination of the functions = (ฮฑ,ฮฒ) โฆ p e^ฮฑ - ฮฒ p, each of which is convex), and is thus also quasiconvex in .
Note also that the function R is quasi-concave in p. To see this, observe that log R(p, ) = log p + ฮฑ - ฮฒ p,
which is concave in p; this means that R is log-concave in p. Since any log-concave function is quasiconcave, it follows that R is quasiconcave in p.
Thus, if โ and โ^2 are compact and convex, then Theoremย <ref> asserts that the RPO problem is randomization-proof.
Consider a single-product price optimization problem where the demand follows a log-log model. The uncertain parameter is = (ฮฑ, ฮฒ), and R(p, ) = p e^ฮฑ - ฮฒlog p. Assume that โ is a compact convex set, and that min{ p | p โ} > 0.
Observe that R(p, ) = p ยท e^ฮฑ - ฮฒlog p is convex in , and therefore quasiconvex in for a fixed p. Additionally, for any distribution F over , we have that the function โซ_ R(p, ) dF(p) is convex in and therefore also quasiconvex in for a fixed F.
Lastly, with regard to quasiconcavity in p, observe that log R(p, ) = log p + ฮฑ - ฮฒlog p = (1 - ฮฒ) log p + ฮฑ,
which means that R is log-concave in p whenever 1 - ฮฒ > 0 or equivalently ฮฒ < 1. Therefore, R will also be quasiconcave whenever ฮฒ < 1.
Thus, if โ and โ^2 are compact and convex, and max{ฮฒ| (ฮฑ, ฮฒ) โ} < 1, then Theoremย <ref> guarantees that the RPO problem is randomization-proof.
With regard to the above two examples, we note that in general, the revenue function for a semi-log or a log-log demand model is not concave in p. Thus, Theoremย <ref> cannot be used in these cases, and we must use Theoremย <ref>. Note, however, that the two examples above critically rely on the revenue function being log-concave and therefore quasiconcave, which is only the case for single product price optimization problems. Log-concavity and quasiconcavity are in general not preserved under addition (i.e., the sum of quasiconcave functions is not always quasiconcave, and the sum of log-concave functions is not always log-concave), and so Theoremย <ref> will in general not be applicable for multiproduct pricing problems involving the semi-log or log-log demand model.
ยง.ยง Finite price set
In this section, we analyze randomization-receptiveness when is a finite set. To study this setting, let us define the set as the set of all probability distributions supported on . We note that these results are adaptations of several results from <cit.> to the pricing setting that we study, which develop analogous conditions for randomization-proofness for the robust assortment optimization problem.
Our first result establishes that randomization-proofness is equivalent to the existence of a distribution Q over under which any price vector's expected performance is no better than the deterministic robust optimal value.
Suppose that R(p, u) is continuous in u for any fixed p ∈ 𝒫.
A robust price optimization problem with finite price set 𝒫 is randomization-proof if and only if there exists a distribution Q supported on the uncertainty set 𝒰 such that for all p ∈ 𝒫,
∫_𝒰 R(p, u) dQ(u) ≤ Z^*_DRPO.
To prove this result, we use Sion's minimax theorem to establish that
Z^*_RRPO = inf_Q max_{p ∈ 𝒫} ∫_𝒰 R(p, u) dQ(u),
where the infimum is over distributions Q supported on 𝒰; with that result in hand, the condition in Theorem <ref> is equivalent to establishing that
Z^*_DRPO ≥ inf_Q max_{p ∈ 𝒫} ∫_𝒰 R(p, u) dQ(u) = Z^*_RRPO,
which, together with the inequality Z^*_โค Z^*_ immediately yields randomization-proofness. We note that this result is analogous to Theorem 1 in <cit.>, which provides a similar necessary and sufficient condition for randomization-proofness in the context of robust assortment optimization. Our proof, which relies on Sion's minimax theorem, is perhaps slightly more direct than the proof of Theorem 1 in <cit.>, although this is a matter of taste.
Our next two results are consequences of this theorem. The first essentially states that a price optimization problem proof will be randomization-proof if the robust price optimization problem obeys strong duality. The second states that, under some conditions, a robust price optimization problem is randomization-receptive if and only if the deterministic robust price vector ^*_ is not an optimal solution of the nominal price optimization problem under the worst-case ^* that attains the worst-case objective under ^*_. We note that these results are both analogous to Corollaries 1 and 2 in <cit.>.
Suppose that R(p, u) is a continuous function of u for every p ∈ 𝒫.
A robust price optimization problem with finite price set 𝒫 is randomization-proof if and only if it satisfies strong duality:
max_{p ∈ 𝒫} min_{u ∈ 𝒰} R(p, u) = min_{u ∈ 𝒰} max_{p ∈ 𝒫} R(p, u).
Suppose that 𝒰 is a compact subset of ℝ^d, and that R(p, u) is a continuous function of u for every p ∈ 𝒫. Suppose that p^*_DRPO ∈ arg max_{p ∈ 𝒫} min_{u ∈ 𝒰} R(p, u) is an optimal solution of the deterministic robust price optimization problem, and suppose that min_{u ∈ 𝒰} R(p^*_DRPO, u) has a unique solution u^*. Then the robust price optimization problem is randomization-receptive if and only if p^*_DRPO ∉ arg max_{p ∈ 𝒫} R(p, u^*).
With regard to Corollaryย <ref>, we note that the uniqueness requirement for ^* cannot in general be relaxed. In Sectionย <ref> of the ecompanion, we show an instance where min_โ R(^*_, ) has multiple optimal solutions, ^*_โmax_โ R(, ') for every ' that solves min_โ R(^*_, ), and yet the problem is randomization-proof, i.e., Z^*_ = Z^*_.
We remark that the necessary and sufficient conditions for randomization-proofness in Corollariesย <ref> and <ref> are rather stringent and demanding. With regard to Corollaryย <ref>, strong duality is in general unlikely to hold given that is a finite set. With regard to Corollaryย <ref>, we note that in general, the solution of the deterministic robust price optimization problem is unlikely to also be an optimal solution of an appropriately defined nominal price optimization problem; this is frequently not the case in many applications of robust optimization outside of pricing. Given this, these conditions are suggestive of the fact that most robust price optimization problems will be randomization-receptive. This motivates our study of solution algorithms for numerically solving the RRPO problem in the next two sections.
ยง SOLUTION ALGORITHM FOR FINITE PRICE SET , CONVEX UNCERTAINTY SET
In this section, we describe a general solution algorithm for solving the RRPO problem when the price set is a finite set, and the uncertainty set is a general convex uncertainty set. Sectionย <ref> describes the general solution algorithm, which is a constraint generation algorithm that involves solving a nominal pricing problem over as a subroutine. Sectionsย <ref>, <ref> and <ref> describe how the solution algorithm specializes to the cases of the linear, semi-log and log-log demand models, respectively, and in particular, how the nominal pricing problem can be solved for each of these three cases; the formulations we present for the semi-log and log-log models here may be of independent interest as they are, to the best of our knowledge, the first exact mixed-integer convex formulations for the multi-product pricing problem under a finite price set for these demand models.
ยง.ยง General solution approach
The first general solution scheme that we consider is when the price set 𝒫 is a discrete (finite) set and the uncertainty set 𝒰 is a convex set. In this case, if the revenue function R(p, u) is quasiconvex and continuous in u ∈ 𝒰, then the RRPO problem can be reformulated as follows:
max_{σ ∈ Δ_𝒫} min_{u ∈ 𝒰} ∑_{p ∈ 𝒫} σ_p R(p, u)
= min_{u ∈ 𝒰} max_{σ ∈ Δ_𝒫} ∑_{p ∈ 𝒫} σ_p R(p, u)
= min_{u ∈ 𝒰} max_{p ∈ 𝒫} R(p, u),
where σ denotes a probability distribution over 𝒫 and Δ_𝒫 the set of all such distributions; the first equality follows by Sion's minimax theorem, and the second equality follows by the fact that the inner maximum is attained by setting σ_p = 1 for some p and setting σ_{p'} = 0 for all p' ≠ p. This last problem can be written in epigraph form as
minimize_{u, t} t
subject to t ≥ R(p, u), for all p ∈ 𝒫,
u ∈ 𝒰.
Problem (<ref>) can be solved using constraint generation. In such a scheme, we start with constraint (<ref>) enforced only at a subset of the price vectors in 𝒫, and solve problem (<ref>) to obtain a solution (u, t). At this solution, we solve the problem max_{p ∈ 𝒫} R(p, u), and compare this objective value to the current value of t. If it is less than or equal to t, we conclude that (u, t) satisfies constraint (<ref>) for every p ∈ 𝒫 and terminate with (u, t) as the optimal solution. Otherwise, if it is greater than t, we have identified a price vector for which constraint (<ref>) is violated, and we add the new constraint to the restricted problem. We then re-solve the problem to obtain a new solution (u, t) and repeat the process until we can no longer identify any violated constraints. To recover the optimal randomization scheme from the solution of this problem (i.e., the distribution over price vectors), we simply consider the optimal dual variable of each constraint t ≥ R(p, u).
The viability of this solution approach critically depends on our ability to solve the separation problem max_โ R(, ) efficiently, and to solve the problemย (<ref>) efficiently for a fixed subset โ. In what follows, we shall demonstrate that this problem can actually be solved practically for the linear, semi-log and log-log problems.
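The loop below is a schematic sketch of this constraint generation scheme in Julia (our own pseudocode-style illustration). The two oracles solve_restricted and solve_separation are placeholders: the first is assumed to solve problem (<ref>) with the price set replaced by the current subset and to return the incumbent (u, t), and the second is assumed to solve the nominal pricing problem max_{p ∈ 𝒫} R(p, u) and to return a maximizer together with its value.

function constraint_generation(P0; tol = 1e-6, max_iters = 1_000)
    Phat = copy(P0)                        # initial subset of price vectors
    u_hat, t_hat = solve_restricted(Phat)  # placeholder oracle
    for _ in 1:max_iters
        p_new, val = solve_separation(u_hat)   # max_{p ∈ 𝒫} R(p, u_hat)
        val <= t_hat + tol && break            # no violated constraint: (u_hat, t_hat) is optimal
        push!(Phat, p_new)                     # add the violated constraint t ≥ R(p_new, u)
        u_hat, t_hat = solve_restricted(Phat)
    end
    # the duals of the constraints t ≥ R(p, u), p ∈ Phat, give the optimal distribution
    return Phat, u_hat, t_hat
end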
To develop our approaches for the linear, semi-log and log-log models, we will make the following assumption about the price set , which simply states that is a Cartesian product of finite sets of prices for each of the products.
𝒫 = 𝒫_1 × ⋯ × 𝒫_I, where 𝒫_i is a finite subset of ℝ_+ for each i.
ยง.ยง Linear demand model
We begin by showing how our solution approach for convex applies to the linear demand model case. Recall that the linear model revenue function is
R(p, u) = ∑_{i=1}^I p_i (α_i - β_i p_i + ∑_{j ≠ i} γ_{i,j} p_j).
For a fixed , the function R(, ) is linear and therefore convex and quasiconvex in = (, , ). Thus, given a subset โ, the problemย (<ref>) should be easy to solve, assuming that is also a sufficiently tractable convex set. For example, if is a polyhedron, then since each constraintย (<ref>) is linear in , problemย (<ref>) would be a linear program.
The separation problem for the linear demand model case is
max_{p ∈ 𝒫} ∑_{i=1}^I p_i (α_i - β_i p_i + ∑_{j ≠ i} γ_{i,j} p_j)
= max_{p ∈ 𝒫} ∑_{i=1}^I α_i p_i - ∑_{i=1}^I β_i p_i^2 + ∑_{i=1}^I ∑_{j ≠ i} γ_{i,j} p_i p_j.
Since = _1 รโฆร_I, we can formulate this as a mixed-integer program. Let x_i,t be a binary variable that is 1 if product i has price t โ_i, and 0 otherwise. Similarly, let y_i,j,t_1,t_2 be a binary decision variable that is 1 if product i is given price t_1 and product j is given price t_2 for i โ j, and 0 otherwise. Then the separation problem can be straightforwardly written as
maximize_{x, y} ∑_{i=1}^I ∑_{t ∈ 𝒫_i} α_i · t · x_{i,t} - ∑_{i=1}^I ∑_{t ∈ 𝒫_i} β_i · t^2 · x_{i,t} + ∑_{i=1}^I ∑_{j ≠ i} ∑_{t_1 ∈ 𝒫_i} ∑_{t_2 ∈ 𝒫_j} γ_{i,j} · t_1 · t_2 · y_{i,j,t_1,t_2}
subject to ∑_{t ∈ 𝒫_i} x_{i,t} = 1, for all i ∈ [I],
∑_{t_2 ∈ 𝒫_j} y_{i,j,t_1,t_2} = x_{i,t_1}, for all i, j ∈ [I], j ≠ i, t_1 ∈ 𝒫_i,
∑_{t_1 ∈ 𝒫_i} y_{i,j,t_1,t_2} = x_{j,t_2}, for all i, j ∈ [I], j ≠ i, t_2 ∈ 𝒫_j,
x_{i,t} ∈ {0,1}, for all i ∈ [I], t ∈ 𝒫_i,
y_{i,j,t_1,t_2} ∈ {0,1}, for all i, j ∈ [I], i ≠ j, t_1 ∈ 𝒫_i, t_2 ∈ 𝒫_j,
where the first constraint simply enforces that exactly one price is chosen for each product, while the second and third constraints require that the y_i,j,t_1,t_2 variables are essentially equal to x_i,t_1ยท x_j, t_2.
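As an illustration, this separation problem can be written in JuMP in a handful of lines; the sketch below assumes a mixed-integer solver such as HiGHS is available, and that P[i] holds the finite price list of product i while α, β, γ hold the fixed parameter values at which the separation problem is solved.

using JuMP, HiGHS

function linear_separation(P, α, β, γ)
    I_ = length(P)
    m = Model(HiGHS.Optimizer)
    @variable(m, x[i = 1:I_, t = 1:length(P[i])], Bin)
    @variable(m, y[i = 1:I_, j = 1:I_, t1 = 1:length(P[i]), t2 = 1:length(P[j]); i != j], Bin)
    @constraint(m, [i = 1:I_], sum(x[i, t] for t in 1:length(P[i])) == 1)
    @constraint(m, [i = 1:I_, j = 1:I_, t1 = 1:length(P[i]); i != j],
        sum(y[i, j, t1, t2] for t2 in 1:length(P[j])) == x[i, t1])
    @constraint(m, [i = 1:I_, j = 1:I_, t2 = 1:length(P[j]); i != j],
        sum(y[i, j, t1, t2] for t1 in 1:length(P[i])) == x[j, t2])
    @objective(m, Max,
        sum((α[i] * P[i][t] - β[i] * P[i][t]^2) * x[i, t] for i in 1:I_, t in 1:length(P[i]))
      + sum(γ[i, j] * P[i][t1] * P[j][t2] * y[i, j, t1, t2]
            for i in 1:I_, j in 1:I_, t1 in 1:length(P[i]), t2 in 1:length(P[j]) if i != j))
    optimize!(m)
    p_opt = [P[i][findfirst(t -> value(x[i, t]) > 0.5, 1:length(P[i]))] for i in 1:I_]
    return p_opt, objective_value(m)
end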
ยง.ยง Semi-log demand model
We will now show how the solution approach we have defined earlier applies to the semi-log demand model.
Recall that the semi-log revenue function is
R(p, u) = ∑_{i=1}^I p_i · e^{α_i - β_i p_i + ∑_{j ≠ i} γ_{i,j} p_j}.
Observe that for a fixed , the function R(, ) is convex (and therefore quasiconvex) in , since it is the nonnegative weighted combination of exponentials of linear functions of = (, , ). Thus, given a subset โ, solving problemย (<ref>) should again be โeasyโ, assuming also that is a sufficiently tractable convex set. (In particular, the function R(,) can be represented using I exponential cones; assuming that is also representable using conic constraints, problemย (<ref>) will thus be some type of continuous conic program.)
We now turn our attention to the separation problem, max_โ R(, ). Specifically, this problem is
max_โโ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i p_i + โ_jโ iฮณ_i,j p_j.
Observe that since the function f(t) = log(t) is monotonic, the set of optimal solutions remains unchanged if we consider the same problem with a log-transformed objective
max_โlog( โ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i p_i + โ_jโ iฮณ_i,j p_j)
= max_โlog( โ_i=1^I e^ฮฑ_i + log p_i - ฮฒ_i p_i + โ_jโ iฮณ_i,j p_j).
To now further re-formulate this problem, we observe that the objective function can be re-written using the function g() = log( โ_i=1^I e^y_i). The function g is what is known as the log-sum-exp function, which is a convex function <cit.>. More importantly, a standard result in convex analysis is that any proper, lower semi-continuous, convex function is equivalent to its biconjugate function, which is the convex conjugate of its convex conjugate <cit.>. For the log-sum-exp function, this in particular means that g() can be written as
g(y) = max_{μ ∈ Δ_{[I]}} { μ^T y - ∑_{i=1}^I μ_i log μ_i }.
The function h(x) = x log x is the negative entropy function <cit.>, and is a convex function; thus, the function inside the max{ยท} is a linear function minus a sum of convex functions, and is a concave function.
For our problem, this means that (<ref>) can be re-written as
max_โlog R(, )
= max_โlog( โ_i=1^I e^ฮฑ_i + log p_i - ฮฒ_i p_i + โ_jโ iฮณ_i,j p_j)
= max_โmax_โฮ_[I]{โ_i=1^I ฮผ_i (ฮฑ_i + log p_i - ฮฒ_i p_i + โ_jโ iฮณ_i,j p_j) - โ_i=1^I ฮผ_i logฮผ_i }
= max_โ, โฮ_[I]{โ_i=1^I ฮผ_i (ฮฑ_i + log p_i - ฮฒ_i p_i + โ_jโ iฮณ_i,j p_j) - โ_i=1^I ฮผ_i logฮผ_i }.
To further reformulate this problem, we now make use of Assumptionย <ref>, which states that is the Cartesian product of finite sets. Let us introduce a new binary decision variable x_i,t which is 1 if product i's price is set to price t โ_i, and 0 otherwise. Using this new decision variable, observe that we can replace p_i wherever it occurs with โ_t โ_i t ยท x_i,t. We can also similarly replace log p_i with โ_t โ_ilog t ยท x_i,t. Therefore, problemย (<ref>) can be further reformulated as
maximize_{μ, x} ∑_{i=1}^I μ_i (α_i + ∑_{t ∈ 𝒫_i} log t · x_{i,t} - β_i · ∑_{t ∈ 𝒫_i} t · x_{i,t} + ∑_{j ≠ i} γ_{i,j} ∑_{t ∈ 𝒫_j} t · x_{j,t}) - ∑_{i=1}^I μ_i log μ_i
subject to ∑_{i=1}^I μ_i = 1,
∑_{t ∈ 𝒫_i} x_{i,t} = 1, for all i ∈ [I],
x_{i,t} ∈ {0,1}, for all i ∈ [I], t ∈ 𝒫_i,
μ_i ≥ 0, for all i ∈ [I].
This last problem is almost a mixed-integer convex program: as noted earlier, the expression - โ_i=1^I ฮผ_i logฮผ_i is concave in ฮผ. The main wrinkle is the presence of the bilinear terms in the objective function, specifically terms of the form ฮผ_i ยท x_j,t. Fortunately, we can circumvent this difficulty by introducing a new decision variable, w_i,j,t, which is the linearization of ฮผ_i ยท x_j,t, for each i, j โ [I], t โ_j. By adding this new decision variable and additional constraints, we arrive at our final formulation, which is a mixed-integer convex program.
maximize_{μ, w, x} ∑_{i=1}^I μ_i α_i + ∑_{i=1}^I ∑_{t ∈ 𝒫_i} log t · w_{i,i,t} - ∑_{i=1}^I β_i · ∑_{t ∈ 𝒫_i} t · w_{i,i,t} + ∑_{i=1}^I ∑_{j ≠ i} γ_{i,j} · (∑_{t ∈ 𝒫_j} t · w_{i,j,t}) - ∑_{i=1}^I μ_i log μ_i
subject to ∑_{t ∈ 𝒫_j} w_{i,j,t} = μ_i, for all i ∈ [I], j ∈ [I],
∑_{i=1}^I w_{i,j,t} = x_{j,t}, for all j ∈ [I], t ∈ 𝒫_j,
∑_{i=1}^I μ_i = 1,
∑_{t ∈ 𝒫_i} x_{i,t} = 1, for all i ∈ [I],
w_{i,j,t} ≥ 0, for all i ∈ [I], j ∈ [I], t ∈ 𝒫_j,
x_{i,t} ∈ {0,1}, for all i ∈ [I], t ∈ 𝒫_i,
μ_i ≥ 0, for all i ∈ [I].
There are a few important points to observe about this formulation. First, note that because the ฮผ_i's sum to 1 over i, and the x_j,t's are binary and sum to 1 over t โ_j for any j, then ensuring that w_i,j,t = ฮผ_i ยท x_j,t can be done simply through constraintsย (<ref>) and (<ref>). This is different from the usual McCormick envelope-style linearization technique, which in this case would involve the four inequalities:
w_{i,j,t} ≤ x_{j,t},
w_{i,j,t} ≤ μ_i,
w_{i,j,t} ≥ x_{j,t} + μ_i - 1,
w_{i,j,t} ≥ 0,
for every i โ [I], j โ [I], t โ_j. It is not difficult to show that these constraints are implied by constraintsย (<ref>), (<ref>) and (<ref>).
Second, at the risk of belaboring the obvious, the optimal objective value of problemย (<ref>) is the value of max_โlog R(, ), where R is the semi-log revenue function. Upon solving problemย (<ref>) to obtain the objective value Z', we can obtain the optimal objective value of the untransformed problem max_โ R(, ) as e^Z'.
Third, this formulation is notable because, to our knowledge, this is the first exact mixed-integer convex formulation of the nominal multi-product pricing problem under semi-log demand and a price set defined as the Cartesian product of finite sets. To date, virtually all research that has considered solving this type of problem in the marketing and operations management literatures has involved heuristics (see, for example, Section EC.3 of , which solves log-log and semi-log multi-product pricing problems for a collection of stores using local search). From this perspective, although we developed this formulation as part of the overall solution approach for the RRPO problem, we believe it is of more general interest.
Building on the previous point, problemย (<ref>) can be formulated as a mixed-integer exponential cone program. Such problems are garnering increasing attention from the academic and industry sides. In particular, since 2019, the MOSEK solver <cit.> supports the exponential cone and can solve mixed-integer conic programs that involve the exponential cone to global optimality. Although the solution technology for mixed-integer conic programs is not as developed as that of mixed-integer linear programs (as exemplified by state-of-the-art solvers such as Gurobi and CPLEX), it is reasonable to expect that these solvers will continue to improve and allow larger and larger problem instances to be solved to optimality in the future.
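To give a concrete sense of how problem (<ref>) can be set up, the sketch below implements it in JuMP under the assumption that Mosek is available through MosekTools; the entropy terms are modeled with one exponential cone per product via auxiliary variables s_i ≤ -μ_i log μ_i. This is our own illustrative implementation, not the code used for the experiments.

using JuMP, MosekTools

# Nominal semi-log pricing: P[i] is the finite price list of product i, and (α, β, γ)
# are the fixed parameter values. The optimal value equals max_p log R(p, ·).
function semilog_nominal(P, α, β, γ)
    I_ = length(P)
    m = Model(Mosek.Optimizer)
    @variable(m, μ[1:I_] >= 0)
    @variable(m, x[j = 1:I_, t = 1:length(P[j])], Bin)
    @variable(m, w[i = 1:I_, j = 1:I_, t = 1:length(P[j])] >= 0)
    @variable(m, s[1:I_])                          # s[i] ≤ -μ[i] log μ[i] (entropy epigraph)
    @constraint(m, [i = 1:I_], [s[i], μ[i], 1] in MOI.ExponentialCone())
    @constraint(m, sum(μ) == 1)
    @constraint(m, [j = 1:I_], sum(x[j, t] for t in 1:length(P[j])) == 1)
    @constraint(m, [i = 1:I_, j = 1:I_], sum(w[i, j, t] for t in 1:length(P[j])) == μ[i])
    @constraint(m, [j = 1:I_, t = 1:length(P[j])], sum(w[i, j, t] for i in 1:I_) == x[j, t])
    @objective(m, Max,
        sum(α[i] * μ[i] for i in 1:I_)
      + sum((log(P[i][t]) - β[i] * P[i][t]) * w[i, i, t] for i in 1:I_, t in 1:length(P[i]))
      + sum(γ[i, j] * P[j][t] * w[i, j, t] for i in 1:I_, j in 1:I_, t in 1:length(P[j]) if i != j)
      + sum(s))
    optimize!(m)
    return objective_value(m)   # log of the optimal nominal revenue
end

Exponentiating the returned value gives the optimal nominal revenue; the same skeleton extends to the log-log case by replacing the coefficients t with log t, as in problem (<ref>).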
Lastly, we comment that the same reformulation technique used above โ taking the logarithm, replacing the log-sum-exp function with its biconjugate, and then linearizing the products of the binary decision variables and the probability mass function values (the ฮผ_i variables) that arise from the biconjugate โ can also be used to derive an exact formulation of the deterministic robust price optimization problem. By taking the same approach, one obtains a max-min-max problem, and one can use Sion's minimax theorem again to swap the inner maximization over ฮผ with the minimization over to obtain a robust counterpart that can then be further reformulated using duality or otherwise solved using delayed constraint generation. We provide the details of this derivation in Sectionย <ref> of the ecompanion.
ยง.ยง Log-log demand model
To now show how the solution scheme in Sectionย <ref> applies to the log-log approach, we again recall the form of the log-log revenue function:
R(, ) = โ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i log p_i + โ_jโ iฮณ_i,jlog p_j
= โ_i=1^I e^ฮฑ_i + log p_i - ฮฒ_i log p_i + โ_jโ iฮณ_i,jlog p_j
Using the same biconjugate trick as with the semi-log approach, we can show that
max_โlog R(, )
= max_โlog( โ_i=1^I e^ฮฑ_i + log p_i - ฮฒ_i log p_i + โ_jโ iฮณ_i,jlog p_j)
= max_โmax_โฮ_[I]{โ_i=1^I ฮผ_i (ฮฑ_i + log p_i - ฮฒ_i log p_i + โ_jโ iฮณ_i,jlog p_j) - โ_i=1^I ฮผ_i logฮผ_i }
If we now invoke Assumptionย <ref>, then we can introduce the same decision variables x_i,t and w_i,j,t as in problemย (<ref>) to obtain a mixed-integer convex formulation of the log-log price optimization problem, which has the same feasible region as the semi-log formulationย (<ref>):
maximize_{μ, w, x} ∑_{i=1}^I μ_i α_i + ∑_{i=1}^I ∑_{t ∈ 𝒫_i} log t · w_{i,i,t} - ∑_{i=1}^I β_i · ∑_{t ∈ 𝒫_i} log t · w_{i,i,t} + ∑_{i=1}^I ∑_{j ≠ i} γ_{i,j} ∑_{t ∈ 𝒫_j} log t · w_{i,j,t} - ∑_{i=1}^I μ_i log μ_i
subject to constraints (<ref>) – (<ref>).
While the feasible region of problemย (<ref>) is the same as that of (<ref>), the objective function of (<ref>) is different. Just like problemย (<ref>), problemย (<ref>) can be written as a mixed-integer exponential cone program, and similarly, to the best of our knowledge, this is the first exact mixed-integer convex formulation of the log-log multi-product price optimization problem under a Cartesian product price set. Lastly, just like the semi-log problemย (<ref>), one can easily modify the formulation to obtain an exact formulation of the deterministic robust price optimization problem under log-log demand (see Sectionย <ref> of the ecompanion).
We note that the log-log separation problem has an interesting property, which is that there exist optimal solutions that are extreme, in the sense that each product's price is set to either its lowest or highest allowable price. This property is formalized in the following proposition (see Sectionย <ref> for the proof).
Suppose that Assumption <ref> holds. Let (μ, p) be an optimal solution of problem (<ref>). Then there exists an optimal solution (μ, p') such that for each i ∈ [I], either p'_i = min 𝒫_i or p'_i = max 𝒫_i.
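A quick brute-force illustration of this property on a small random instance (our own example, with arbitrary parameter values) is given below; the reported maximizer should place every product at its lowest or highest price level.

# Nominal log-log revenue of a price vector p.
loglog_revenue(p, α, β, γ) = sum(p[i] * exp(α[i] - β[i] * log(p[i]) +
    sum(γ[i, j] * log(p[j]) for j in eachindex(p) if j != i)) for i in eachindex(p))

P = [[1.0, 2.0, 3.0, 4.0, 5.0] for _ in 1:3]     # three products, five price levels each
α = 10.0 .+ rand(3); β = 1.0 .+ rand(3); γ = 0.3 .* (rand(3, 3) .- 0.5)

best = argmax(p -> loglog_revenue(p, α, β, γ), Iterators.product(P...))
println(best)   # each coordinate is expected to equal 1.0 or 5.0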
ยง SOLUTION METHOD FOR FINITE , FINITE
In addition to the case where is convex, we also consider the case where is a finite discrete set. Due to page limitations, our presentation of our solution method for this case is relegated to Sectionย <ref> of the ecompanion. At a high level, the foundation of our approach is double column generation, which alternates between solving the primal version of the RRPO problem, which is max_โฮ_min_โโ_โฯ_ R(, ), and the dual version of the RRPO problem, which is min_โฮ_max_โโ_โฮป_ R(, ). In each iteration, we solve the primal problem with replaced by a subset โ, where we use constraint generation to handle the inner minimization over โ; this gives rise to a finite set of uncertainty realizations โ. We then solve the dual problem with replaced by , where we use constraint generation to handle the inner maximization over โ, which gives rise to a finite set of price vectors โ. At each step of the algorithm, the objective value of the primal problem restricted to is a lower bound on the true optimal objective, while the objective value of the dual problem restricted to is an upper bound on the optimal objective; the algorithm terminates when these two bounds are equal or are otherwise within a pre-specified tolerance.
To implement this approach for the demand models that we consider, one needs to be able to solve the primal separation problem (solve min_โโ_โฯ_ R(,)) and the dual separation problem (solve max_โโ_โฮป_ยท R(, )). We show how both of these problems can be reformulated as mixed-integer exponential cone programs for the semi-log and log-log demand models, and as mixed-integer linear programs for the linear demand model.
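At a pseudocode level, the double column generation scheme can be sketched as follows (our own schematic illustration; solve_restricted_primal and solve_restricted_dual stand in for the restricted primal and dual problems described above, each of which is assumed to also return the new column it generates, or nothing when none is violated).

function double_column_generation(P0, U0; tol = 1e-6, max_iters = 1_000)
    Phat, Uhat = copy(P0), copy(U0)
    σ = nothing                              # incumbent distribution over price vectors
    lower, upper = -Inf, Inf
    for _ in 1:max_iters
        σ, lower, u_new = solve_restricted_primal(Phat, Uhat)   # lower bound on Z^*_RRPO
        u_new === nothing || push!(Uhat, u_new)
        _, upper, p_new = solve_restricted_dual(Phat, Uhat)     # upper bound on Z^*_RRPO
        p_new === nothing || push!(Phat, p_new)
        upper - lower <= tol && break
    end
    return σ, lower, upper
end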
ยง NUMERICAL EXPERIMENTS
In this section, we conduct several sets of experiments involving synthetic problem instances to understand the tractability of the RRPO approach and the improvement in worst-case revenue of the randomized robust pricing strategy over the deterministic robust pricing strategy. In Sectionย <ref>, we consider instances involving the linear, semi-log and log-log models where the uncertainty set is a convex set. In Sectionย <ref>, we consider instances involving the linear, semi-log and log-log models where the uncertainty set is a discrete set.
Finally, in Sectionย <ref>, we consider log-log and semi-log robust price optimization instances derived from a real data set on sales of orange juice products from a grocery store chain.
All of our code is implemented in the Julia programming language <cit.>. All optimization models are implemented using the JuMP package <cit.>. All linear and mixed-integer linear programs are solved using Gurobi <cit.> and all mixed-integer exponential cone programs are solved using Mosek <cit.>, with a maximum of 8 threads per program. All of our experiments are conducted on Amazon Elastic Compute Cloud (EC2), on a single instance of type m6a.48xlarge (AMD EPYC 7R13 processor, with 192 virtual CPUs and 768 GB of memory).
ยง.ยง Experiments with convex and linear, log-log and semi-log demand models
In our first set of experiments, we consider the log-log and semi-log demand models, and specifically consider a L1-norm uncertainty set :
{ (α, β, γ) : ‖ξ‖_1 ≤ θ, ξ_k = (u_k - u_{0,k}) / u_{0,k} for k ∈ {1, …, I + I^2} },
where u denotes the stacked vector of the uncertain parameters (α, β, γ), u_0 is its vector of nominal values, and θ is the budget of the uncertainty set.
For each of the three demand models (linear, semi-log and log-log), we vary the number of products I varies in {5, 10, 15, 20}. For each value of I, we generate 24 random instances, where the values of , , are independently randomly generated as follows:
* Linear demand. Each ฮฑ_i โผ(200, 300), ฮฒ_i โผ(5,15), ฮณ_i,jโผ(-0.1,+0.1).
* Semi-log demand. Each ฮฑ_i โผ(4,7), ฮฒ_i โผ(1,1.5), ฮณ_i,jโผ(-0.4, +0.4).
* Log-log demand. Each ฮฑ_i โผ(10,14), ฮฒ_i โผ(1,2), ฮณ_i,jโผ(-0.6, +0.6).
For each product i โ [I], we set _i = {1,2,3,4,5}.
For the uncertainty set, the budget parameter θ varies in {0.1, 0.5, 1, 1.5, 2} for each instance.
For each instance, we solve the nominal problem, the DRPO problem and the RRPO problem. For DRPO and RRPO, we vary the budget parameter ฮธ that defines the uncertainty within the set {0.1, 0.5, 1, 1.5, 2}. To solve the RRPO problem for each instance, we execute the constraint generation solution algorithm described in Sectionย <ref>. For instances with log-log demand, we take advantage of Propositionย <ref> and thus simplify the price set to contain the highest and lowest price levels for each product only. To solve the DRPO problem, we formulate it as either a mixed-integer linear program (for linear demand) or a mixed-integer exponential cone program (for semi-log and log-log demand) via the log-sum-exp biconjugate-based technique described in Sectionย <ref> of the ecompanion, and use standard LP duality techniques to reformulate the objective function of the resulting problem (formulationย (<ref>) and formulationย (<ref>) in Sectionย <ref>). Due to the prohibitive computation times that we encountered for the DRPO problem with log-log and semi-log demand, we impose a computation time limit of 20 minutes. From our experimentation with the DRPO problem for log-log and semi-log, it is often the case that an optimal or nearly optimal solution is found early on, and the bulk of the remaining computation time, which can be in the hours, is required by Mosek to prove optimality and close the gap.
Finally, to solve the nominal problem for each instance, we also use the same biconjugate-based technique to formulate the nominal price optimization problem as a mixed-integer exponential cone program.
We present the objective value as well as the computation time of each RRPO, DRPO and nominal problem. We additionally compute several other metrics. We compute [R(^*_,_0)], which is the expected revenue of the randomized RPO solution assuming that the nominal parameter values are realized. We also compute R(^*_,_0), the nominal revenue of DRPO solution, and Z_, = min_โ R(^*_,), the worst-case revenue of the nominal solution. We use the following metric to show the benefit of randomized strategy in robust price optimization:
RI = (Z^*_RRPO - Z^*_DRPO) / Z^*_DRPO × 100%.
For each metric, we compute its average over the 24 instances for each value of I and ฮธ.
Tablesย <ref>, <ref> and <ref> shows the results for the linear, semi-log and log-log demand models, respectively. For linear demand, we find that the improvement by randomized robust pricing over deterministic robust pricing is modest; the largest average improvement is 4.63% for I = 5, ฮธ = 2. We note that we experimented with other forms of uncertainty sets and choices of the nominal parameter values, but we generally did not encounter large improvements of the same size as we did for the other two demand models.
Besides linear demand, these results also show that for semi-log and log-log demand, there can be a very large difference between the randomized and deterministic robust pricing schemes. The benefit of randomization, quantified by the metric RI, ranges from about 3% to as much as 1320% for semi-log instances, and from about 7% to 243% for log-log instances. Note that the magnitude of RI for the semi-log instances is larger than that for log-log, because the logarithm of demand in the semi-log model has a linear dependence on price which results in an exponential dependence of demand on price, but in log-log, the logarithm of demand is linear in the logarithm of price, resulting in a milder polynomial dependence of demand on price. For semi-log and log-log demand, both the worst-case revenue of RRPO solution and the worst-case revenue of DRPO solution decrease as the uncertainty set becomes larger, and the rate of reduction becomes less as the uncertainty budget ฮธ is larger. In addition, for linear, semi-log and log-log demand, the RI generally increases as the uncertainty budget ฮธ increases. Also, as we expect, Z_^* โฅ Z_^* โฅ Z_,. Interestingly, the randomized robust pricing scheme can achieve better performance than the deterministic robust scheme under the nominal demand model (for example, compare [R(^*_,_0)] and R(^*_,_0) for log-log demand with I = 10); this appears to be the case for almost all (I,ฮธ) combinations for log-log, and for a smaller set of (I, ฮธ) combinations for semi-log.
With regard to the computation time, we observe that the computation time generally grows with the number of products for both RRPO and DRPO. For linear demand, both RRPO and DRPO can be solved extremely quickly (no more than 3 seconds on average, even with I = 20 products). For log-log and semi-log, when the number of products is held constant, the amount of time required to solve either RRPO generally becomes larger as the uncertainty set becomes larger. However, what we find is that for both log-log and semi-log demand, RRPO generally requires much less time to solve to complete optimality than DRPO; this is likely because the nominal problem (which is a key piece of the constraint generation method for RRPO when is convex) can be solved rapidly, whereas the robust version of this mixed-integer exponential cone program is more challenging for Mosek.
ยง.ยง Experiments with discrete and log-log and semi-log demand models
In our second set of experiments, we consider linear, log-log and semi-log demand models, where uncertainty is modeled through a discrete uncertainty set. We specifically consider a discrete budget uncertainty set here:
{ u_0 - (u_0 - u^L) ∘ ξ - (u_0 - u^U) ∘ η | 1^⊤ ξ + 1^⊤ η ≤ Γ, ξ + η ≤ 1, ξ, η ∈ {0,1}^{I+I^2} },
where ∘ denotes the component-wise product, u^L and u^U are respectively the component-wise lower and upper bounds of the parameter vector, u_0 is the nominal value of the uncertain parameter vector (α, β, γ), Γ is the budget of uncertainty and I + I + I(I-1) = I + I^2 is the total number of demand model parameters. Under the budget uncertainty set, up to Γ parameters can attain their lower or upper bounds, whereas the remaining parameters can only attain their nominal values. We shall assume that the lower bound vector and upper bound vector are defined as u^L = 0.7 u_0 and u^U = 1.3 u_0, where u_0 is the vector of nominal parameters.
For each of the three demand models (linear, semi-log and log-log), we vary the number of products I in {5, 10, 15}. For each value of I, we generate 24 random instances, where the values of , , are independently randomly generated as follows:
* Linear demand. Each ฮฑ_i โผ(100, 200), ฮฒ_i โผ(5,15), ฮณ_i,jโผ(-0.1,+0.1).
* Semi-log demand. Each ฮฑ_i โผ(8,10), ฮฒ_i โผ(1.5,2), ฮณ_i,jโผ(-0.5, +0.5).
* Log-log demand. Each ฮฑ_i โผ(10,14), ฮฒ_i โผ(1.5,2), ฮณ_i,jโผ(-0.8, +0.8).
We set the price set of each i โ [I] as _i = {1,2,3,4,5}.
For each instance, we solve the nominal problem, the DRPO problem and the RRPO problem. For both RRPO and DRPO, we test a different collection of ฮ values for the uncertainty set depending on the value of I.
To solve the RRPO problem for each instance, we execute the double column generation algorithm described in Sectionย <ref>. In our preliminary experimentation with the restricted dual problem, we observed that exactly solving the dual separation problem (<ref>) (for semi-log demand) or (<ref>) (for log-log demand) via Mosek takes quite a long time. Therefore, to reduce the computation time of RRPO with discrete , we instead use a random improvement heuristic to obtain the solution of dual separation problem. Specifically, we randomly select a price vector ^0 as a starting point. We start with changing the price of product i=1 and keeping the prices of all other products unchanged, to search for a price vector ^1 that makes the objective value of the dual separation problem the largest. Then based on the current price vector ^1, we change the price of product i=2 and keep the prices of all other products unchanged, to search for a better price vector ^2. We repeat this for all of the products, yielding the price vector ^I. We repeat this procedure with 100 random starting points, and retain the best solution over these 100 repetitions. Although this approximate method cannot guarantee that the overall double column generation procedure converges to a provably optimal solution, our preliminary experimentation with small instances suggests that it obtains the exact solution of RRPO that one would obtain if the dual separation problem were solved to provable optimality. For the linear demand model, we solve both primal and dual separation problems as mixed-integer programs in Gurobi.
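The sketch below illustrates the structure of this random improvement heuristic in Julia (an illustration of the procedure described above, not the paper's code); here f is the objective to be maximized over price vectors, e.g., the weighted worst-case revenue in the dual separation problem, and P[i] is the price list of product i.

function random_improvement(f, P; restarts = 100)
    best_p, best_val = nothing, -Inf
    for _ in 1:restarts
        p = [rand(P[i]) for i in eachindex(P)]        # random starting price vector
        for i in eachindex(P)                         # one pass over the products
            vals = [f(vcat(p[1:i-1], t, p[i+1:end])) for t in P[i]]
            p[i] = P[i][argmax(vals)]                 # best price for product i, others fixed
        end
        v = f(p)
        if v > best_val
            best_p, best_val = copy(p), v
        end
    end
    return best_p, best_val
end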
With regard to the DRPO problem for each log-log and semi-log instance, we note that we do not have a solution algorithm or formulation to solve it exactly. Therefore, we again use the same random improvement heuristic to obtain an approximate solution of DRPO with these demand models. We randomly pick a starting price vector, and change the price of one product at a time to improve the worst case objective value until we no longer get an improvement. We repeat this procedure 50 times and select the best resulting price vector from these 50 repetitions as the approximate solution of DRPO. We note that we use a smaller number of repetitions because each repetition involves solving worst-case problem over โ repeatedly in order to evaluate the robust objective of each candidate price vector; this contributes to a large overall computation time for this approach. With regard to the DRPO problem for linear demand, we observe that the objective function of DRPO is linear in the uncertain parameter vector , and that the description of the set polyhedronย (<ref>) is integral (i.e., extreme points of this polyhedron naturally correspond to , โ{0,1}^2I + I^2). Therefore, DRPO can be solved exactly by relaxing the requirement , โ{0,1}^2I + I^2 in the uncertainty setย (<ref>), and reformulating the worst-case objective using LP duality, leading to a mixed-integer linear program.
Lastly, for the nominal problem for each instance, we use the biconjugate technique to formulate it as a mixed-integer exponential cone program.
We report the same metrics as in Section <ref>, with two minor modifications. We use Ẑ_DRPO and p̂_DRPO to denote the approximate objective value and solution of DRPO given by the random improvement heuristic. The approximate improvement percentage is then R̂I = (Z^*_RRPO - Ẑ_DRPO) / Ẑ_DRPO × 100%.
Tableย <ref> shows the results for the linear demand model. Here, we interestingly find that the vast majority of instances are randomization-proof, i.e., the average is below 1%, if not exactly 0%. We note here that we tested other families of instances where (, , ) and _i are generated differently, but in virtually every case we found that the relative improvement of randomized over deterministic robust pricing was very small. These results, together with those for the convex case, suggest that randomized pricing is of limited benefit compared to deterministic pricing for the uncertain linear demand model case.
Tablesย <ref> and <ref> show how the results vary for different values of discrete uncertainty budget ฮ for semi-log and log-log demand.[We note here that for the log-log model, we encountered one instance (I = 10, ฮ = 44) where แบ_ was higher than Z^*_; in general, Z^*_ should be higher than Z^*_. We have verified that the reason for this anomaly was a numerical error in the solution of the worst-case subproblem in Mosek within the DRPO random improvement heuristic. This instance is omitted in our calculation of แบ_, and [R(_, _0)], and the affected entries are indicated by * in Tableย <ref>. ] We can see that, in most of the cases we test, the randomized robust pricing strategy provides a substantial benefit over the deterministic robust price solution. The percentage improvement given by randomization ranges from 0% to as much as 488.59% for semi-log instances, and from 0% to 175.18% for log-log instances. Similar to the cases with convex , both Z_^* and แบ_ decrease as the uncertainty set becomes larger. While the RI metric generally decreases as ฮ increases, in some instances it can be increasing in ฮ at small values of ฮ (this is visible in the average results metrics for I = 10 with semi-log demand). When ฮ is large enough, the RI metric often becomes very small or even zero. This makes sense when interpreted through Corollaryย <ref>. Specifically, when nature is able to make a large number of demand model parameters take their worst values, it is likely that at the ^* at which the optimal objective of DRPO is attained is such that the price vector for the nominal problem with ^* coincides with the optimal price vector for DRPO.
Thus, by Corollaryย <ref>, the problem will be randomization-proof.
With regard to the computation time, the computation time for both RRPO and DRPO increases with I. Interestingly, the computation time required by RRPO does not necessarily increase as the discrete uncertainty budget ฮ increases; in some cases, when ฮ is large, the RRPO solution degenerates to the DRPO solution, allowing the double column generation algorithm to terminate quickly. By comparing t_ and t_, we can see that RRPO in general takes less time than DRPO. The computation time of the RRPO problem in semi-log instances is no more than approximately two minutes on average (I=15, ฮ=60), while in log-log instances, solving RRPO requires no more than 1.5 minutes on average (I=15, ฮ=18). Lastly, for linear demand, the computation time for RRPO is extremely small, requiring no more than a few seconds on average.
ยง.ยง Results using real data instances
In our last set of experiments, we evaluate the effectiveness of solution algorithms on problem instances calibrated with real data. For these experiments, we consider the data set from <cit.>, which was accessed via the bayesm package in R <cit.>. This data set contains price and sales data for I=11 different orange juice brands at the Dominickโs Finer Foods chain of grocery stores in the Chicago area. Each observation in the data set consists of: the store s; the week t; the log of the number of units sold log(q_t,s,i) for brand i; the prices p_t,s,1...p_t,s,11 of the eleven orange juice brands; the dummy variable d_t,s,i indicating whether brand i had any in-store displays at store s in week t; and the variable f_t,s,i indicating if brand i was featured/advertised at store s in week t.
We fit log-log and semi-log regression models for each brand i according to the following specifications:
(semi-log) log(q_{t,s,i}) = α_i - β_i p_{t,s,i} + ∑_{j ≠ i} γ_{ij} p_{t,s,j} + θ_i d_{t,s,i} + μ_i f_{t,s,i} + ϵ_{t,s,i},
(log-log) log(q_{t,s,i}) = α_i - β_i log(p_{t,s,i}) + ∑_{j ≠ i} γ_{ij} log(p_{t,s,j}) + θ_i d_{t,s,i} + μ_i f_{t,s,i} + ϵ_{t,s,i},
where {ฯต_t,s,i}_t,s,i is a collection of IID normally distributed error terms. The point estimates of the model parameters are provided in Sectionย <ref> of the ecompanion. We note that prior work has considered the estimation of both of these types of models (see the examples in ; see also and ).
We consider the problem of obtaining a price vector = (p_1, p_2,โฆ, p_11) for this collection of 11 products. To formulate the price vector set , we assume that each product i has five allowable prices, which are shown in Tableย <ref>. These prices correspond to the 0th (i.e., minimum), 25th, 50th, 75th and 100th (i.e., maximum) percentiles of the observed prices in the dataset.
For each type of demand model, we consider two forms of uncertainty set: a convex L1-norm uncertainty set (as in equation (<ref>)) and a discrete budget uncertainty set (as in equation (<ref>)). We vary the budget θ of the L1-norm uncertainty set and present the results in Tables <ref> and <ref>. We also vary the budget Γ of the discrete budget uncertainty set and present the results in Tables <ref> and <ref>. Specifically, for the discrete budget uncertainty set, we assume that the upper and lower bounds of each α_i are 1.2 and 0.8 times its nominal value, those of each β_i are 1.3 and 0.7 times its nominal value, and those of each γ_{i,j} are 1.4 and 0.6 times its nominal value. Tables <ref> and <ref> below present the results under the convex L1-norm uncertainty set for the semi-log and log-log demand models, respectively. Due to page considerations, the results for the discrete case are provided in Section <ref> of the ecompanion.
We can see from Tables <ref> and <ref> that the randomized pricing strategy performs significantly better than the deterministic pricing solution under the worst-case demand model, with the RI ranging from 17.86% to 47.81% for semi-log demand and from 27.71% to 92.31% for log-log demand. In addition, for the same demand type and uncertainty set, the computation time of RRPO is comparable to that of DRPO. With regard to the discrete uncertainty set case, the results shown in Section <ref> are qualitatively similar, with the randomized robust pricing strategy similarly outperforming the deterministic robust solution. We do also observe that under both demand models, solving RRPO with the discrete uncertainty set requires more time than solving it with the convex uncertainty set, although the overall time is still reasonable (in the most extreme case, RRPO for the discrete uncertainty set can take up to approximately 300 seconds, and DRPO requires up to 600 seconds, compared to 60 seconds for both RRPO and DRPO for the L1-norm uncertainty set).
Lastly, it is also interesting to compare the randomized robust pricing strategy to the deterministic robust price vector. Taking the log-log demand model and the convex L1 uncertainty set with ฮธ = 0.8 as an example, the solution of the RRPO problem is the following randomized pricing strategy:
(3.87, 5.82, 1.25, 0.99, 3.17, 5.09, 3.07, 0.91, 0.69, 2.69, 1.99) with probability 0.1628,
(1.29, 5.82, 3.35, 3.06, 0.88, 2.76, 3.07, 2.69, 3.08, 2.69, 4.99) with probability 0.1752,
(3.87, 2.86, 1.25, 3.06, 3.17, 5.09, 0.91, 2.69, 3.08, 0.52, 4.99) with probability 0.2658,
(3.87, 5.82, 3.35, 3.06, 0.88, 2.76, 3.07, 0.91, 3.08, 2.69, 4.99) with probability 0.0381,
(3.87, 2.86, 1.25, 3.06, 3.17, 2.76, 3.07, 2.69, 0.69, 2.69, 4.99) with probability 0.3258,
(3.87, 2.86, 1.25, 3.06, 3.17, 5.09, 0.91, 0.91, 3.08, 0.52, 4.99) with probability 0.0323.
Observe that in this randomized pricing strategy, each price vector is such that the product is set to either its lowest or highest allowable price. This is congruent with Propositionย <ref>, which suggests that the nominal problem under the log-log demand model will always have a solution that involves setting each product to its highest or lowest price; since our solution algorithm is based on constraint generation using this nominal problem as a separation procedure, it makes sense that the randomized price vector will be supported on such extremal price vectors. On the other hand, the solution of the DRPO problem is the price vector _ = (3.87, 2.86, 1.25, 3.06, 3.17, 2.76, 0.91, 2.69, 0.69, 0.52, 4.99), for which we observe that the chosen prices are also either the lowest or highest for each product.
ยง CONCLUSIONS
In this paper, we considered the problem of designing randomized robust pricing strategies to maximize worst-case revenue. We presented idealized conditions under which such randomized pricing strategies fare no better than the deterministic robust pricing approach, and subsequently we developed solution methods for obtaining the randomized pricing strategies in different settings (when the price set is finite, and when the uncertainty set is either convex or discrete). We showed using both synthetic instances and real data instances that such randomized pricing strategies can lead to large improvements in worst-case revenue over deterministic robust price prescriptions.
With regard to future research, an interesting direction is to consider a version of the robust price optimization problem that incorporates contextual information. For example, in the ecommerce setting, different customers who log onto a retailer's website will differ in characteristics (age, web browser, operating system, etc.). This information could be used to craft a richer uncertainty set, and to motivate randomization strategies that randomize differently based on user characteristics. More generally, we hope that this work, which was inspired by the paper of <cit.>, motivates further study in how randomization can be used to mitigate risk in revenue management applications.
plainnat
ยง OMITTED PROOFS
ยง.ยง Proof of Theoremย <ref>
To prove this, we prove that Z^*_โฅ Z^*_. For any R โ, and any distribution F supported on , we have
โซ_ R() dF() โค R ( โซ_ dF() ),
which follows by Jensen's inequality and the concavity of R. This implies that for any F โ,
inf_R โโซ_ R() dF() โคinf_R โ R ( โซ_ dF() ).
Therefore, we have that
Z^*_ = max_F โ{inf_R โโซ_ R() dF() }
โคmax_F โ{inf_R โ R ( โซ_ dF() ) }
โคmax_โ{inf_R โ R ( ) ) }
= Z^*_,
where the second inequality follows because is assumed to be convex, and thus for any F โ, โซ_ dF() is contained in .
ยง.ยง Proof of Theoremย <ref>
Before we begin, we recall Sion's minimax theorem:
Let M and N be convex spaces, with at least one of the two spaces being compact. Let f: M × N → ℝ be a function such that f(μ, ν) is quasiconcave and upper semi-continuous in μ for any fixed ν, and quasiconvex and lower semi-continuous in ν for any fixed μ. Then sup_{μ ∈ M} inf_{ν ∈ N} f(μ, ν) = inf_{ν ∈ N} sup_{μ ∈ M} f(μ, ν).
We have:
Z^*_ = max_F โmin_โโซ_ R(, ) dF()
= min_โmax_F โโซ_ R(, ) dF()
= min_โmax_โ R(, )
= max_โmin_โ R(, )
= Z^*_.
In the above, the steps are justified as follows. The first step follows by the definition of Z^*_.
The second step follows by applying Sion's minimax theorem to interchange the order of minimization over and maximization over . The justification for applying Sion's minimax theorem here is that (1) the set of distributions supported on is a convex set; (2) โซ_ R(, ) dF() is linear in F when is fixed; and (3) โซ_ R(, ) dF() is quasiconvex in when F is fixed by the hypotheses of the theorem. Note that โซ_ R(, ) dF() is continuous in F if the set of measures is endowed with the topology of weak convergence. Additionally, note that โซ_ R(, ) dF() is continuous in . This is guaranteed because, by compactness of and and continuity of R in (, ) from Assumptionย <ref>, there exists a constant C such that |R(,)| < C for all โ and โ; thus, by the bounded convergence theorem, for any sequence (_k)_k=1^โ such that _k โ, we shall also have โซ_ R(, _k) dF() โโซ_ R(, ) dF().
The third step follows by the fact that max_F โโซ_ R(, ) dF() = max_โ R(, ), since includes the distribution the Dirac delta distribution ฮด_' that places unit probability mass on ', for every ' โ.
The fourth step follows by applying Sion's minimax theorem again, using the hypotheses that R(, ) is quasiconvex in and quasiconcave in , and additionally that R is continuous in both and (Assumptionย <ref>).
ยง.ยง Proof of Theoremย <ref>
To establish this result, observe that
inf_Q โmax_โโซ_ R(, ) dQ()
= inf_Q โmax_โฮ_โ_โฯ() ยทโซ_ R(, ) dQ()
= max_โฮ_inf_Q โโ_โฯ() ยทโซ_ R(, ) dQ()
= max_โฮ_inf_Q โโซ_โ_โฯ() ยท R(, ) dQ()
= max_โฮ_min_โโ_โฯ() ยท R(, )
= Z^*_.
In the above, the steps are justified as follows. The first step follows because maximizing a function of over the finite set is the same as maximizing the expected value of that same function with respect to all probability mass functions supported on .
The second step follows by Sion's minimax theorem, because the quantity โ_โฯ() ยทโซ_ R(, ) dQ() is linear and therefore quasiconcave in for a fixed Q, and is linear and therefore quasiconvex in Q for a fixed ; additionally, the set ฮ_ = {โ^|||^T = 1, โฅ} is a compact convex set, and is a convex set. Additionally, note that โ_โฯ() ยทโซ_ R(, ) dQ() is clearly continuous in . It is also continuous in Q, because each term โซ_ R(, ) dQ() is continuous in Q when is endowed with the topology of weak convergence, and there are finitely many such terms.
The third step follows by the linearity of integration. The fourth step follows by the fact that contains the Dirac delta distribution that places unit mass on , for every โ. The final step just follows from the definition of Z^*_.
With this result in hand, observe that the existence of a Q โ such that for all โ, โซ_ R(, ) dQ() โค Z^*_ is equivalent to the existence of Q โ such that
max_โโซ_ R(, ) dQ() โค Z^*_,
which is equivalent to
inf_Q โmax_โโซ_ R(, ) dQ() โค Z^*_.
Since the left-hand side of this inequality is equal to Z^*_, the existence of the distribution Q โ as in the theorem statement is equivalent to Z^*_โค Z^*_; since it is always the case that Z^*_โฅ Z^*_, this is equivalent to the problem being randomization-proof.
ยง.ยง Proof of Corollaryย <ref>
Observe that since max_โmin_โ R(, ) โฅmin_โmax_โ R(, ) always holds, equationย (<ref>) is equivalent to
max_โmin_โ R(, ) โคmin_โmax_โ R(, ),
or equivalently,
Z^*_โฅmin_โmax_โ R(, ).
Observe that the condition min_โmax_โ R(, ) โค Z^*_ is exactly equivalent to the condition that there exists a โ such that for all โ, R(, ) โค Z^*_.
To connect this to Theoremย <ref>, consider Q = ฮด_, where ฮด_ is the Dirac delta distribution centered at . For any โ, R(, ) = โซ_ R(, ') dQ('). Thus, for this choice of Q, it is the case that for all โ, โซ_ R(, ') dQ(') โค Z^*_. By Theoremย <ref>, this is equivalent to randomization-proofness. Thus, it follows that the strong duality conditionย (<ref>) is equivalent to the RPO problem being randomization-proof.
ยง.ยง Proof of Corollaryย <ref>
To prove the โ direction of the equivalence, suppose that the robust price optimization problem is randomization-receptive. From Theoremย <ref>, a robust price optimization problem is randomization-proof if and only if there exists a distribution Q โ such that for all โ, โซ_ R(, ) dQ() โค Z^*_. The negation of this latter statement is the following statement:
โ Q โ, โโ such that โซ_ R(, ) dQ() > Z^*_.
We need to show that ^*_โmax_โ R(, ^*). To establish this, it is sufficient to show that there exists a โ such that R(, ^*) > R(^*_, ^*). Let Qฬ = ฮด_^*, where ฮด_^* denotes the Dirac delta distribution centered at ^*. By invoking (<ref>), we are assured of the existence of a price vector such that โซ_ R(, ) dQฬ() > Z^*_. Since Qฬ = ฮด_^*, we have that โซ_ R(, ) dQฬ() = R(, ^*), and thus we have that
R(, ^*) > R(_, ^*),
exactly as needed. Thus, it follows that ^*_โmax_โ R(, ^*).
To prove the โ direction of the equivalence, let โ be a price vector for which R(, ^*) > R(^*_, ^*). To establish that the problem is randomization-receptive, we shall again use the conditionย (<ref>).
Let Q โ be an arbitrary distribution. We need to show that there exists a โ that satisfies โซ_ R(, ) dQ() > Z^*_. There are two mutually exclusive and collectively exhaustive cases to consider:
Case 1: There exists a closed set B โ such that ^* โ B and Q(B) > 0. The candidate price vector we will consider in this case is ^*_. In this case, observe that since is compact, then B is also compact, and together with the extreme value theorem we can assert that min_โ B R(^*_, ) = R(^*_, ) for some โ B. Additionally, since min_โ R(^*_, ) has a unique solution ^* and ^* โ B, we are assured that R(^*_, ) > R(^*_, ^*). Armed with these facts, we have that
โซ_ R(^*_, ) dQ()
= โซ_B R(^*_, ) dQ() + โซ_โ B R(^*_, ) dQ()
โฅ R(^*_, ) ยท Q(B) + R(^*_, ^*) ยท (1 - Q(B))
> R(^*_, ^*) ยท ( Q(B) + 1 - Q(B))
= R(^*_, ^*)
= Z^*_,
which establishes that conditionย (<ref>) holds in Case 1.
Case 2: For every closed set B โ, either ^* โ B or Q(B) = 0. The candidate price vector in this case will be .
To establish conditionย (<ref>) in this case, let ฯต be any number such that 0 < ฯต < R(, ^*) - R(^*_, ^*). The assumption about the continuity of R(, ) in implies that there must exist a ฮด > 0 such that for any โ with - ^* < ฮด, R(, ) > R(^*_, ^*) + ฯต.
Let C = {โ| - ^* < ฮด}, which is an open set. Additionally, let B = โ C = {โ| - ^* โฅฮด} be the complement of C, which must be a closed set. By the assumption of Case 2, any closed subset of must be such that either ^* is inside that set, or the measure of that set under Q is zero. Here, by construction, B cannot contain ^*; therefore, we must have that Q(B) = 0. Since C and B are complements, it must also be the case that Q(C) = 1.
Armed with these facts, we now have that
โซ_ R(, ) dQ()
= โซ_C R(, ) dQ() + โซ_B R(, ) dQ()
โฅ [ R(^*_, ^*) + ฯต] ยท Q(C) + 0
= R(^*_, ^*) + ฯต
> R(^*_, ^*)
= Z^*_,
which again establishes that conditionย (<ref>) holds.
Since we have shown that conditionย (<ref>) holds in these two mutually exclusive and collectively exhaustive cases, it follows that the problem is randomization-receptive, as required.
ยง.ยง Example of necessity of uniqueness assumption in Corollaryย <ref>
Consider a single product pricing instance, i.e., I = 1, which we define as follows. Let = { p_1, p_2, p_3 } where p_1 = 5, p_2 = 8, p_3 = 9. Let the demand model d be a linear demand model, so that the uncertain parameter = (ฮฑ, ฮฒ) and d(p, ) = ฮฑ - ฮฒ p. Finally, let = { (ฮฑ_1, ฮฒ_1), (ฮฑ_2, ฮฒ_2), (ฮฑ_3, ฮฒ_3) }, where (ฮฑ_1, ฮฒ_1) = (10, 1), (ฮฑ_2, ฮฒ_2) = (3,0.1), (ฮฑ_3, ฮฒ_3) = (3.6, 0.2).
We first calculate min_โ R(p, ) for each p โ. We have:
* For p_1 = 5: p_1 (ฮฑ_2 - ฮฒ_2 p_1) = 12.5 < p_1 (ฮฑ_3 - ฮฒ_3 p_1) = 13 < p_1 (ฮฑ_1 - ฮฒ_1 p_1) = 25. Hence, min_โ R(p_1, ) = min{12.5, 13, 25} = 12.5.
* For p_2 = 8: p_2 (ฮฑ_1 - ฮฒ_1 p_2) = p_2 (ฮฑ_3 - ฮฒ_3 p_2) = 16 < p_2 (ฮฑ_2 - ฮฒ_2 p_2) = 17.6. Hence, min_โ R(p_2, ) = min{16, 16, 17.6} = 16, and note also that the minimizing is not unique (the minimum is attained at both (ฮฑ_1, ฮฒ_1) and (ฮฑ_3, ฮฒ_3)).
* For p_3 = 9: p_3 (ฮฑ_1 - ฮฒ_1 p_3) = 9 < p_3 (ฮฑ_3 - ฮฒ_3 p_3) = 16.2 < p_3 (ฮฑ_2 - ฮฒ_2 p_3) = 18.9. Hence, min_โ R(p_3, ) = min{9, 16.2, 18.9} = 9.
From this, we can see that the optimal deterministic robust price is p^*_ = p_2 = 8 and the optimal deterministic robust objective value is Z^*_ = 16. At p = 8, the worst case is attained at both (ฮฑ_1, ฮฒ_1) and (ฮฑ_3, ฮฒ_3), so the minimizer of R(p_2, ยท) is not unique.
Let us now consider the RRPO problem. When we write the problem max_โฮ_min_โโ_โฯ_ R(, ) as a linear program, we get the following problem:
, tmaximize t
subject to t โคฯ_p_1 ยทp_1 (ฮฑ_1 - ฮฒ_1 p_1) + ฯ_p_2 ยทp_2 (ฮฑ_1 - ฮฒ_1 p_2) + ฯ_p_3 ยทp_3 (ฮฑ_1 - ฮฒ_1 p_3),
t โคฯ_p_1 ยทp_1 (ฮฑ_2 - ฮฒ_2 p_1) + ฯ_p_2 ยทp_2 (ฮฑ_2 - ฮฒ_2 p_2) + ฯ_p_3 ยทp_3 (ฮฑ_2 - ฮฒ_2 p_3),
t โคฯ_p_1 ยทp_1 (ฮฑ_3 - ฮฒ_3 p_1) + ฯ_p_2 ยทp_2 (ฮฑ_3 - ฮฒ_3 p_2) + ฯ_p_3 ยทp_3 (ฮฑ_3 - ฮฒ_3 p_3),
ฯ_p_1 + ฯ_p_2 + ฯ_p_3 = 1,
ฯ_p_1, ฯ_p_2, ฯ_p_3 โฅ0,
or equivalently,
, tmaximize t
subject to t โค25 ฯ_p_1 + 16 ฯ_p_2 + 9 ฯ_p_3
t โค12.5 ฯ_p_1 + 17.6 ฯ_p_2 + 18.9 ฯ_p_3 ,
t โค13 ฯ_p_1 + 16 ฯ_p_2 + 16.2 ฯ_p_3,
ฯ_p_1 + ฯ_p_2 + ฯ_p_3 = 1,
ฯ_p_1, ฯ_p_2, ฯ_p_3 โฅ0,
for which the optimal objective value is Z^*_ = 16, which is the same as Z^*_. Thus, if the uniqueness condition on the minimizer of R(^*_, ยท) is relaxed, then it is possible for the problem to be randomization-proof.
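As a quick numerical sanity check on this instance (not part of the original analysis), the deterministic and randomized robust values can be recomputed by building the 3-by-3 revenue table and solving the small linear program above with scipy; all variable names below are illustrative only.

import numpy as np
from scipy.optimize import linprog

prices = np.array([5.0, 8.0, 9.0])                        # p_1, p_2, p_3
params = np.array([[10.0, 1.0], [3.0, 0.1], [3.6, 0.2]])  # (alpha, beta) scenarios

# Revenue table: R[k, j] = p_j * (alpha_k - beta_k * p_j)
R = np.array([[p * (a - b * p) for p in prices] for a, b in params])

# Deterministic robust value: best worst-case revenue over the three prices.
Z_det = R.min(axis=0).max()                               # 16.0, attained at p = 8

# Randomized robust value: maximize t s.t. t <= sum_j pi_j * R[k, j] for all k,
# with pi on the probability simplex.  Decision vector is (pi_1, pi_2, pi_3, t).
c = np.array([0.0, 0.0, 0.0, -1.0])                       # linprog minimizes, so use -t
A_ub = np.hstack([-R, np.ones((3, 1))])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(3),
              A_eq=np.array([[1.0, 1.0, 1.0, 0.0]]), b_eq=np.array([1.0]),
              bounds=[(0, None)] * 3 + [(None, None)], method="highs")
Z_rand = -res.fun                                          # also 16.0: randomization-proof here

Both values come out to 16, matching the discussion above.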
ยง.ยง Proof of Propositionย <ref>
Observe that the objective function in (<ref>) can be re-arranged as
max_โฮ_[I], โ{โ_i=1^I ฮผ_i (ฮฑ_i + log p_i - ฮฒ_i log p_i + โ_jโ iฮณ_i,jlog p_j) - โ_i=1^I ฮผ_i logฮผ_i }
= max_โฮ_[I]max_โ{โ_i = 1^I ฮผ_i ยทฮฑ_i + โ_i=1^I ฮผ_i ยท[ 1 - ฮฒ_i + โ_j โ iฮณ_j,i] ยทlog p_i - โ_i=1^I ฮผ_i logฮผ_i }
= max_โฮ_[I][ โ_i = 1^I ฮผ_i ยทฮฑ_i + โ_i=1^I max_p_i โ_i{ฮผ_i ยท[ 1 - ฮฒ_i + โ_j โ iฮณ_j,i] ยทlog p_i } - โ_i=1^I ฮผ_i logฮผ_i ]
where the first step follows by algebra, and the second by the separability of the objective in p_1,โฆ, p_I and Assumptionย <ref> (since the price set is a Cartesian product and the objective is separable, each product's price can be optimized independently). Thus, when the weight vector is fixed, the optimal value of p_i for the above objective depends only on the sign of (1 - ฮฒ_i + โ_j โ iฮณ_j,i). If this coefficient is positive, then since log p_i is increasing in p_i, it is optimal to set p'_i to the largest price in product i's price set; if it is negative, it is optimal to set p'_i to the smallest such price. It thus follows that whenever a weight vector and a price vector are jointly optimal, the same weight vector paired with the extreme-price vector ' is also optimal.
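Read this way, the proposition is simply a rule for snapping each price to an endpoint of its own price set once the demand-sensitivity pattern is known. The sketch below (with invented parameter values, purely for illustration) applies that rule for the log-log model.

import numpy as np

def snap_to_extremes(beta, gamma, price_sets):
    # Coefficient of log p_i in the reformulated objective is
    # mu_i * (1 - beta_i + sum_{j != i} gamma_{j,i}); since mu_i >= 0, its sign
    # is decided by the bracketed term, so p_i is pushed to an extreme price.
    I = len(beta)
    p = np.empty(I)
    for i in range(I):
        coef = 1.0 - beta[i] + sum(gamma[j][i] for j in range(I) if j != i)
        p[i] = max(price_sets[i]) if coef > 0 else min(price_sets[i])
    return p

# Hypothetical two-product instance.
beta = [1.4, 0.7]
gamma = [[0.0, 0.2], [0.1, 0.0]]
price_sets = [[4.0, 6.0, 8.0], [5.0, 7.0, 9.0]]
print(snap_to_extremes(beta, gamma, price_sets))   # [4., 9.]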
ยง DETERMINISTIC ROBUST PRICE OPTIMIZATION FOR FINITE , CONVEX UNDER THE SEMI-LOG AND LOG-LOG DEMAND MODELS
In this section, we describe how to formulate the DRPO problem as a mixed-integer exponential cone program for the semi-log and log-log demand models. In both cases, we assume that is a convex uncertainty set, and that Assumptionย <ref> on the structure of holds.
ยง.ยง Semi-log model
For the semi-log demand model, we can write the DRPO problem as
max_โmin_โ R(, )
= max_โmin_โโ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j .
To accomplish our reformulation, we will make use of the fact that the optimal solution set of the DRPO problem is unchanged upon log-transformation, that is,
max_โmin_โ R(, ) = max_โmin_โlog R(, ).
Thus, instead of problemย (<ref>), we can focus on the following problem:
max_โmin_โlog( โ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j )
= max_โmin_โlog( โ_i=1^I e^log p_i + ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j )
Here, we can again use the log-sum-exp biconjugate technique to reformulate the objective function in the following way:
log( โ_i=1^I e^log p_i + ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j )
= max_โฮ_[I]{โ_i=1^I ฮผ_i ยท (log p_i + ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j) - โ_i=1^I ฮผ_i logฮผ_i }.
Thus, the overall problem becomes the following max-min-max problem:
max_โmin_โmax_โฮ_[I]{โ_i=1^I ฮผ_i ยท (log p_i + ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j) - โ_i=1^I ฮผ_i logฮผ_i }.
Here, we observe that the objective function is linear in = (, , ), and is concave in ; additionally, the feasible region of is assumed to be convex, and the feasible region of is convex and compact (being just the (|I|-1)-dimensional unit simplex). Therefore, we can use Sion's minimax theorem to interchange the minimization over and the maximization over , which gives us
max_โmin_โmax_โฮ_[I]{โ_i=1^I ฮผ_i ยท (log p_i + ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j) - โ_i=1^I ฮผ_i logฮผ_i }
= max_โmax_โฮ_[I]min_โ{โ_i=1^I ฮผ_i ยท (log p_i + ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j) - โ_i=1^I ฮผ_i logฮผ_i }
= max_โ, โฮ_[I]min_โ{โ_i=1^I ฮผ_i ยท (log p_i + ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j) - โ_i=1^I ฮผ_i logฮผ_i }
Under Assumptionย <ref>, this final problem can then be reformulated as a robust mixed-integer exponential cone program, just as in Sectionย <ref>. We introduce the same binary decision variable x_i,t, which is 1 if product i is offered at price t โ_i, and 0 otherwise, and we use w_i,j,t to denote the linearization of ฮผ_i ยท x_j,t for i, j โ [I], t โ_j. This gives rise to the following program:
, , maximize min_โ { โ_i=1^I ฮผ_i ฮฑ_i + โ_i=1^I ( โ_t โ_i logt ยทw_i,i,t - ฮฒ_i ยทโ_t โ_i t ยทw_i,i,t + โ_jโ i ฮณ_i,j โ_t โ_j t ยทw_i,j,t ) - โ_i=1^I ฮผ_i logฮผ_i }
subject to โ_t โ_j w_i,j,t = ฮผ_i, โ i โ[I], j โ[I],
โ_i=1^I w_i,j,t = x_j,t, โ j โ[I], t โ_j,
โ_i=1^I ฮผ_i = 1,
โ_t โ_i x_i,t = 1, โ i โ[I],
w_i,j,t โฅ0, โ i โ[I], j โ[I], t โ_j,
x_i,t โ{0,1}, โ i โ[I], t โ_i,
ฮผ_i โฅ0, โ i โ[I].
Note that the feasible region of this problem is identical to that of problemย (<ref>), which appeared in our discussion of the separation problem for the RRPO problem when is convex and is finite. The difference here is that the objective is now a robust objective; it is the worst-case value of the objective of problemย (<ref>), taken over the convex uncertainty set . Depending on the structure of , the overall problem can remain in the mixed-integer convex program problem class. For example, if is a polyhedron, then one can use LP duality to reformulate the robust problem exactly by introducing additional variables and constraints, as is normally done in robust optimization <cit.>. Similarly, if is a second-order cone representable set, then one can again use conic duality to reformulate the problem. Alternatively, one can also consider solving the problem using a cutting plane method/delayed constraint generation approach, whereby one reformulates the program in epigraph form and then solves the inner minimization over to identify new cuts to add <cit.>.
ยง.ยง Log-log model
For the log-log demand model, we can write the DRPO problem as
max_โmin_โ R(, )
= max_โmin_โโ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i log p_i + โ_j โ iฮณ_i,jlog p_j
= max_โmin_โโ_i=1^I e^ฮฑ_i + (1 - ฮฒ_i) log p_i + โ_j โ iฮณ_i,jlog p_j
Again, as with the semi-log model, solving the above problem is equivalent to solving the same problem with a log-transformed objective. Taking this log-transformed problem as our starting point, replacing the log-sum-exp function with its biconjugate and applying Sion's minimax theorem gives us:
max_โmin_โlog( โ_i=1^I e^ฮฑ_i + (1 - ฮฒ_i) log p_i + โ_j โ iฮณ_i,jlog p_j )
= max_โmin_โmax_โฮ_[I]{โ_i=1^I [ ฮฑ_i ฮผ_i + โ_i=1^I (1 - ฮฒ_i) ฮผ_i ยทlog p_i + โ_j โ iฮณ_i,jฮผ_i ยทlog p_j ] - โ_i=1^I ฮผ_i logฮผ_i }
= max_โ, โฮ_[I]min_โ{โ_i=1^I [ ฮฑ_i ฮผ_i + โ_i=1^I (1 - ฮฒ_i) ฮผ_i ยทlog p_i + โ_j โ iฮณ_i,jฮผ_i ยทlog p_j ] - โ_i=1^I ฮผ_i logฮผ_i }.
Under Assumptionย <ref>, this last problem can be re-written as the following robust version of problemย (<ref>), with the decision variables defined identically:
, , maximize min_โ { โ_i=1^I ฮผ_i ฮฑ_i + โ_i=1^I (โ_t โ_i logt ยทw_i,i,t - ฮฒ_i ยทโ_t โ_i logt ยทw_i,i,t + โ_jโ i ฮณ_i,j โ_t โ_j logt ยทw_i,j,t ) - โ_i=1^I ฮผ_i logฮผ_i }
subject to constraintsย (<ref>) โ (<ref>).
Again, this problem has exactly the same feasible region as the log-log separation problemย (<ref>) and the semi-log separation problemย (<ref>). Additionally, just as with the deterministic robust problemย (<ref>) for the semi-log model, this problem can be further reformulated by exploiting the structure of , or otherwise one can design a cutting plane method that generates violated uncertain parameter vectors โ on the fly.
ยง SOLUTION METHOD FOR FINITE , FINITE
The second solution approach we consider is for the case where both and are finite sets. In particular, we assume that the uncertainty set is a binary representable set. For fixed positive integers m and n, we let be defined as
= { = |โค, โ{0,1}^n },
where is an m-dimensional real vector, is an m-by-n real matrix, and is a d-by-n real matrix, where d is the dimension of the uncertain parameter vector .
Recall that when is finite, then the RRPO problem is
Z^*_ = max_โฮ_min_โโ_โฯ_ R(, ).
We can transform this problem into a dual problem where the outer problem is to optimize a distribution over uncertain parameter vectors, and the inner problem is to optimize over the price vector, as follows:
Z^*_ = max_โฮ_min_โโ_โฯ_ R(, )
= max_โฮ_min_โฮ_โ_โโ_โฯ_ฮป_ R(, )
= min_โฮ_max_โฮ_โ_โโ_โฯ_ฮป_ R(, )
= min_โฮ_max_โโ_โฮป_ R(, ),
where the first equality follows because minimization of a function of over the finite set is the same as minimizing the expected value of that function over all probability mass functions supported on ; the second equality follows by linear programming duality; and the final equality follows because maximization of a function of over is the same as maximizing the expected value of that function over all probability mass functions supported on . We refer to problemย (<ref>) as the primal problem and (<ref>) as the dual problem.
Consider now the restricted primal problem, where we replace with a subset โ in problemย (<ref>), and the restricted dual problem, where we replace with a subset โ in problemย (<ref>). Let us denote the objective values of these two problems with Z_P, and Z_D,, respectively. These two problems are:
Z_P, = max_โฮ_min_โโ_โฯ_ R(, ),
Z_D, = min_โฮ_max_โโ_โฮป_ R(, ).
Observe that Z_P, and Z_D, bound Z^*_ from below and above, that is,
Z_P, โค Z^*_โค Z_D, .
In the above, the justification for the first inequality is because maximizing over distributions supported on the smaller set of price vectors cannot result in a higher worst-case objective than solving the full primal problem with , which gives the value Z^*_. The second inequality similarly follows because minimizing over distributions supported on the smaller set of uncertainty realizations cannot result in a lower worst-case objective than solving the full dual problem with , which gives Z^*_.
The idea of double column generation is as follows. Let us pick some subset of price vectors โ and some subset of uncertainty realizations โ. Observe that the restricted primal problemย (<ref>) for can be written in epigraph form as
, tmaximize t
subject to t โคโ_โ ฯ_ R(, ), โ โ,
โ_โ ฯ_ = 1,
ฯ_ โฅ0, โ โ.
This problem has a huge number of constraints (one for each โ). However, we can solve it using delayed constraint generation, starting from the set . Upon solving it in this way, at termination we will have a subset ' of uncertainty realizations from that were found during the constraint generation process. We update to be equal to '.
With this (updated) subset in hand, we now solve the restricted dual problemย (<ref>) for . This problem can be written in epigraph form as
, ฯminimize ฯ
subject to ฯโฅโ_โ ฮป_ R(, ), โ โ,
โ_โ ฮป_ = 1,
ฮป_ โฅ0, โ โ.
This problem also has a huge number of constraints, but again we can solve it using delayed constraint generation, with the initial subset of price vectors set to . At termination, we will have a new subset ' of price vectors, which will contain the original set of price vectors in . We then update to ', and go back to solving the restricted primal problem. The process then repeats: after solving the restricted primal, we will have a new (bigger) ; we then solve the restricted dual, after which we have a new (bigger) ; we then go back to the restricted primal, and so on. After each iteration of solving the restricted primal and restricted dual, the set expands and the set expands. Thus, the bounds Z_P, and Z_D, get closer and closer to Z^*_. The algorithm can then be terminated either when Z_P, = Z_D,, which would imply that both restricted primal and restricted dual objective values exactly coincide with Z^*_; or otherwise, one can terminate when Z_D, - Z_P, < ฯต, where ฯต > 0 is a user specified tolerance.
The overall algorithmic approach is formalized as Algorithmย <ref>. This algorithm invokes two procedures, (Algorithmย <ref>) and (Algorithmย <ref>), which are delayed constraint generation algorithms for solving the restricted primal and dual problems respectively. We note that Algorithmย <ref> is an adaptation of the double column generation algorithm of <cit.> for the randomized robust assortment optimization problem, which is itself adapted from the double column generation algorithm of <cit.> for solving mixed-integer distributionally robust optimization problems. The proof of correctness of this procedure follows similarly to <cit.>, and is omitted for brevity. The novelty in our approach lies in how we handle the separation problems which are at the heart of and , which we discuss next.
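To make the flow of the alternating scheme concrete, the following is a schematic Python sketch of the outer loop; it is not the paper's Algorithms <ref> and <ref>. It assumes that both the price set and the uncertainty set are small finite collections passed in as lists, that the revenue R is given as a plain function, and that the separation problems are solved by brute-force enumeration rather than by the mixed-integer programs developed below.

import numpy as np
from scipy.optimize import linprog

def lp_max_min(M):
    # max over the simplex of min over rows of sum_j pi_j * M[k, j]; returns (pi, value).
    m, n = M.shape
    c = np.r_[np.zeros(n), -1.0]                      # maximize t  <=>  minimize -t
    A_ub = np.hstack([-M, np.ones((m, 1))])           # t <= sum_j pi_j * M[k, j]
    A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)], method="highs")
    return res.x[:n], -res.fun

def double_column_generation(P, U, R, tol=1e-8):
    P_hat, U_hat = [P[0]], [U[0]]
    while True:
        # Restricted primal over P_hat: grow U_hat by delayed constraint generation.
        while True:
            M = np.array([[R(p, u) for p in P_hat] for u in U_hat])
            pi, Z_P = lp_max_min(M)
            worst = min(U, key=lambda u: sum(w * R(p, u) for w, p in zip(pi, P_hat)))
            if sum(w * R(p, worst) for w, p in zip(pi, P_hat)) < Z_P - tol:
                U_hat.append(worst)
            else:
                break
        # Restricted dual over U_hat: grow P_hat analogously (negate to reuse lp_max_min).
        while True:
            M = np.array([[-R(p, u) for u in U_hat] for p in P_hat])
            lam, neg_Z_D = lp_max_min(M)
            Z_D = -neg_Z_D
            best = max(P, key=lambda p: sum(w * R(p, u) for w, u in zip(lam, U_hat)))
            if sum(w * R(best, u) for w, u in zip(lam, U_hat)) > Z_D + tol:
                P_hat.append(best)
            else:
                break
        if Z_D - Z_P < tol:                           # lower and upper bounds have met
            return Z_P, P_hat, U_hat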
Note that the doubly restricted primal and dual problemsย (<ref>) and (<ref>) solved in and are both linear programs, and can be thus be solved easily. The principal difficulty in these procedures comes from the primal and dual separation problemsย (<ref>) and (<ref>), which require optimizing over a price vector โ and an uncertain parameter vector โ respectively. In the following sections, we discuss how these two separation problems can be tackled for the linear, semi-log and log-log demand models. Note that in all three sections, we continue to make Assumptionย <ref>, which states that can be written as the Cartesian product of finite sets of prices for each product, i.e., = _1 รโฆร_I, where _1, โฆ, _I are finite sets.
ยง.ยง Primal and dual subproblems for linear demand model
For the linear demand model, the primal separation problem is
min_โโ_โฯ_ยท[ โ_i=1^I p_i ยท (ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j ) ].
Note that this objective function is linear in = (, , ). Therefore, the whole problem can be expressed as
minimize โ_โ ฯ_ ยท[ โ_i=1^I p_i ยท(ฮฑ_i - ฮฒ_i p_i + โ_j โ i ฮณ_i,j p_j ) ]
subject to = ,
โค,
โ{0,1}^n,
which is a mixed-integer linear program.
The dual separation problem is
max_โโ_โฮป_ยท[ โ_i=1^I p_i (ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j ) ].
By introducing the same binary variables as in the separation problemย (<ref>) (the linear demand separation problem for the convex setting), we obtain the following mixed-integer linear program:
, maximize โ_โ ฮป_ ยท[ โ_i=1^I โ_t โ_i ฮฑ_i ยทt ยทx_i,t + โ_i=1^I โ_t โ_i t ยทฮฒ_i ยทx_i,t + โ_i=1^I โ_ j โ i โ_t_1 โ_i โ_t_2 โ_j ฮณ_i,j ยทt_1 ยทt_2 ยทy_i,j,t_1,t_2 ]
subject to โ_t โ_i x_i,t = 1, โ i โ[I],
โ_t_2 โ_j y_i,j,t_1, t_2 = x_i,t_1, โ i, j โ[I], j โ i, t_2 โ_j,
โ_t_1 โ_i y_i,j,t_1, t_2 = x_i,t_1, โ i, j โ[I], j โ i, t_1 โ_i,
x_i,t โ{0,1}, โ i โ[I], t โ_i,
x_i,j,t_1,t_2 โ{0,1}, โ i, jโ[I], i โ j, t_1 โ_i, t_2 โ_j.
Importantly, note that the size of this problem does not scale with the number of uncertainty realizations inside ; the form of this problem is equivalent to problemย (<ref>) where is replaced with โ_โฮป_ยท (the โaverageโ uncertain demand parameter). As we will see in the next couple of sections, the same will not be true for the semi-log and log-log demand models.
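The observation about the averaged parameter is easy to verify numerically: the linear-demand revenue is linear in the demand parameters, so the lambda-weighted objective collapses to a single revenue evaluation at the lambda-averaged parameter. A small check with randomly generated, purely illustrative data:

import numpy as np

rng = np.random.default_rng(0)
I, K = 3, 4                                    # products, uncertainty realizations
alphas = rng.uniform(5, 10, size=(K, I))
betas = rng.uniform(0.5, 1.5, size=(K, I))
gammas = rng.uniform(-0.2, 0.2, size=(K, I, I))
for k in range(K):
    np.fill_diagonal(gammas[k], 0.0)
lam = rng.dirichlet(np.ones(K))                # weights on the K realizations
p = rng.uniform(1, 5, size=I)

def revenue(p, a, b, G):
    # sum_i p_i * (alpha_i - beta_i p_i + sum_{j != i} gamma_ij p_j)
    return float(p @ (a - b * p + G @ p))

weighted = sum(lam[k] * revenue(p, alphas[k], betas[k], gammas[k]) for k in range(K))
averaged = revenue(p, lam @ alphas, lam @ betas, np.tensordot(lam, gammas, axes=1))
print(np.isclose(weighted, averaged))          # True: linear in the demand parameters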
ยง.ยง Primal and dual subproblems for semi-log demand model
For the semi-log demand model, the primal separation problem is
min_โโ_โฯ_ยท[ โ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j ].
Note that this objective function is convex in = (, , ), because the weights ฯ_ and p_i for a given โ and i โ [I] are nonnegative, and because the function e^ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j is convex in = (, , ). Thus, the whole problem can be expressed as
, minimize โ_โ ฯ_ ยท[ โ_i=1^I p_i ยทe^ฮฑ_i - ฮฒ_i p_i + โ_j โ i ฮณ_i,j p_j ]
subject to = ,
โค,
โ{0,1}^n,
which can be re-written as a mixed-integer exponential cone program.
The dual separation problem is
max_โโ_โฮป_ยท[ โ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j ].
The objective function of this problem is in general not concave in . However, just as in Sectionย <ref>, the related problem of optimizing the logarithm of this objective, which is
max_โlog[ โ_โฮป_ยท[ โ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j ] ]
= max_โlog[ โ_โโ_i=1^I e^logฮป_ + log p_i + ฮฑ_i - ฮฒ_i p_i + โ_j โ iฮณ_i,j p_j ]
can be reformulated as a mixed-integer exponential cone program using the same biconjugate-based technique in Sectionย <ref>. In particular, when Assumptionย <ref> holds, then problemย (<ref>) is equivalent to
, , maximize โ_โ โ_i = 1^I ฮผ_, i ยท( logฮป_ + ฮฑ_i )
+ โ_โ โ_i = 1^Iโ_t โ_i logt w_,i,i,t
+ โ_โ โ_i = 1^I โ_t โ_i (- ฮฒ_i) ยทt ยทw_,i,i,t
+ โ_โ โ_i = 1^I โ_j โ i ฮณ_i,j ยทโ_t โ_j t ยทw_,i,j,t
- โ_โ โ_i=1^I ฮผ_,i logฮผ_,i
subject to โ_โ โ_i = 1^I ฮผ_,i = 1,
โ_t โ_j w_,i,j,t = ฮผ_,i, โ โ, i, j โ[I],
โ_โ โ_i = 1^I w_, i, j, t = x_j,t, โ j โ[I], t โ_j,
โ_t โ_j x_j,t = 1, โj โ[I],
w_,i,j,t โฅ0, โ โ, i, j โ[I], t โ_j,
ฮผ_,i โฅ0, โ โ, i โ[I],
x_j,t โ{0,1}, โ j โ[I], t โ_j,
where x_j,t is a binary decision variable that is 1 if product j is offered at price t โ_j, and 0 otherwise; ฮผ_,i is a nonnegative decision variable introduced as part of the biconjugate-based reformulation; and w_,i,j,t is a decision variable that represents the linearization of ฮผ_,iยท x_j,t for all โ, i,j โ[I], and t โ_j.
As with problemย (<ref>), this problem can be expressed as a mixed-integer exponential cone program. One notable difference between formulationย (<ref>) and formulationย (<ref>) from earlier is that the number of decision variables and constraints is larger because the decision variable ฮผ_,i is introduced for every combination of an uncertainty realization in and each product i; thus, represents a probability mass function over the set ร [I].
ยง.ยง Primal and dual subproblems for log-log demand model
For the log-log demand model, the primal separation problem is
min_โโ_โฯ_ยทโ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i log p_i + โ_j โ iฮณ_i,jlog p_j .
Note that the objective function is convex in = (, , ); it is the nonnegative weighted combination of terms of the form e^ฮฑ_i - ฮฒ_i log p_i + โ_j โ iฮณ_i,jlog p_j, each of which are convex in (, , ). Thus, the overall problem, which can be stated as
minimize โ_โ ฯ_ ยทโ_i=1^I p_i ยทe^ ฮฑ_i - ฮฒ_i logp_i + โ_j โ i ฮณ_i,j logp_j
subject to = ,
โค,
โ{0,1}^n,
is a mixed-integer convex program, and can be expressed as a mixed-integer exponential cone program.
The dual separation problem is
max_โโ_โฮป_ยท[ โ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i log p_i + โ_j โ iฮณ_i,jlog p_j ].
The objective function of this problem is in general not concave in . However, following the same method as in Sectionย <ref>, we can reformulate the related problem of maximizing the logarithm, which is
max_โlog( โ_โฮป_ยท[ โ_i=1^I p_i ยท e^ฮฑ_i - ฮฒ_i log p_i + โ_j โ iฮณ_i,jlog p_j ] )
= max_โlog( โ_โโ_i=1^I e^logฮป_ + log p_i + ฮฑ_i - ฮฒ_i log p_i + โ_j โ iฮณ_i,jlog p_j )
as a mixed-integer exponential cone program. Under Assumptionย <ref>, the resulting formulation is
, , maximize โ_โ โ_i = 1^I ฮผ_, i ยท( logฮป_ + ฮฑ_i )
+ โ_โ โ_i = 1^Iโ_t โ_i (1 - ฮฒ_i) logt ยทw_,i,i,t
+ โ_โ โ_i = 1^I โ_j โ i ฮณ_i,j ยทโ_t โ_j logt ยทw_,i,j,t
- โ_โ โ_i=1^I ฮผ_,i logฮผ_,i
subject to โ_โ โ_i = 1^I ฮผ_,i = 1,
โ_t โ_j w_,i,j,t = ฮผ_,i, โ โ, i, jโ[I],
โ_โ โ_i = 1^I w_, i, j, t = x_j,t, โ j โ[I], t โ_j,
โ_t โ_j x_j,t = 1, โj โ[I],
w_,i,j,t โฅ0, โ โ, i, j โ[I], t โ_j,
ฮผ_,i โฅ0, โ โ, i โ[I],
x_j,t โ{0,1}, โ j โ[I], t โ_j,
where the decision variables have the same meaning as those in formulationย (<ref>).
ยง ADDITIONAL NUMERICAL RESULTS
ยง.ยง Estimation results for data set
Tablesย <ref> and <ref> display the point estimates of , and for the semi-log and log-log demand models for the data set.
ยง.ยง Performance results for data set
Tablesย <ref> and <ref> below compare the performance of the nominal, deterministic robust and randomized robust pricing solutions under a discrete budget uncertainty set for the data set.
http://arxiv.org/abs/2306.07950v1 | 20230613174528 | Electron Localization in Rydberg States | ["Jan Mostowski", "Joanna Pietraszewicz"] | quant-ph | ["quant-ph"]
Electron Localization in Rydberg States
J. Mostowski^* and J. Pietraszewicz
Institute of Physics, Polish Academy of Sciences, al. Lotników 32/46, PL-02668 Warsaw, Poland
We discuss the possibility of localizing an electron in a highly excited Rydberg state. The second-order correlation of emitted photons is the tool for the determination of electron position. This second-order correlation of emitted radiation and, therefore, the correlation of operators describing the acceleration of the electron allows for a partial localization of the electron in its orbit. The correlation function is found by approximating the transition matrix elements by their values in the classical limit. It is shown that the second-order correlation, depending on two times, is a function of the time difference and is a periodic function of this argument with the period equal to the period of the corresponding classical motion. The function has sharp maxima corresponding to large electron acceleration in the vicinity of the "perihelion." This allows the localization of the electron in its consecutive approach to the perihelion point.
Keywords: Rydberg state, radiation, second order correlation, localization
*[email protected]
ยง INTRODUCTION
The measurement process, since the early days of quantum physics, has been one of the central issues in many attempts to understand the relation between the classical and quantum description of physical systemsย [1] (for aย more recent analysis, see, e.g.,ย [2]). The most common theory of quantum measurementย [3โ5]ย assumes that the quantum system is coupled to aย meter. The interaction between them entangles the two systems. The measurement, described as aย projection onto the state of the meter, provides information on the state of the system. Continuous measurements of quantum systems treated as stochastic processes were first considered inย [6] in the context of photon counting. The formalism based on path integrals was initiated inย [7] and further developed inย [4] (see alsoย [8]).
The classical motion of the electron bound in aย Coulomb field is periodic. The wavefunction describing the bound electron in aย stationary state does not show any time-dependent features. Time dependence, and hence classical features of wavefunctions, can be obtained for non-stationary states, linear combinations of energy eigenstates with different energies. Such aย construction is well known in the case of aย harmonic oscillator, and the most classical states are well-known coherent statesย [9] (see also, e.g.,ย [10]). The corresponding time-dependent states in the case of Rydberg states were introduced inย [11] (see alsoย [12โ15]).
Another point of view was presented inย [16], where it is pointed out that when aย measurement breaks the time-translational symmetry of aย stationary state, aย periodic motion of the system is initiated. This approach was further elaborated inย [17,ย 18].
The classical limit of quantum mechanics is still aย vivid subject of investigation (see, e.g.,ย [19]). One of the recently discussed problems in this area relates to the successive measurements of particle position and detection of the trajectory. Most of the interest has been limited to free particles, and not much has been done in the case of bound states.
Quantum description of the hydrogen atom is well known. All energies and wavefunctions of stationary states are well known. The classical limit is approached in the limit of large quantum numbers โ the wavefunction should be related to classical trajectories. This relation has been discussed in many papers. Both time-dependent states, analogs of harmonic oscillator coherent states, and stationary states in the limit of high excitation were shown to exhibit classical features.
In this paper, we will present yet another aspect of the classical limit in the case of Rydberg states. Namely, we will use the radiation emitted from the highly exited state to determine the electron position as aย function of time. Detection of radiation at aย given time breaks the time-translational symmetry and allows observation of the time dependence of subsequent evolution. This approach provides aย partial but straightforward way of estimating the elements of the time-dependent classical trajectory hidden in the stationary wavefunction.
Radiation from a quantum system, such as a hydrogen atom, is usually studied in the frequency domain. The spectrum consists of several lines. Measurement of the spectrum is not the only possibility โ time dependence of radiation can be studied as well. The time dependence of the spontaneous emission from a highly excited vibrational state of a diatomic molecule was used to determine the time-dependent relative position of the constituents. This allowed us to demonstrate the time dependence of various states, such as coherent states and others, e.g., the Schrödinger cat state [20]. Let us note that in the case of Rydberg states with the principal quantum number nโ100, the characteristic frequency of radiation is ฮฝโ 10^10 Hz, so the time dependence of the radiation for times smaller than 1/ฮฝ is within experimental reach. Radiation observed for such small times of the order of 1/ฮฝ exhibits different features as compared to the long-time measurements. This and the relation to the position measurement will be discussed below.
ยง A SIMPLE CASE โ HARMONIC OSCILLATOR
We will begin the discussion of electromagnetic radiation in the time domain and its relation to the measurement of the electron position with aย simple example of aย harmonic oscillator. The charged particle oscillates with the frequency ฯ along the x axis; its motion is given by x_cl(t)=Acos(ฯ t). This electron is aย source of electromagnetic radiation. Weย will find the x component of the electric field in the far zone along the y axis (to simplify the geometry). Weย have, in the dipole approximation,
E_x(R,t) = -e/4ฯฯต_0 R ฯ^2 x_cl(t_ret),
1
where t_ret=t-|R|/ c is the retarded time, and c is the speed of light. We have skipped the R dependence of the field โ it is just like in classical electrodynamics, namely E_x โผ R^-1. It follows fromย (1) that the electric field oscillates with the frequencyย ฯ. This classical treatment does not take into account radiation damping, thus it is valid only for aย short time, shorter than the characteristic damping time.
We will now discuss anย emission of radiation, taking into account the quantum nature of the oscillator. We will concentrate on the highly excited states of the oscillator and hence on the classical limit.
The position of anย oscillating particle is described by the position operator x. It can be expressed in terms of the lowering and raising operators a and a^โ , respectively, as follows
x=x_0 (a+a^โ )/โ(2),
2
where x_0=โ(ฤง/Mฯ), ฤง is the Planck constant, and M denotes the mass of the oscillating particle. The component E_x of the electric field operator (the radiated part) in the dipole approximation is given by
E_x(R,t)= -e/4ฯฯต_0 R ฯ^2 x(t_ret),
3
just like in the classical case. This time, however, the electric field is anย operator, and we will find the expectation values of this operator. We assume that at time t=0, the oscillator is in the energy eigenstate |nโฉ with energy E_n=ฤงฯ n. Thus the expectation value of the x operator, and hence of the E_x(r,t) operator, is equal to zero. The first-order correlation function becomes
โจE_x(R,t_2) E_x(R,t_1)โฉ=1/2 e^2/(4ฯฯต_0 R)^2 ฯ^4x_0^2
ร[n ^ฯ(t_2-t_1)+(n+1/2) ^-ฯ(t_2-t_1) ].
4
In the case of the highly excited state, i.e., when nโซ 1, we can approximate โ(n(n+1))โ nโโ(n(n-1)). Then we get
โจE_x(R,t_2)E_x(R,t_1)โฉ=
e^2/(4ฯฯต_0 R)^2 ฯ^4x_0^2 n cos(ฯ(t_2-t_1)),
5
just as in the classical case. The average intensity of radiation given by the first correlation function at t_2=t_1 is aย constant. The first correlation function for t_2>t_1 gives the spectrum of radiation and, in this case, consists of one line only.
The second-order correlation function is more interesting. For nโซ 1, we get
*-2mm โจE_x^2(R,t_2)E_x^2(R,t_1)โฉ=
n^2[1+cos(2ฯ(t_2-t_1))].
6
The second correlation function oscillates with the frequency 2ฯ. This tells us that the maxima of radiation occur every half period of the electron motion. Thus, the second correlation function can be used to determine the position of the oscillating particle in the vicinity of aย turning point. The high intensity is due to the large acceleration of the oscillating charge and this takes place when the electron is close to one of the turning points. Thus, if high intensity has been detected at t_1, then the electron will reach another turning point half the period later, and the intensity will be high once more. Thus, the time dependence of the second correlation function provides information about the motion of the electron. The information is not complete, as the radiation does not distinguish between the two turning points. It is worth noting that the correlation function allows the detection of the particle close to the turning point in spite of the dipole approximation.
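In purely classical terms this can be visualized directly from (1): the instantaneous intensity is proportional to E_x^2, i.e. to cos^2(omega t) = (1 + cos(2 omega t))/2, which peaks twice per period, exactly when the charge sits at a turning point. A minimal numerical illustration (variable names are ours):

import numpy as np

omega, A = 1.0, 1.0
t = np.linspace(0.0, 4.0 * np.pi, 1000)     # two oscillation periods
x = A * np.cos(omega * t)                   # classical trajectory
intensity = (omega**2 * x)**2               # E_x^2 up to constant prefactors
# intensity equals 0.5 * A**2 * omega**4 * (1 + np.cos(2 * omega * t)):
# maxima occur where |x| = A, i.e. at the turning points.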
ยง CLASSICAL RADIATION FROM KEPLER ORBIT
Before we discuss radiation from the Rydberg states, we will give aย classical description of motion in the Coulomb fieldย [21]. If the motion is in the xy plane, the coordinates x and y as functions of time are given by
x(t)=a [cos(ฮพ(t)) - ฯต],
y(t)=a โ(1-ฯต^2) sin(ฮพ(t)),
7
where
ฯt+ฯ=ฮพ(t)-ฯตsin(ฮพ(t)),
8
where ฯ is anย arbitrary phase. The radial variable r=โ(x^2+y^2) can also be expressed as aย function of time
r=a[1-ฯต cos(ฮพ(t))] .
9
The parameters a, ฯ, and ฯต characterize the trajectory. They can be related to energy and angular momentum in the standard wayย [21].
We will also need more general trajectories that differ by anย orientation in the plane of the motion described by the phase ฯ and by the phase of the motion, ฯ. Thus we define
X(t)=x(t)cos(ฯ)+y(t)sin(ฯ),
Y(t)=-x(t)sin(ฯ)+y(t)cos(ฯ),
10
with ฯ t+ฯ=ฮพ(t)-ฯตsin(ฮพ(t)).
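For reference, the implicit relation (8) is Kepler's equation, which is straightforward to invert numerically. The sketch below (not part of the original text; written with ASCII names omega, phi, eps for the symbols above) recovers x(t) and y(t) by Newton iteration.

import numpy as np

def kepler_xi(mean_anomaly, eps, tol=1e-12):
    # Solve xi - eps*sin(xi) = mean_anomaly by Newton's method.
    xi = mean_anomaly + eps * np.sin(mean_anomaly)   # standard starting guess
    for _ in range(50):
        f = xi - eps * np.sin(xi) - mean_anomaly
        if np.max(np.abs(f)) < tol:
            break
        xi = xi - f / (1.0 - eps * np.cos(xi))
    return xi

def orbit_xy(t, a=1.0, eps=0.6, omega=1.0, phi=0.0):
    # Coordinates (7) of the classical Kepler orbit at times t.
    xi = kepler_xi(omega * t + phi, eps)
    x = a * (np.cos(xi) - eps)
    y = a * np.sqrt(1.0 - eps**2) * np.sin(xi)
    return x, y

t = np.linspace(0.0, 4.0 * np.pi, 400)      # two orbital periods for omega = 1
x, y = orbit_xy(t)
r = np.sqrt(x**2 + y**2)                    # r = a*(1 - eps*cos(xi)); smallest near the perihelion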
The classical description of the radiation of aย charge moving along such anย orbit is found to be in complete analogy to the harmonic oscillator case. We will use the dipole approximation since the size of the orbit is much smaller than the characteristic wavelengths of the emitted radiation. The electric field in the far zone is given by
E(R,t)=1/R nร[nรa(t_ret)],
11
where a is the acceleration, and n=R/|R|. Radiation damping is neglected, as in the previous section.
Also, the Fourier decomposition of the trajectory can be found (seeย [21]). Here we will give the Fourier decomposition of the x variable
x(t)=โ_kexp(k(ฯt+ฯ)) x_k ,
12
where
x_k=a/2 k[J_k-1(kฯต)-J_k+1(kฯต)], k0.
13
A similar formula holds for y(t). This will be used in the next section.
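Since the coefficients (13) involve only Bessel functions of integer order, they are easy to tabulate; a short illustrative sketch using scipy:

import numpy as np
from scipy.special import jv

def x_fourier_coeff(k, a, eps):
    # x_k from Eq. (13), valid for k != 0.
    return a / (2.0 * k) * (jv(k - 1, k * eps) - jv(k + 1, k * eps))

a, eps = 1.0, 0.6
coeffs = {k: x_fourier_coeff(k, a, eps) for k in range(1, 11)}
# The coefficients decay with increasing k, and more slowly for larger
# eccentricity, which is why strongly elongated orbits radiate many harmonics.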
ยง CLASSICAL LIMIT OF MATRIX ELEMENTS
From now on, we will use atomic units.
Consider the quantum description of an atom in a highly excited energy eigenstate. We label the states by standard quantum numbers: n โ principal quantum number, l โ angular momentum quantum number, and m โ magnetic quantum number. The energy E_n of this state depends on the principal quantum number n as E_n=-1/(2n^2). We will be interested only in states with m=l, thus we will skip the magnetic quantum number to avoid confusion. This means that the wavefunctions considered in this paper are well concentrated in the xy plane, which is perpendicular to the angular momentum. This can be seen from the explicit form of the spherical harmonics function |Y_l,l(ฮธ,ฯ)|^2โผsin^2l(ฮธ) that has a sharp maximum at ฮธ=ฯ/2 for large l. We will, therefore, not consider the wavefunction dependence along the z axis.
The expectation values of the radiated field depend on the matrix elements of the position operator between the quantum states of the atom, i.e., โจฯ_n,l,l| x |ฯ_n',l',l'โฉ, where x is the coordinate. In spherical coordinates, x=rsin(ฮธ)cos(ฯ), and aย similar expression is valid for the y coordinate, y=rsin(ฮธ)sin(ฯ). The wavefunctions ฯ_n,l,l(r,ฮธ,ฯ) = R_n,l(r)Y_l,l(ฮธ,ฯ) are the standard states of the hydrogen atom, with R_n,l(r) describing the radial part of the wavefunction and Y_l,l(ฮธ,ฯ) denotes the spherical harmonics. Because of selection rules, these matrix elements are different from zero only if l'=lยฑ 1.
The radial part of the matrix element of r^k (for any k), i.e.,
โซ_0^โr r^2+k R_n,l(r)R_n',l'(r) ,
14
can be found explicitly in terms of special functionsย [22]. In fact, the classical limit of this expression, valid for nโโ, lโโ with l/n =, has been found inย [23]. In this limit, (14) approaches the Fourier transform of the classical trajectory r_classical^k for the frequency ฯ=(E_n-E_n')/ฤง. The classical trajectory r(t) corresponds to the average energy E=12(E_n+E_n') and the eccentricity ฯต = โ(1-(l/n)^2). Thus, for the matrix element of r, we find for l'=lยฑ 1 that
โจn',l',l'|r|n,l,lโฉโa_0 n^2/2 (n-n')
ร[J_n-n'+1((n-n')ฯต)-J_n-n'-1((n-n')ฯต)],
15
where a_0 denotes the Bohr radius, and ฯต corresponds to the eccentricity of the classical orbit with energy and angular momentum equal to the average of the energies of the initial and final state. It should be noted that (15) is analogous toย (5) for the harmonic oscillator, where โ(n(n+1)) is replaced by n for large quantum numbersย n.
The transition elements for x can also be found
โจn',l+1,l+1|x|n,l,lโฉ+
โจn',l-1,l-1|x|n,l,lโฉ โx_n-n',
16
where x_n-n' is given by (13). These formulas allow describing the radiation from the Rydberg states using classical approximations.
The values of the matrix elements can be modeled classically by random trajectories. Consider then the trajectories
X(t)=x(t)cos(ฯ)+y(t)sin(ฯ),
Y(t)=-x(t)sin(ฯ)+y(t)cos(ฯ),
17
with ฯ t+ฯ=ฮพ-ฯตsin(ฮพ). The quantities ฯ and ฯ are random phases, with uniform distributions between 0 andย 2ฯ. In this case, the expectation values of the x and y operators are equal to the mean values of the classical quantities X and Y with the same values of energy and angular momentum.
ยง RADIATION FROM A RYDBERG STATE
In this and subsequent sections, we will use atomic units in the description of aย quantum state.
The electric field E(R,t) in the far field is given by the same formula as in the classical field, with the difference that the acceleration a is anย operator acting on the quantum state of the system consisting of anย electron and the photon vacuum. In the quantum case, also the electric field is anย operator. Thus, for the radiated part of the field, we get in the dipole approximation
E(R,t)=1/R nร[nรa(t_ret)],
18
where n is the unit vector in the direction of the observation point, n=R|R|.
In what follows, we will find the expectation values of the electric field, as well as the first and second correlation function. It should be noted that the radiation is weak, and therefore the measurement of light intensity in the classical sense is questionable. The expectation value of the electric field squared at aย given point should be understood as the photon counting rate.
We assume that at t=0, the state describes the photon vacuum and the atom is in the state ฯ_n,l,l. This requires matrix elements of the operators x and y and their second derivatives over time.
The first correlation function of the x component of the field radiated in the y direction is given by
โจE_x(R,t_2)E_x(R,t_1)โฉ=โจa_x(t_2,ret) a_x(t_1,ret)โฉ/(4ฯฯต_0 R)^2.
19
The expectation value of the product of accelerations will be found in the classical limit. First, we will linearize the energy in the vicinity of the initial state energy with the principal quantum numberย n_0. We get
E_nโ-1/2n_0^2+n-n_0/n_0^3.
20
This allows for the approximation of the expectation values of the acceleration operator a(t) by the expectation values of the r operator
โจn',l-1,l-1|a_x(t)|n,l,lโฉโ -(n-n')^2ฯ_0^2 exp(-(n-n')ฯ_0t) x_n-n' ,
21
with ฯ_0=1/n_0^3. Thus, for the two-time correlation function of acceleration in the state |n,l,lโฉ, the following can be found
โจa_x(t_2)a_x(t_1โฉ)=
โ_n'l'โจn,l,l|x|n'l'l'โฉโจn'l'l'|x|n,l,lโฉ รexp(ฯ(t_2-t_1)(n-n')).
22
The same can be expressed by the correlation of the classical trajectories
โจa_x(t_2)a_x(t_1)โฉโโซ ฯ/2ฯโซฯ/2ฯ ^2 X(t_2)/t_2^2^2X(t_1)/t_1^2.
23
This is aย good approximation for large n and l. The main point is that the matrix elements of the angular part
*-1mmโซ*-0.5mm ฮธ sin(ฮธ) ฯ Y_l,l(ฮธ,ฯ)sin(ฮธ)^ฯย Y_l-1,l-1(ฮธ,ฯ) ,
24
hence the matrix element of the position operator x depends only weakly on l for large l. The correlation function obtained above is shown in Fig. 1.
From the above considerations, it follows that the average intensity of radiation is proportional to the correlation function at t_1=t_2 and does not depend on time. The Fourier transform of the correlation function
โซt โจa_x(t)a_x(0)โฉ exp(kฯt)
25
determines the radiation spectrum. Thus the spectrum of radiation from aย Rydberg state can be approximated by the spectrum of radiation from the corresponding classical orbit.
ยง SECOND ORDER CORRELATION
In this section, we will discuss the second-order correlation function of the radiation originating from aย Rydberg state. This is given by
G(t_2,t_1)=โจE_x(t_1)E_x(t_2)E_x(t_2)E_x(t_1)โฉ.
26
The state is, as before, the photon vacuum and the Rydberg state of the atom. Expressing the electric field by the acceleration of anย electron in the atom, we get
G(t_2,t_1)=1/R^4exp(-2 n ฯ(t_1-t_2))
รโจa_x(t_1)a_x(t_2)a_x(t_2)a_x(t_1)โฉ.
27
Just as before, we insert a complete set of states |n,l,lโฉ between the a operators and apply the approximation of l independence of the matrix elements in the case of large l. This leads to the following representation of the correlation function
G(t_2,t_1)=1/R^4 โซฯ/2ฯโซdฯ/2ฯ
ร^2X(t_2)/t_2^2^2X(t_2)/t_2^2^2X(t_1)/t_1^2^2X(t_1)/t_1^2.
28
Integration over the angle ฯ can be done explicitly, whereas integration over the angle ฯ has to be done numerically.
This is our final result. It gives the second correlation function of radiation emitted by the atom in aย Rydberg state. The formula is approximated and valid for small time differences t_2-t_1 because it does not take radiation damping into account. It is valid only in the case of Rydberg states with large n and large l, with the maximal magnetic quantum number m=l.
An example of the second-order correlation function is shown in Fig.ย 2. One can notice very strong correlation of radiation for small times โ much smaller than the period of motion โ and the periodic behavior of the correlation.
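A direct way to evaluate (28) numerically within the same classical approximation is to average the product of accelerations over the two random phases. The sketch below (which assumes the orbit_xy helper defined in the earlier sketch and uses our own variable names) replaces the analytic angular integral by a plain double average and the second time derivative by central differences.

import numpy as np

def accel_X(t, sigma, phi, a=1.0, eps=0.6, omega=1.0, dt=1e-4):
    # Second derivative of X(t) = x cos(sigma) + y sin(sigma), by central differences.
    def X(tt):
        x, y = orbit_xy(tt, a=a, eps=eps, omega=omega, phi=phi)
        return x * np.cos(sigma) + y * np.sin(sigma)
    return (X(t + dt) - 2.0 * X(t) + X(t - dt)) / dt**2

def G2(t2, t1, n_phase=64):
    # Phase-averaged <a_x(t1) a_x(t2) a_x(t2) a_x(t1)>, cf. Eq. (28) without the 1/R^4 factor.
    nodes = 2.0 * np.pi * (np.arange(n_phase) + 0.5) / n_phase
    total = 0.0
    for sigma in nodes:
        for phi in nodes:
            a1 = accel_X(np.array([t1]), sigma, phi)[0]
            a2 = accel_X(np.array([t2]), sigma, phi)[0]
            total += (a1 * a2) ** 2
    return total / n_phase**2

# G2 depends only on t2 - t1 and shows sharp maxima whenever the time difference
# is a multiple of the orbital period, i.e. at the successive perihelion passages.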
ยง CONCLUSIONS
Electromagnetic radiation from anย atom in the Rydberg state can be used to partially localize the electron on the orbit. According to the classical view of radiation, the electron moves along anย elliptic orbit and emits radiation most efficiently when the acceleration is large. This happens when the electron is close to the nucleus. The quantum wavefunction ฯ(r,ฮธ,ฯ) describing the electron state does not indicate the time when the electron is close to the nucleus. Therefore the emitted radiation is time-dependent, and its period reflects the period of motion. The time-averaged intensity, as well as the spectrum of radiation, is constant in time (for aย relatively short time; radiation damping is not taken into account). The second correlation function, G(t_2,t_1), depends on the time difference t_2-t_1 and is aย periodic function of time, with the frequency of the classical electron motion.
In the quantum language, the atom is in aย highly excited Rydberg state with the principal quantum number n. The state is stationary, therefore, the average intensity of emitted radiation is constant in time. The spectrum is stationary since radiation damping is neglected, and consists of several narrow lines corresponding to the transition to lower energy states. The second correlation function, however, breaks the time translation symmetry, and this unravels the time evolution of radiation. Based on the measurement of radiation, we can reconstruct the motion of the electron.
The second correlation function was found in the classical approximation, however, its meaning is indeed purely quantum. The classical approximation means that transition matrix elements have been approximated by the corresponding classical expression. If exact expressions for the matrix elements had been used, the result would have been very similar. The calculations would have been numerically more complex.
We have to stress that electron localization is limited by the uncertainty principle. Thus, in the case of a state with orbital quantum number l, angular localization is possible only up to 2ฯ/l. While for the large l considered here this is not a strong limitation, it does play a significant role for l of the order of 1, even for states with large principal quantum numbers n.
Our results show that the correlation function is strongly time-dependent. This correlation function clearly shows that if aย strong and short impulse of radiation is detected, the next such pulse will come after one period of the corresponding classical motion, or in the quantum language, after time T=2ฯ/(E_n-E_n-1). This is due to the large acceleration of anย electron in the vicinity of the nucleus.
The first strong pulse localizes the electron at this point and breaks the time independence of the radiation. The second pulse comes after one period. Between the strong pulses, the radiation is much weaker because of the small acceleration. Thus the observation of the time dependence of radiation allows the localization of the Rydberg electron in the vicinity of the nucleus.
This method of localizing anย electron on the orbit is non-standard. The recent approach to quantum particle localization is based on successive measurements of aย single particle. Measurement means entangling the particle with another system โ aย pointer โ and then the measurement of the pointer state. In the present approach, the electromagnetic field serves as the pointer. The electron position is not measured directly โ remember the dipole approximation โ the electron acceleration is being measured. Obviously, the second-order correlation gives aย deeper insight into the dynamics than the average values of observables. Also, it provides some insight into the measurement process in quantum mechanics, due to which the difficult process of position measurement is replaced by aย standard measurement of radiation.
We have to point out that the approach described in this paper does not discuss the probabilities of single measurements, but rather it discusses averages such as a correlation function. Nevertheless, it is a possible way of detecting the motion of an electron along a trajectory.
JM dedicates this paper to Professor Iwo Białynicki-Birula, who taught me quantum mechanics and for many years guided me through quantum physics.
1J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton University, Princeton (NJ) 1955
2W.H. Zurek, Rev. Mod. Phys. 75, 715 (2003)10.1103/RevModPhys.75.715
3E.P. Wigner, Am. J. Phys. 31, 6 (1963)10.1119/1.1969254
4C.M. Caves, G.J. Milburn, Phys. Rev. A 36, 5543 (1987)10.1103/PhysRevA.36.5543
5A.J. Scott, G.J. Milburn, Phys. Rev. A 63, 042101 (2001)10.1103/PhysRevA.63.042101
6E.B. Davies, Commun. Math. Phys. 15, 277 (1969)
7M.B. Mensky, Phys. Rev. D. 20, 384 (1979)10.1103/PhysRevD.20.384
8A. Barchielli, M. Gregoratti, Phil. Trans. R. Soc. A 370, 5364 (2012)10.1098/rsta.2011.0515
9R.J. Glauber, Phys. Rev. 131, 2766 (1963)10.1103/physrev.131.2766
10J.R. Klauder, B. Skagerstam, Coherent States, World Scientific, Singapore 1985
11J. Mostowski, Lett. Math Phys. 2, 1 (1977)10.1007/BF00420663
12M. Nauenberg, Phys. Rev. A 40, 1133 (1989)10.1103/PhysRevA.40.1133
13A. Rauch, J. Parisi, Adv. Stud. Theor. Phys. 8, 889 (2014)
14J.C. Gay, D. Delande, A. Bommier, Phys. Rev. A 39, 6587 (1989)10.1103/PhysRevA.39.6587
15C.C. Gerry, Phys. Rev. A 33, 6 (1986)10.1103/PhysRevA.33.6
16F. Wilczek, Phys. Rev. Lett. 109, 160401 (2012)10.1103/PhysRevLett.109.160401
17K. Sacha, J. Zakrzewski, Rep. Prog. Phys. 81, 016401 (2018)10.1088/1361-6633/aa8b38
18K. Sacha, Time Crystals10.1007/978-3-030-52523-1, Springer Series on Atomic, Optical, and Plasma Physics, Springer, Cham 2020, p.ย 114
19F. Gampel, M. Gajda, Phys. Rev. A 107, 012420 (2023)10.1103/PhysRevA.107.012420
20P. Kowalczyk, C. Radzewicz, J. Mostowski, I.A. Walmsley, Phys. Rev. A 42, 5622 (1990)10.1103/PhysRevA.42.5622
21L.D. Landau, E.M. Lifshitz, Classical Theory of Fields, Pergamon Press, 2000
22L.D. Landau, E.M. Lifshitz, Quantum Mechanics, Pergamon Press, 2000
23A. Pitak, J. Mostowski, Eur. J. Phys. 39, 025402 (2018)10.1088/1361-6404/aa997c
http://arxiv.org/abs/2306.04556v1 | 20230607160355 | StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code | ["Hannah McLean Babe", "Sydney Nguyen", "Yangtian Zi", "Arjun Guha", "Molly Q Feldman", "Carolyn Jane Anderson"] | cs.LG | ["cs.LG", "cs.HC", "cs.SE"]
Code LLMs are being rapidly deployed and there is evidence that they can
make professional programmers more productive. Current benchmarks for code generation measure whether models generate correct programs given an expert prompt. In this paper, we present a new benchmark containing multiple prompts per problem, written by a specific population of non-expert prompters: beginning programmers.
contains prompts for problems, written by 80 students who have only completed one semester of Python programming.
Our students wrote these prompts while working interactively with a Code LLM, and
we observed very mixed success rates.
We use to evaluate Code LLMs and
find that is a better discriminator of model performance than existing benchmarks.
We analyze the prompts and
find significant variation in students' prompting techniques.
We also find that
nondeterministic LLM sampling could mislead students into thinking that
their prompts are more (or less) effective than they actually are, which has implications for how to teach with Code LLMs.
ยง INTRODUCTION
Large language models of code (Code LLMs) power coding assistants that are rapidly reshaping how programmers write code. Researchers have studied their impact on programmer productivityย <cit.>, identified real concerns about potential harmsย <cit.>, and considered how they could help students learnย <cit.>. Fundamental to these studies, and to tool adoption, is the assurance that the underlying models work effectively and consistently.
Code LLMs are commonly evaluated using benchmark suites that cover a wide variety of problems. Popular benchmarks such as HumanEval <cit.> and MBPP <cit.> consist of many problems from varying areas of computing, accompanied by a single expert-written prompt. Achieving good performance on these benchmarks indicates that a model will perform well across many programming tasks, assuming that the user can write prompts equally as well as the expert.
In this paper, we present a Code LLM benchmark in a context where this assumption does not hold: beginning programmers using Code LLMs. Our dataset contains student-written prompts (with expert-written test cases) which we use to benchmark several Code LLMs. is constructed using a novel approach that sets it apart from prior work in three key ways.
[1)]
* Existing benchmarksย <cit.> have prompts authored by more experienced programmers, whereas has prompts authored by students who have only completed one computer science course.
* Existing benchmarks contain tricky problems designed to stress-test the problem solving capabilities of Code LLMs.
In contrast, has problems that are easily solved with expert-written descriptions, but often fail with student-written descriptions.
* Existing benchmarks only have a single prompt per problem, whereas has on average prompts per problem, representing a variety of prompting skill levels. This diversity provides a way to explore what it means to write a โgood" prompt and to measure the impact of prompt wording choices.
The problems target a specific skill level and provide a diverse set of prompts for each problem along with expert-written test cases. Students wrote English descriptions of these problems in an iterative manner in collaboration with OpenAI's Codex. Each of the 48 problems in contains at least different prompts. Notably, these prompts exhibit the variations in technical vocabulary and lack of familiarity with how to describe code that are common with beginning students. While other researchers have considered novice student interactions with Code LLMsย <cit.>, is the first benchmark based on student interactions. This framing provides significant insight into Code LLM reasoning capabilities outside of the educational context.
Our key contributions are:
* We present , a benchmark consisting of student-written descriptions of programming problems.
* We identify four key subsets of the benchmark, consisting of descriptions that pass (fail) on the first (last) attempt by a student, and evaluate these subsets on state-of-the-art Code LLMs. Our results show that is better able to discriminate between models than the popular HumanEval benchmark.
* We conduct an in-depth analysis of the prompt descriptions and find that even successful student prompts lead models to generate multiple semantically distinct programs.
ยง BACKGROUND
Existing code generation benchmarks pair natural language descriptions of code with test cases to check the validity of generated programs.
The two most commonly used benchmarks, HumanEval <cit.> and MBPP <cit.>, are in Python.
There are also multi-language benchmarks that translate problems from one language to anotherย <cit.>.
Finally, there are alternate benchmark formats, including multi-turn evaluationย <cit.> and docstring generationย <cit.>.
General-purpose benchmarks
Most benchmarks have a single natural language description per problem, which is typically written by an expert. There are exceptions that scrape the web or crowdsourceย <cit.>, but the dominant trend is for experts to generate benchmarks themselves.
Expert-written prompts can provide wide coverage, but come with limitations. First, they have a single prompt per problem.
Consider this HumanEvalย <cit.> prompt:
Imagine a road thatโs a perfectly straight infinitely long line. n cars are driving left
to right; simultaneously, a different set of n cars are driving right to left. The two
sets of cars start out being very far from each other. All cars move in the same
speed. Two cars are said to collide when a car thatโs moving left to right hits a car
thatโs moving right to left. However, the cars are infinitely sturdy and strong; as a
result, they continue moving in their trajectory as if they did not collide.
This function outputs the number of such collisions.
While the correct solution is simply n^2, the prompt is designed to be purposefully confusing. This means that models succeed or fail based on this specific phrasing. Having a single prompt precludes explorations of how crucial specific word choice, grammar, etc. is to model success. 's non-expert construction allows us to better analyze the idea of a successful prompt, as we have at least prompts per problem, and it helps differentiate variations in model development that contribute to success.
Second, existing benchmarks contain problems at widely varying difficulty levels. Compare the prompt above, which requires mathematical reasoning that might challenge many programmers, with a trivial problem from the same benchmarkย <cit.>: Return length of given string. Although these benchmarks succeed in capturing a wide range of programming tasks, it is difficult to interpret their results as evidence that a model will or will not serve the needs of a particular group of programmers, since their results aggregate over problems at very different skill levels.
Domain-specific benchmarks
There are also a number of domain-specific benchmarks which, rather than present a range of tasks, focus on a more narrow domain. Two notable such benchmarks are DS-1000ย <cit.> and MathQA-Pythonย <cit.>.
Like domain-specific benchmarks, our benchmark targets a specific population of programmers; however, we target a particular skill level rather than a specific application area. We also provide numerous prompts per problem from our non-expert annotators.
ยง THE DATASET
In this section we describe , a many-prompt-per-problem benchmark that targets a specific programmer skill level.[The dataset and its documentation are available at <https://huggingface.co/datasets/wellesley-easel/StudentEval>.] The dataset consists of English-language prompts for 48 programming problems, with at least 14 prompts per problem. All prompts were written by university students who had completed 1 semester of computer science in Python (CS1), but no subsequent CS courses. These students represent a population of programmers with a uniform knowledge base, which allows us to select problems that should be solvable for all participants.
Problem Selection and Format
We compiled a suite of 48 programs that closely resembled the kinds of problems that are familiar to students. The problems exercise a variety of Python features.
The majority of problems were pulled directly from CS1 course materials (quizzes, lab exercises, and homework assignments), with light modifications to avoid publishing answers to assignments still in use.
Thus, all participants should be able to understand and solve the problems by directly writing solutions in Python; we explore whether they are also able to describe them in natural language so that code generation models can solve them.
Each problem consists of four components: a function signature, at least three test cases, a correct function implementation, and an expert-written problem description that produced a working solution using Codex (Figure <ref>). When we gather student data, which we describe below, we show participants only a function's signature and test cases. From this information, they produce a description, which we automatically validate using the problem's test cases.
Problem Validation
We validated our problems in several ways.
For common problems that are well-represented in model training data (e.g. Fibonacci), Code LLMs may produce working implementations from the function name alone. To weed out any such problems, we produced Codex generations from each function signature in StudentEval and measured the mean pass@1 rate. Overall, the mean pass@1 for our signatures without docstrings is 0.0519 with a variance of 0.0364. The maximum pass@1 for any single problem is 0.925.
We also validated the test suites associated with each problem. The test cases serve two roles in our dataset collection: they help students understand the problem, and they ensure that the LLM-generated solutions are correct.
<cit.> give evidence that the test cases that accompany widely-used Code LLM benchmarks frequently miss important corner cases.
To avoid this pitfall, we use both test coverage and mutation testing of the expert-written solution to ensure that the test cases in StudentEval are adequate. Unlike the tests in <cit.>, our tests need to be simple enough to be understandable by students who have only completed CS1. Therefore, we strive to strike a balance between exhaustiveness and comprehensibility. Every problem has 3–4 tests that achieve 100% code coverage. In fact, we ensure that every problem has three tests, even if they are not necessary to achieve coverage, in order to aid participant understanding of the problems.
Mutation testing <cit.> is a more rigorous way than coverage to measure the quality of a test suite, and we used MutPy <cit.> to compute mutation scores. All mutation scores below 90 are either the result of MutPy generating no mutations at all, or of it generating a technically correct mutation that still passes the tests.
Gathering Student-Written Prompts
We recruited 80 beginning CS students from Northeastern University, Wellesley College, and Oberlin College to build the benchmark.
We conducted the IRB-approved, lab-based study over Zoom, using a web-based application designed specifically for this study. This application presents the function signature and tests for one problem at a time. Students enter a problem description into a text box. After they submit their description, our server constructs a prompt that consists of the function signature and their problem description formatted as a Python docstring. The server sends this prompt to Codex to produce the function body. The server then tests the function in a sandbox and presents the test results to the participant. Students had the option to reattempt the problem or move on to the next problem. Participants completed three tutorial problems and then 8 problems in approximately 75 minutes, receiving a $50 gift card for their participation.
Dataset Subsets and Basic Statistics
Students generated prompts in total, with an average of prompts per problem. There are significant variations in how the prompts differ from each other: many are small, iterative changes (+/- a few words) whereas a student's first, last, and successful prompts tend to vary significantly from others. To refine the dataset and aid in evaluation, we break the dataset into four disjoint subsets (Figureย <ref>): students most frequently failed to solve problems on their first attempt, and this is the largest subset of problems (First Failure); about half as many first attempts were successful (First Success); slightly fewer students gave up after multiple attempts (Last Failure); and others succeeded after multiple attempts (Last Success).
<Ref> shows that the Last descriptions are significantly longer than the First, which suggests that students keep adding detail, even when starting afresh may be the better approach.
ยง RESULTS
We evaluate Code LLMs: four contemporary open models and GPT-3.5-Turbo-0301. The size and training set of the OpenAI model is not known. The others have open model weights and training data; they are currently the best-performing open pretrained Code LLMs.[StarCoder and Replit-Finetuned-v1 are further trained on Python. However, the fine-tuned Replit model is not available, and thus we compare these base models for consistency.]
We also include StarChat-Alphaย <cit.>, which fine-tunes StarCoderBase on open datasets of instructions and dialogues.
We expect that StarChat-Alpha's behavior is closer to GPT-3.5-Turbo than the other pretrained LLMs.
As with other benchmarks, we use hidden unit tests to evaluate the correctness of model-generated code. To account for their nondeterministic output (e.g., when using a sampler during inference), we adopt the now standard pass@1 metricย <cit.>. Pass@1 is an estimate of the probability that a Code LLM will produce a correct solution for a prompt that passes all hidden unit tests in one shot. To get a reliable estimate, we gather 200 samples for each prompt to calculate pass@1, which is also standard.
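Concretely, pass@1 is obtained with the standard unbiased pass@k estimator, which for k = 1 reduces to the fraction of passing samples; a minimal Python sketch, with made-up counts in the example, is:

from math import comb

def pass_at_k(n: int, c: int, k: int = 1) -> float:
    # Unbiased estimator of pass@k given n sampled completions, c of which pass all tests.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 200 samples drawn for one prompt, 37 pass the hidden tests.
print(pass_at_k(n=200, c=37, k=1))  # 0.185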
ยง.ยง How Do Models Perform on ?
<Ref> reports the mean pass@1 rate for every model on the four subsets of StudentEval. We also include HumanEval pass@1 rates for comparison.
StarCoder models perform best on StudentEval
We find that StarCoderBase and StarChat-Alpha significantly outperform all other models on the First/Last Success prompts.
The mean pass@1 is approximately 20% higher (10% absolute) than the closest competing model.
StarChat-Alpha also outperforms StarCoderBase on the First/Last Failure prompts by more than 30% (3% absolute, since pass@1 is low for the failing subsets).
StudentEval exposes a bigger gap between larger and smaller models than HumanEval
We also observe that the difference between pass@1 rates for the larger and smaller models is more substantial with StudentEval than with HumanEval.
For example, pass@1 for StarChat-Alpha (15B) and StarCoderBase (15B) is 2x (or higher) the pass@1 for SantaCoder (1.1B) and Replit-Code (2.7B) across all subsets of StudentEval.
In contrast, HumanEval pass@1 for StarChat-Alpha and StarCoderBase is only 1.5x more than Replit-Code and SantaCoder.
This suggests that StudentEval is better at distinguishing models than HumanEval,
or that larger models are significantly better at following student-written instructions than smaller models.
ยง.ยง Variation in Pass@1
Most Code LLM papers only report mean pass@1 for a benchmark, averaging over problems with widely varying pass rates. Because StudentEval contains multiple prompts per problem, it illuminates the extent to which luck plays a role in whether a Code LLM produces the right answer for a user.
In Figureย <ref>, we group all prompts by problem, so the plots show the percentage of problems (X) with pass@1 lower than the indicated value (Y).
For a given model, let us define a reliable failure to be a prompt that is in First/Last Failure but has pass@1 greater than 0.8 (problems to the right of the dashed line at 0.8 in the CDF). These are cases of bad luck: the prompt failed when the student tried it, but turns out to be reliable with the given model.
We find that GPT-3.5-Turbo-0301 and StarCoderBase have one and two reliable failures, respectively.
Similarly, let us define an unreliable success as a prompt that is in First/Last Success but has pass@1 lower than 0.2. These are cases of good luck: the prompt worked once for a student, but that success is hard to reproduce. We find that nearly 10% of successful prompts are unreliable for smaller models but less than 3% are unreliable with the larger models.
Overall, we believe these results have implications for model selection. It is not adequate to optimize a model to achieve high pass@1 on any benchmark, including StudentEval. An ideal Code LLM would maximize pass@1 and minimize its variability.
ยง.ยง Participant Success Rates
Examining prompt success rates by participant shows that our dataset represents a wide spectrum of prompting ability levels (Figureย <ref>). Although some participants achieve prompt success rates over 50% with StarCoderBase, a large number struggle to write reliably successful prompts.
A participant might have a low success rate for various reasons. They might not be very skilled at writing prompts, describing the problem to be solved vaguely or even incorrectly. Or they may be writing clear explanations of the problems, but in a style that the model does not understand. Thus, a low success rate does not necessarily indicate a lack of skill on the part of the participant; it can also indicate that models systematically struggle with particular ways to describe code.
ยง WHAT MAKES A SUCCESSFUL PROMPT?
ยง.ยง Trends in Student Word Choice
To explore the relative importance of different words, we tokenized the prompts and computed TF-IDF values for the four data subsets
and then calculated the mean score per word across all prompts. Figureย <ref> shows the mean frequency matrix for words that appear in the top 25 for all subsets.
The top words are a mix of English and Python terms, including many related to types, sequencing, or choice. The prominence of return/s in Figure <ref> confirms an anecdotal observation: Codex defaults to printing output, which is not conducive to our test-driven evaluation, so specifying "return" may be a learned behavior. We observe a similar trend for specifying parameter names. The lack of large differences in scores across subsets may be due to the data size or average prompt length (Figure <ref>).
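A minimal sketch of this computation is below; it assumes a dataframe with prompt and subset columns (these names, and the default tokenizer, are our assumptions rather than the released schema):

import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

def top_words_by_subset(df: pd.DataFrame, n_top: int = 25) -> dict:
    # Mean TF-IDF score per word within each subset, sorted descending.
    scores = {}
    for subset, group in df.groupby("subset"):          # e.g. "First Success", "Last Failure", ...
        vec = TfidfVectorizer(lowercase=True)
        tfidf = vec.fit_transform(group["prompt"])
        mean_scores = np.asarray(tfidf.mean(axis=0)).ravel()
        words = np.array(vec.get_feature_names_out())
        order = mean_scores.argsort()[::-1][:n_top]
        scores[subset] = list(zip(words[order], mean_scores[order]))
    return scores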
ยง.ยง Statistical Significance of Prompt Wording
We fitted mixed-effects regression models to the data to test the impact of prompt length and wording choices. All models include random effects for problems and use StarCoderBase pass@1 rates as the response variable. For vocabulary-level features, we use indicator variables: 1 if the prompt uses the word and 0 otherwise.
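The sketch below shows one way such a model can be fit with statsmodels; the column names and the particular fixed-effect terms are illustrative assumptions, not the exact specification behind the reported p-values:

import pandas as pd
import statsmodels.formula.api as smf

# Assumed schema: one row per prompt with its StarCoderBase pass@1, its problem,
# its length, and the raw prompt text.
df = pd.read_csv("studenteval_prompts.csv")
df["mentions_return"] = df["prompt"].str.contains(r"\breturn", case=False).astype(int)
df["mentions_list"] = df["prompt"].str.contains(r"\blist\b", case=False).astype(int)

model = smf.mixedlm(
    "pass1 ~ prompt_length + mentions_return + mentions_list",  # fixed effects
    data=df,
    groups=df["problem"],                                        # random effect for problems
)
print(model.fit().summary())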
Length Contrary to our expectations, we observed a statistically significant positive effect of prompt length on pass@1 rates (p=0.007). However, this finding seems driven by last submissions, where successful prompts are on average longer; the average length is similar for passing and failing first prompts (Figure <ref>). Qualitatively, we have observed that students tend to add more
detail on subsequent attempts rather than modifying their earlier text, which likely contributes to this finding.
Input/output word choice We found a significant positive effect of mentioning "return" in the prompt (p<0.0001). This likely resolves the problematic ambiguity associated with prompts that mention "output" rather than specifying whether the function should return or print (Figure <ref>).
Datatype mentions We explored the effect of mentioning dictionaries, lists, and number types, as well as including instances of lists and dictionaries in the prompt. We found a reliable positive effect of mentioning "list" (p=0.02), and a borderline negative effect of mentioning "array" (p=0.053). This suggests that StarCoderBase is sensitive to Python terminology conventions.
Function and parameter names We found no reliable effect of mentioning the parameter names in the prompt, but a significant negative effect of mentioning the function name (p=0.02).
ยง.ยง Inspecting Visual Representations
We generated embeddings of each prompt from the last-layer attention weights of the StarCoderBase model in order to explore prompt similarities and differences. Figure <ref> shows some key clusters of embeddings plotted using t-SNEย <cit.>.
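One way to obtain such embeddings is sketched below; it mean-pools the final hidden states of StarCoderBase rather than operating on attention weights directly, so it should be read as an approximation of the procedure, and the checkpoint name is an assumption:

import numpy as np
import torch
from sklearn.manifold import TSNE
from transformers import AutoModel, AutoTokenizer

MODEL = "bigcode/starcoderbase"          # assumed checkpoint identifier
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, torch_dtype=torch.float16)
model.eval()

def embed(prompts: list) -> np.ndarray:
    # Mean-pool the last hidden state of each prompt into one vector.
    vecs = []
    for p in prompts:
        inputs = tok(p, return_tensors="pt", truncation=True)
        with torch.no_grad():
            out = model(**inputs)
        vecs.append(out.last_hidden_state.mean(dim=1).squeeze(0).float().numpy())
    return np.stack(vecs)

# prompts = [...]  # list of StudentEval prompt strings
# coords = TSNE(n_components=2, perplexity=30).fit_transform(embed(prompts))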
Multiple prompt formulations exist. The prompts for some problems form multiple clusters, indicating multiple ways to describe the task. The prompts for the problem form two clusters (Figure <ref>). The top right cluster contains the expert-written prompt; the prompts around it tend to be brief, as exemplified by Prompt 2: Combine lists from l1 to lists from l2. The bottom left prompts do more "handholding", providing detailed step-by-step directions. For instance, Prompt 1 spells out multiple steps: Takes an input of two lists, l1 and l2, each of which also contains lists. It combines the first list in l1 with the first one in l2, then continues for all items in l1 and l2. It outputs this final list which is a combination of l1 and l2. Both approaches can generate passing programs; future work could explore whether there are style differences between the programs generated by different prompting methods.
Errors and ambiguities pattern together Examining problem sub-clusters also reveals patterns in prompting failures. In Figure <ref>, for instance, there is a sub-cluster of prompts that are ambiguous about whether the function should print or return the desired value. Although a human might be able to disambiguate, these are unreliable prompts: the model may sometimes generate a solution using return and sometimes using print. Another sub-cluster consists of prompts that contain the string "aspen" (lower-case) rather than "Aspen" (upper-case), causing the generated code to fail test cases.
Certain prompting styles are challenging Although most prompt embeddings cluster by problem, a handful of clusters contain prompts for multiple problems, representing cases where the model struggles to distinguish among problem descriptions. One prompting style that students use is to describe the function's behavior in terms of expected input/output pairs. For instance, If the number is below 10, make it 10 [...] is a prompt that uses this strategy for one such problem.
Although there are passing examples of this style, it does not seem to work well for problems that involve more complex data, such as nested lists or dictionaries. Figure <ref> shows a cluster of prompts that give examples of lists that the function should return. These prompts describe different problems, yet their embeddings cluster together away from the clusters of their respective problems, indicating that the model may struggle to differentiate these rarer values. This style of prompt is likely to be well-understood by humans, yet works poorly for current code generation models.
ยง.ยง Ambiguity in Prompts
The previous sections, and prior work on Code LLMs, ask if models produce correct code for a given prompt. However, it is also possible to have an ambiguous prompt that generates several semantically different functions. Testing semantic equivalence of Python functions is of course undecidable. But, we compute a lower bound on the number of semantically different functions generated by a prompt as follows.
For each completion of a prompt, we use the inputs from the expert-written test cases as a vector of examples. We run each completion on each input and collect a vector of outputs that form the test signature of the functionย <cit.>. When two functions have distinct test signatures, that is proof that they are semantically different. Identical results are inconclusive.
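A simplified sketch of the signature computation is given below; a real evaluation additionally needs sandboxing and timeouts for the untrusted completions:

def run_completion(code: str, func_name: str, inputs: list) -> tuple:
    # Execute one completion and record its outputs on the test inputs (its "test signature").
    namespace = {}
    outputs = []
    try:
        exec(code, namespace)            # untrusted code: run inside a sandbox in practice
        fn = namespace[func_name]
        for args in inputs:
            try:
                outputs.append(repr(fn(*args)))
            except Exception as e:
                outputs.append("<error: %s>" % type(e).__name__)
    except Exception:
        outputs = ["<does not load>"]
    return tuple(outputs)

def distinct_functions(completions: list, func_name: str, inputs: list) -> int:
    # Lower bound on the number of semantically distinct completions for one prompt.
    signatures = {run_completion(c, func_name, inputs) for c in completions}
    return len(signatures)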
<Ref> summarizes the result of this experiment on each subset of .
As expected, prompts in the Success subsets are more reliable: they generate fewer functions on average than prompts in the First/Last Failure subsets. What is more surprising is just how many different functions a single prompt can generate. Even prompts that are relatively clear to human readers, such as the one in <Ref>, can generate many different functions. Inputting the prompt shown in <Ref> to StarCoderBase generates completions that contain at least seven semantically distinct functions.
This highlights the importance of evaluating prompt reliability. Although the prompt shown in <Ref> happened to produce a passing completion during the experiment, in reality, this was partly due to luck; the participant was fairly likely to see a failure on their first submission attempt.
This finding has clear implications for the use of code generation models as teaching tools in educational contexts (see discussions in <cit.>): reliability issues may both mislead students into thinking their descriptions are clearer than they really are, and mislead them into over-complicating descriptions that are straightforward to a human, but unreliable for prompting code generation models.
ยง CONCLUSION
We present StudentEval, a large benchmark for Code LLMs, where the prompts are written by students who have completed one semester of Python.
A key feature of StudentEval is that it has multiple prompts per problem from repeated attempts and from multiple students. We show that larger models are more capable of following student-written instructions than smaller models.
We also find that many student-written prompts are unreliable (have low pass@1): students get lucky (or unlucky) when using Code LLMs. Finally, we investigate several hypotheses about what makes a good prompt.
Limitations
Students wrote the prompts interactively while using a Codex model. It is likely that they would have revised their problems differently with a different model. only has student-written prompts and is not representative of prompts written by experienced programmers. But, we believe it is valuable to have Code LLM benchmarks that focus on non-experts.
|
http://arxiv.org/abs/2306.03214v1
|
20230605194541
|
Data-Driven Modeling of Wildfire Spread with Stochastic Cellular Automata and Latent Spatio-Temporal Dynamics
|
[
"Nicholas Grieshop",
"Christopher K. Wikle"
] |
stat.AP
|
[
"stat.AP"
] |
Data-Driven Modeling of Wildfire Spread with Stochastic Cellular Automata and Latent Spatio-Temporal Dynamics
Nicholas Grieshop and Christopher K. Wikle
July 31, 2023
===========================================================================================
We propose a Bayesian stochastic cellular automata modeling approach to model the spread of wildfires with uncertainty quantification.
The model considers a dynamic neighborhood structure that allows neighbor states to inform transition probabilities in a multistate categorical model.
Additional spatial information is captured by the use of a temporally evolving latent spatio-temporal dynamic process linked to the original spatial domain by spatial basis functions.
The Bayesian construction allows for uncertainty quantification associated with each of the predicted fire states.
The approach is applied to a heavily instrumented controlled burn.
Keywords: Cellular automata, spatio-temporal statistics, wildfire modeling
ยง INTRODUCTION
In 2021 there were 59,000 wildfires recorded in the USA, covering an area of over 7 million acres <cit.>.
Anthropogenic climate change is lengthening the wildfire season and leading to an increase in wildfire frequency.
The damage caused by these fires and the expense of mitigation strategies cost billions of dollars annually.
Improving models that can predict the spread of wildfires is important for developing and managing fire fighting resources and in being able to provide timely warning to those who may be in danger from an advancing fire.
There are numerous approaches to modeling the spread of wildfires <cit.>.
For example, in semi-empirical physical models such as <cit.>, the rate of fire spread is parameterized as a function of the heat flux into a given area and the amount of energy necessary to cause combustion.
The heat flux into a given area is a function of the fuel source and local factors such as wind speed and elevation.
These types of thermodynamic models are incorporated into wildfire simulators such as FARSITE <cit.> where modifications to the approach of Rothermel are implemented, including consideration of different treatments of crown versus surface level fires and the addition of a spotting component.
Spotting occurs when a wildfire jumps and spreads to a nonadjacent area.
Although there has been discussion of how fire spotting can be modeled from physical principles <cit.>, incorporating realistic spotting in wildfire models remains an important topic of research.
From a mathematical perspective, wildfire propagation can be modeled using the so-called โlevel set approach.โ
In this method, the fire front is the object of interest, and the propagation of this front over space and time is modeled by considering the movement dynamics via an implicit level-set function.
Information such as wind speed and elevation, as motivated by a physical model, can be incorporated into the front spread, as done in <cit.> or <cit.>, where the parameters involved in the rate of spread are estimated.
In <cit.>, complex fuel sources are considered in the level set approach.
Historically, level-set methods rely on these types of empirical parameterizations and do not directly use data from the current fire to inform parameters.
A recent exception is the work of <cit.>, who assimilate data in real-time into a level-set model forced by a Rothermel relationship using a Bayesian filtering approach. In addition, <cit.> demonstrate a hierarchical Bayesian data-driven approach to learn the spatially-varying propagation speed in the direction normal to the fire front in a level-set framework.
An alternative approach for fire spread modeling is based on cellular automata (CA), where the spread is controlled by simple rules.
In a CA, the temporal domain consists of equally spaced time points, and the spatial domain is divided into discrete cells.
CA models have a long history in applied mathematics and computer science, at least going back to <cit.>.
In the traditional CA approach, the evolution of the states of each spatial cell from time t-1 to time t is governed by a set of simple rules.
A famous deterministic demonstration is given by Conway's Game of Life, <cit.>, where at each time point, the transition rule for a cell considers its eight neighbors, and a state transition occurs depending on the sum of its neighbors' states.
From this simple set of rules, complex time evolution behavior can be observed without additional input from the users - aside from setting the initial state of the cells.
Two challenges in developing a CA model are to specify the number of neighbors of a cell at time t-1 and to discover or specify a set of rules to update the state of the cell for time t.
In addition, one must decide if the transition rules are best represented as deterministic or stochastic.
The <cit.> CA model's neighborhood structure looked at the cells adjacent in each of the four cardinal directions - for a total of four neighbors.
More generally, the cells are determined to be neighbors by means of Manhattan distance, |x_1 - x_2| + |y_1 - y_2|.
In Conway's Game of Life, the neighborhood was defined by looking at the cardinal and ordinal directions for a total of eight neighbors; more generally, neighbors are defined by their Chebyshev distance, max(|x_1 - x_2|, |y_1 - y_2|).
See Figure <ref> for a schematic of the classic neighborhood structures.
Specific representations of the CA method have been incorporated into a variety of statistical models.
For example, agent-based models in which individual agents are considered โcellsโ and operate according to a specific set of probabilistic rules have been used to model the spread of rabies in raccoon populations <cit.>.
<cit.> present a review of this approach applied to hydrological data and <cit.> present a review with discussion of formal uncertainty quantification for such models.
The traditional CA approach has been applied to wildfire modeling with differing definitions of the neighborhood structure and transition rules.
For example, a traditional Moore's neighborhood (cardinal and ordinal neighbors) has often been used <cit.>.
Alternatively, <cit.> considered a two-tiered neighborhood structure where local information was captured using a small-scale neighborhood, and an additional larger-scale neighborhood was used for long-range effects.
The spatial cells need not be defined as regular squares, as shown in <cit.> and <cit.>.
This approach is further discussed in <cit.>.
Once the neighborhood structure has been defined, the rules of state propagation must be specified or learned.
In many models, such as those discussed in <cit.>, <cit.>, <cit.>, and <cit.>, the rules of fire propagation are specified as simple physical relationships and are not learned.
The method outlined in this manuscript builds upon the work of <cit.> by utilizing physical principles from semi-empirical models and environmental covariates when constructing a neighborhood structure.
Importantly, rather than specify these relationships directly, our approach is data-driven in that both the neighborhood and transition rules are learned.
In particular, we incorporate a novel dynamic neighborhood structure motivated by the <cit.> relationship, in addition to transition rules informed by a latent low-rank spatio-temporal dynamic process.
The model is implemented in a spatio-temporal Bayesian hierarchical framework <cit.> to provide formal uncertainty quantification of the predicted fire spread and covariate effects, analogous to the agent-based methods discussed in <cit.> and reviewed in <cit.>.
Section <ref> presents our approach using the CA model with physically motivated covariates and a stochastic latent process, which is then implemented in a Bayesian hierarchical model to reflect the uncertainty of data, model, and parameters.
Section <ref> gives the results of a simulation, demonstrating the CA method's ability to learn the rules of cellular evolution, and in Section <ref> the method is applied to a real-world fire.
Section <ref> discusses further extensions to this framework and other possibilities for determining neighborhood structure.
ยง BAYESIAN CELLULAR AUTOMATA MODEL METHODOLOGY
This section presents the Bayesian hierarchical cellular automata model for fire spread as well as a description of the model implementation and model evaluation metrics.
ยง.ยง Stochastic Cellular Automata Model
Consider a fire that can occur within a rectangular spatial domain, D_s, that is partitioned into n spatial locations (cells), {s_i: i=1,…,n}.
We assume the fire evolves in discrete time indexed by t ∈ {1, 2, 3, …, T},
and that each cell can take one of J states; here we restrict our analysis to the case where J=3 as described below, corresponding to unburned, burning, and burned wildfire states.
We denote the state of cell s_i at time t by {S_t(s_i) = j: j=1,…,J}.
We then assume that these potential states at each discrete time and space location follow a multinomial distribution with probabilities p_jt(s_i).
That is, for each spatial location s_i and time t, the state probabilities must satisfy ∑_j=1^J p_jt(s_i) = 1.
The challenge is then to estimate these spatio-temporal probabilities.
Following the method of ordered categorical responses in <cit.>, we relate the observed discrete ordered categories, S_t(s_i) = j: j=1,…,J, to an unobserved latent spatio-temporal process, Z_t(s_i).
Specifically, we relate the probability of an observation being in state j, p_jt(s_i), to the latent process Z_t(s_i) with a normal cumulative distribution function (CDF) link, Φ, and cutpoints λ_j; ∑_k=1^j p_kt(s_i) = Φ(λ_j - Z_t(s_i)).
For identifiability reasons, the first cutpoint, λ_1, is set equal to 0, and the last cutpoint, λ_J, is set equal to infinity, with the remaining cutpoints learned.
In the original <cit.> framework, the latent process, Z, is assumed to vary only due to covariates.
Here, we consider spatio-temporal covariates that can affect the probability of state transition, but also allow the state transition probabilities to depend on a latent low-rank spatio-temporal dynamic process to capture dynamics that are not well-specified by the neighbor-based covariates <cit.>.
In particular, we specify the latent process by
Z_t = X_t β + Φ α_t + ε_t,
where Z_t ≡ [Z_t(s_1),…,Z_t(s_n)]', X_t is an n × p matrix of p covariates corresponding to each spatial location at time t (see below), β are the associated regression coefficients, α_t is an r-dimensional dynamic process (where r << n), Φ is an n × r spatial basis function matrix that maps this low-rank process to the n spatial locations, and ε_t is a zero-mean error term constrained to be uncorrelated with variance 1, ε_t ∼ N(0, I), as in <cit.>.
Note that the data augmentation approach considered here is just one method to model ordinal data (e.g., see <cit.> for an overview of additional methodologies for spatial data). For example, the Pรณlya Gamma approach of <cit.> has proven to be an efficient approach for Bayesian modeling of categorical data. In our case,
the use of the data augmentation scheme from <cit.> was motivated by our consideration of a single unobserved latent process, Z_t. As shown in Appendix <ref>, an implementation of our model from the Pólya Gamma perspective provides essentially equivalent inference and prediction as the <cit.> approach presented here.
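For intuition, the mapping from the latent process to the three state probabilities implied by the probit link and cutpoints can be sketched as follows (the value of the second cutpoint is purely illustrative):

import numpy as np
from scipy.stats import norm

def state_probabilities(z: np.ndarray, lambda2: float) -> np.ndarray:
    # P(state j) for J = 3 ordered states given latent values z.
    # Cutpoints: lambda_0 = -inf, lambda_1 = 0 (fixed), lambda_2 learned, lambda_3 = +inf.
    cuts = np.array([-np.inf, 0.0, lambda2, np.inf])
    cdf = norm.cdf(cuts[None, :] - z[:, None])   # cumulative probabilities Phi(lambda_j - z)
    return np.diff(cdf, axis=1)                  # (n, 3) matrix; rows sum to 1

probs = state_probabilities(np.array([-0.5, 0.3, 1.7]), lambda2=1.2)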
ยง.ยง.ยง Latent Spatio-Temporal Dynamics
At the next level of the model hierarchy, we specify the evolution of the dynamic process α_t in (<ref>) as a vector autoregression
α_t = M α_t-1 + η_t,
where η_t ∼ N(0, Q) and M is an r × r transition operator.
As discussed in <cit.>, it is important to allow the transition operator to be unstructured to capture realistic spatio-temporal dynamics (e.g., transient growth).
This is facilitated computationally by the low-rank basis representation, which is often considered in spatio-temporal dynamic models given that the underlying dynamics typically exist on a lower-dimensional manifold than that on which it is modeled <cit.>.
We discuss implementation issues associated with the choice of covariates and basis functions in <ref>. In addition, the next level of the model hierarchy requires specification of prior distributions for the cutpoint parameters {λ_j: j = 2, …, J-1}, as well as for β, M, Q, and α_0.
These are presented in Section <ref> as well.
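To make the hierarchy concrete, the following sketch simulates the latent process forward in time; the dimensions and parameter values are illustrative, not fitted quantities:

import numpy as np

rng = np.random.default_rng(0)
n, p, r, T = 768, 3, 5, 45                 # cells, covariates, basis functions, time points

beta = rng.normal(size=p)
M = 0.8 * np.eye(r) + 0.05 * rng.normal(size=(r, r))   # r x r transition operator
Q = 0.1 * np.eye(r)                                     # innovation covariance
Phi = rng.normal(size=(n, r))                           # stand-in for the EOF basis
X = rng.poisson(2, size=(T, n, p)).astype(float)        # neighbour-count covariates

alpha = np.zeros(r)
Z = np.empty((T, n))
for t in range(T):
    alpha = M @ alpha + rng.multivariate_normal(np.zeros(r), Q)   # alpha_t = M alpha_{t-1} + eta_t
    Z[t] = X[t] @ beta + Phi @ alpha + rng.normal(size=n)          # Z_t = X_t beta + Phi alpha_t + eps_t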
ยง.ยง Model Implementation
This section discusses implementation choices associated with the local covariates in (<ref>), the choice of spatial basis functions in (<ref>), the prior distributions for model parameters, the MCMC algorithm, and our approach to model evaluation.
ยง.ยง.ยง Specification of Local Covariates
The latent Z_t process in (<ref>) is a function of local time-varying covariates, X_t, whose specification is motivated by empirical thermodynamic relationships that are sometimes used to characterize fire spread.
Specifically, <cit.> proposed perhaps the most-used thermodynamic relationship for the spread of fires <cit.>:
R = I_R ξ (1 + θ_w + θ_s) / (ρ_b ζ Q_ig),
where the rate of fire spread, R (ft/min), is defined as a function of reaction intensity (I_R), a larger-scale propagating flux ratio (ξ), a long-range wind factor (θ_w), a slope factor (θ_s), and scaled by properties of the fuel (ρ_b, ζ, Q_ig).
Wind speed influences the size and shape of the neighborhood to match the physical properties of the Rothermel equation. Rather than use this equation directly, as it is usually used in fire modeling, we utilize this relationship to modify the neighborhood structure of our CA model dynamically in time as a function of wind speed.
That is, when the wind speed in either the east-west or north-south direction is greater than 2 m/s, then the wind expands the neighborhood in the corresponding direction, as shown in Figure <ref>.
This novel dynamic neighborhood structure, where the neighborhood can change at each time point, allows for the model to capture larger-scale dynamics, as in <cit.>, but is motivated by well-known physical and thermodynamic relationships.
Thus, the matrix X_t, an n × p = 3 matrix, consists of the states of the cells in their dynamic neighborhood.
The use of three states is again motivated by the Rothermel equation because the heat flux into a cell is a function of the fuel property and an unburnt cell is fundamentally different from a burnt cell due to the change in fuel properties of the cell.
Denoting 𝒩_k,t to be the neighborhood structure of cell k at time t, we can then write X_k,t,1 = ∑_{i ∈ 𝒩_k,t} I(S_t-1(s_i) = 1), X_k,t,2 = ∑_{i ∈ 𝒩_k,t} I(S_t-1(s_i) = 2), and X_k,t,3 = ∑_{i ∈ 𝒩_k,t} I(S_t-1(s_i) = 3),
where S_t-1(s_i) is the state at location i (which is a neighbor of cell k) at the previous time.
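A simplified sketch of building these neighbor-count covariates on a regular grid is shown below; expanding the Moore neighborhood by one cell along a windy axis and the periodic treatment of grid edges are illustrative simplifications:

import numpy as np

def neighbour_counts(states: np.ndarray, wind_x: float, wind_y: float,
                     threshold: float = 2.0) -> np.ndarray:
    # states: (ny, nx) grid of values in {1, 2, 3} at time t-1.
    # Returns an (ny * nx, 3) matrix of neighbour-state counts for each cell.
    ny, nx = states.shape
    reach_x = 1 + int(abs(wind_x) > threshold)   # stretch the neighbourhood when wind > 2 m/s
    reach_y = 1 + int(abs(wind_y) > threshold)
    counts = np.zeros((ny, nx, 3))
    for dy in range(-reach_y, reach_y + 1):
        for dx in range(-reach_x, reach_x + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(states, dy, axis=0), dx, axis=1)  # wraps at grid edges
            for j in (1, 2, 3):
                counts[:, :, j - 1] += (shifted == j)
    return counts.reshape(ny * nx, 3)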
ยง.ยง.ยง Specification of Spatial Basis Functions
The reduced-rank dynamic process α_t in (<ref>) relies on r spatial basis functions in the n × r matrix Φ. There are many choices for such basis functions <cit.>.
Here, we consider a constructed empirical orthogonal function (EOF) spatial basis.
As discussed in <cit.>, EOFs can be good spatial basis functions for dynamical processes because they provide an optimal (from the Karhunen-Loève decomposition) low-rank spatial dimension reduction with realistic covariance structure (note, sensitivity to the choice of basis functions is briefly discussed in Section <ref>).
The calculation of EOFs requires time replicates.
Although we have time replicates for fire data, if one is predicting the spread outside the range of the fire front, the basis may not provide adequate coverage of unburnt areas.
We address this with a novel constructed EOF algorithm.
The constructed EOF approach requires that we initially simulate a plausible realization of fire spread.
Specifically, the final state of the fire at the last observation time T is used as an input to a simple model.
The simple model, f_sim, can then be run for the desired number of time steps, ฯ, and the state of the fire can be predicted using the simulation.
The simulation model predicts the probability of each state at time T + 1, T + 2, ..., T + ฯ and the most probable state used to evolve the fire.
Once a collection of predicted states is computed, each cell can be randomly assigned a temperature value for t = T+1, …, T+τ from the observed distribution of temperatures for that same state, using g_sim.
This approach for constructing EOFs is given in Algorithm <ref> and the details of the specification of f_sim and g_sim is further discussed in Appendix <ref>.
Importantly, the simulation model, f_sim, is not meant to be an ideal simulator for a particular fire.
The use of the simplified simulator is simply to evolve the fire front sufficiently to allow the EOF basis to cover unburnt regions in the actual CA model as presented in Section <ref>.
The choice of EOFs as a basis set is subjective, but it does have some advantages compared to other bases, such as spatial bisquare bases <cit.>.
The primary advantage of using the EOF basis functions is that they are guaranteed to capture the spatial dependence most efficiently for a given number of basis functions.
Having fewer spatial basis function coefficients is essential in our application due to the estimation of the latent dynamical process parameters (i.e., M and Q) given the relatively small number of time points. See Appendix <ref> for a demonstration of the sensitivity of our model to the specification of a low-rank bisquare basis set.
ยง.ยง.ยง Prior Specification and MCMC Algorithm
To complete the Bayesian hierarchical model, non-informative priors are assigned to the model parameters.
The local covariate effects, β, are assigned independent N(0,5) priors.
Elements of the evolution matrix, M, are assigned independent N(0, 2) priors, and the innovation covariance matrix is assigned an inverse Wishart prior, Q^-1 ∼ W((ν_Q C_Q)^-1, ν_Q).
The cutoffs, λ_j, are given improper flat priors, as in <cit.>.
Once the local covariates, spatial basis functions, and prior distributions are chosen, inference and prediction is accomplished through Markov chain Monte Carlo (MCMC) methods as outlined in Algorithm <ref> and presented in detail in Appendix <ref>.
ยง.ยง.ยง Model Evaluation
The model is evaluated using two different forecast verification metrics that are specifically designed for categorical data โ the ranked probability score (RPS) and Gilbert Skill Score (GSS), as discussed in <cit.>.
GSS is a measure based on the 2 × 2 contingency table of the forecasted states.
Our model has 3 predicted states, but we can compress these into two classes ("burning" and "not burning"), so that the forecasts can be summarized in a 2 × 2 contingency table with a denoting the number of correctly forecasted burning cells, b and c the two misclassification counts, and n the total number of cells.
GSS is then calculated by:
GSS = (a - (a+c)(a+b)/n) / (a + b + c - (a+c)(a+b)/n),
where the term (a+c)(a+b)/n can be considered to be the expected number of true predictions due to chance.
A larger GSS indicates a better model, with 1 being a perfect prediction. GSS has been used for forecast evaluation in meteorology, climatology, and related fields <cit.>.
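A small sketch of the GSS computation, with a taken to be the number of correctly forecasted burning cells and b and c the two misclassification counts, is:

def gilbert_skill_score(a: int, b: int, c: int, n: int) -> float:
    # a: hits, b and c: the two off-diagonal counts, n: total number of cells.
    a_random = (a + c) * (a + b) / n          # hits expected by chance
    return (a - a_random) / (a + b + c - a_random)

# Hypothetical example on a 24 x 32 grid (768 cells):
print(round(gilbert_skill_score(a=120, b=15, c=20, n=768), 3))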
In addition to being able to predict a future state, our model is able to forecast the probability of each state.
Thus, it is appropriate to consider an evaluation approach that considers the probability.
The RPS <cit.> is a classic scoring rule that considers the probability of each cell being state j.
The RPS for a single prediction location is calculated by:
RPS = 1/(J - 1) ∑_j=1^J [∑_k=1^j p_k - ∑_k=1^j I_k]^2,
where J is the number of categories, in this case 3.
The RPS for each forecast location is calculated and the average value across all locations is reported.
A lower RPS indicates a better model with 0 indicating a perfect prediction.
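A vectorized sketch of the mean RPS over all forecast locations is:

import numpy as np

def mean_rps(probs: np.ndarray, observed: np.ndarray) -> float:
    # probs: (n_locations, J) forecast probabilities for the J ordered states.
    # observed: (n_locations,) observed state index in {0, ..., J-1}.
    n, J = probs.shape
    cum_p = np.cumsum(probs, axis=1)
    outcome = np.zeros_like(probs)
    outcome[np.arange(n), observed] = 1.0
    cum_o = np.cumsum(outcome, axis=1)
    rps = ((cum_p - cum_o) ** 2).sum(axis=1) / (J - 1)
    return float(rps.mean())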
ยง SIMULATION EXPERIMENT
The purpose of this simulation is to ensure that the ordinal categorical CA model can retrieve the true probabilities of transitioning.
Thus, we consider a simulated fire evolving in time that is categorized into three states; unburnt, burning, and burnt.
The simulated fire considers a spatial domain of 20 × 30 cells with 45 time points - a similar size to the real-world application presented below.
The rule for the fire propagation (i.e., from state unburnt → burning) was a function of the number of burning Moore neighbors; see Table <ref> for the transition probabilities based on the number of burning neighbors.
This rule roughly approximates the physical propagation of a fire front as given by the Rothermel equation (Eq. <ref>) as discussed in Section <ref>.
The Bayesian hierarchical model described in Section <ref> was fitted to this dataset.
There was an intermittent wind term in the positive y direction, so the neighborhood structure was a function of this wind term, using the neighbors as shown in Figure <ref>.
All state transitions were a function of this dynamic neighborhood structure, so the latent α_t process was omitted.
The MCMC was run for 10,000 iterations, with the first half discarded as burn-in.
The first 40 observations from the simulated dataset were used to train the model, and the in-sample prediction, as seen in Table <ref>, shows that the method can retrieve the true probabilities of state transitions.
The overarching goal of this CA model is to learn the probabilities of the three-state fire spread model and use these learned rules to forecast the fire spread.
Table <ref> shows that the CA approach presented here can correctly recover the probability of state transition based on the local neighborhood.
The model can also be used for forecasting.
The next five time points beyond the training period were used to test forecast performance, and the RPS was 0.100 versus the naïve RPS of 0.579 based on equal probabilities.
Incorporating the CA modeling framework within a Bayesian inference paradigm allows us to quantify the uncertainty of the transitions between states.
The HPDs in Table <ref> demonstrate the uncertainty associated with the transition probabilities; thus, forecasted and predicted states from this model include uncertainty measures that are unavailable in traditional deterministic CA models.
ยง S5 FIRE EXAMPLE
The real-world data for this application comes from the RXCadre series of experimental burns from Florida, USA.
An infrared camera recorded the area's temperature, and local weather conditions were recorded <cit.>.
The data were originally at 240 × 320 resolution with 105 time points. For the analysis presented here, the data were averaged onto a 24 × 32 grid, with the mean temperature of the pixels within each larger cell used as data.
These temperatures were then categorized into three states: burnt, burning, and unburnt.
The criteria for the classification were based on the temperature of the cell, with progression from unburnt to burning after the temperature crossed a threshold above the background average of 300K.
After a burning cell returned to the average background temperature of 300K, the cell was considered to have transitioned to a burnt state.
Using discrete states instead of modeling the temperature allows the model considered here to be applied to a broader range of data sources (e.g., those that just present the fire front boundary at a given time).
The first 39 observations were used to train the model, and then the trained model was used to forecast 5 time points forward.
The MCMC sampler was run for 10,000 iterations, with the first 5,000 discarded as burn-in.
First, a model using only the covariates X_t, with no latent temporal process, was fitted, and the results are shown in Figure <ref>.
The model can capture the fire's growth in the positive x and y direction, but the local covariates are insufficient to model realistic spread.
For example, the model predicts the transition from the burning state to the burnt state to be more rapid than the observations, as seen in the right side of Figure <ref>.
The limitation illustrated by only considering the covariates to model fire spread can be addressed by the addition of the latent dynamical process α_t linked by the spatial basis functions.
The first 5 EOFs were constructed following the procedure in Section <ref>; Figure <ref> shows the general structure.
As shown in Table <ref> and visually in Figure <ref>, adding the latent dynamical process α_t on the expansion coefficients from the first 5 EOFs leads to an improvement in prediction accuracy.
The fire in Figure <ref> is forecast to progress more rapidly in the positive x and y directions, providing a better match to the truth than the forecast with only local covariate effects - this is further supported by the model scores in Table <ref>.
The GSS increases for each category with the largest improvement in capturing the growth of the fire.
The RPS decreased, showing an improved model fit with the addition of the latent dynamical process.
Figure <ref> demonstrates the strength of using a Bayesian hierarchical model.
The model is able to provide bounds on the state estimation for each forecast time.
The results in Appendix <ref> investigate an additional fire with different fire spread characteristics, but still from the RXCadre experiment. This example also demonstrates that the model with the latent spatio-temporal dynamics performs better than the covariate-only model.
ยง CONCLUSION
The Bayesian hierarchical cellular automata model presented here offers a few key benefits.
A three-state model is more justifiable from a physical perspective than a two-state model with only burning and unburned states.
This is because burnt cells contribute less to the heat flux into an adjacent cell than burning cells do, a distinction that cannot be captured by a model that only allows the transition from unburnt to burning.
The CA model presented here could incorporate local covariates such as (but not limited to) wind speed by dynamically varying the definition of neighbors.
In addition, the model matrix, _t, can easily be expanded to include terms measuring the cell's fuel if the fire spreads over a non-homogeneous region or include slope properties - as both are terms in the Rothermel equation.
Importantly, adding the unobserved latent dynamic process α_t explains some of the dynamics not captured by the local neighborhood structure, and this latent process is linked using a novel construction of low-rank EOF basis functions.
Finally, the model can easily be used to forecast a fire's spread and quantify prediction uncertainty, and this uncertainty of the prediction can be allowed to propagate throughout the length of the forecast.
Future extensions of the method could explore different definitions of neighbors.
In the model presented here, the neighborhood structure was novel in that it was defined based on the Rothermel equation, but data-driven methods could be used to learn this structure.
In addition, adding a fire spotting process to the model could aid in capturing some of the dynamic fire front shifts that happen in large-scale real-world fires.
Mechanisms to allow for missing data need to be considered in cases where the data is not reliably recorded on a regular spatial and temporal scale.
ยง MCMC SAMPLER
The full-conditional distributions for our MCMC algorithm are presented below.
ยง.ยง Priors
[β] ∼ N(μ_b, Σ_b)
[Y_0] ∼ N(μ_0, Σ_0)
[Q^-1] ∼ W((ν_Q C_Q)^-1, ν_Q)
[M] ∼ N(μ_m, Σ_m)
Hyperparameters were fixed with μ_b = μ_m = μ_0 = 0, Σ_b = Σ_m = diag(2), ν_Q = 1, and Σ_0 = diag(5).
ยง.ยง Sampler
Z_t = X_t β + H Y_t + ε_t, where H denotes the spatial basis function matrix (Φ in the main text), Y_t the latent dynamic process (α_t in the main text), and R the covariance of ε_t (here the identity).
* Y_0 ∼ N(V_0 a_0, V_0)
* V_0 = (M' Q^-1 M + Σ_0^-1)^-1
* a_0 = M' Q^-1 Y_1 + Σ_0^-1 μ_0
* Y_t ∼ N(V_t a_t, V_t)
* V_t = (H' R^-1 H + Q^-1 + M' Q^-1 M)^-1
* Note this is where the computational savings come in: we can invert an r × r matrix faster than an n × n one.
* a_t = H' R^-1 (Z_t - X_t β) + Q^-1 M Y_t-1 + M' Q^-1 Y_t+1
* Y_T ∼ N(V_T a_T, V_T)
* V_T = (H' R^-1 H + Q^-1)^-1
* a_T = H' R^-1 (Z_T - X_T β) + Q^-1 M Y_T-1
* Q^-1 ∼ W(Q_1, Q_2)
* Q_1 = (∑_t (Y_t - M Y_t-1)(Y_t - M Y_t-1)' + ν_Q C_Q)^-1
* Q_2 = ν_Q + T
* vec(M) ∼ N(V_m a_m, V_m)
* V_m = ((Y'_0:T-1 ⊗ I_n)'(I_T ⊗ Q)^-1(Y'_0:T-1 ⊗ I_n) + Σ_m^-1)^-1
* a_m = (Y'_0:T-1 ⊗ I_n)'(I_T ⊗ Q)^-1 vec(Y_1:T) + Σ_m^-1 μ_m
* β ∼ N(V_b a_b, V_b)
* V_b = (X' X + Σ_b^-1)^-1
* a_b = (vec(Z - H Y)' X + μ_b' Σ_b^-1)'
[(Z - HY) | β] ∼ N(Xβ, I)
[β] ∼ N(μ_b, Σ_b)
[β | Z - HY] ∝ [Z - HY | β] [β]
* Z | S ∼ N(Xβ + HY, I)
* Truncated on the left and right by λ_j-1, λ_j for S_i = j
* Note for identifiability reasons λ_1 = 0, λ_0 = -∞, and λ_J+1 = ∞
* λ_j ∼ Unif(max[max(Z_i: S_i = j), λ_j-1], min[min(Z_i: S_i = j+1), λ_j+1])
ยง EOF CONSTRUCTION
The method to construct the EOF is as follows.
From time t=1 to the end of the training period, time t = T, a multinomial model, f_sim, was fit with only the states of each cell's neighbors as covariates.
This model was then used to forecast the most probable state of the fire up to time T + ฯ, where ฯ is the forecast length.
To finalize the construction of the EOF, the temperature of each cell needs to be known.
From time t = 1 to t = T this is given from the data, but for time T + 1 to T + ฯ these need to be simulated.
Using the temperature at the training time points, the simulated temperature was randomly sampled from this distribution, g_sim.
For example, if a cell was forecasted to be in state "burning", the temperature was randomly sampled (imputed) from the distribution of all cells that were in state "burning" from time t = 1 to t = T.
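A compact sketch of the final step, once the observed and simulated temperature fields have been stacked into a (time x cells) matrix, is given below; centering by the temporal mean is our assumption about the EOF computation:

import numpy as np

def constructed_eofs(temps: np.ndarray, n_eofs: int = 5) -> np.ndarray:
    # temps: (T + tau, n_cells) matrix of observed (rows 1..T) and simulated (rows T+1..T+tau) temperatures.
    anomalies = temps - temps.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    return vt[:n_eofs].T        # (n_cells, n_eofs) spatial basis matrix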
ยง S7 FIRE
In addition to the S5 fire discussed in Section <ref>, we consider another fire from the RXCadre experiment that demonstrated different characteristics than the S5 fire.
This fire, S7, had a more steadily burning fire front, lacking some of the large jumps in fire boundary that were experienced in the S5 fire.
In addition, the cells transitioned to the burnt state more rapidly than in the S5 fire.
As can be seen in Figure <ref>, there was some exceptionally rapid growth that occurred that the model was unable to capture.
However, as demonstrated in Table <ref> the addition of the simulated EOFs coupled with the dynamic _t process was able to outperform the model with only local covariates.
The GSS was higher in all categories which indicates a better fit and the RPS was lower which supports the same conclusion.
ยง PรLYA GAMMA AUGMENTATION
The use of the data augmentation scheme from <cit.> was motivated by the use of a single unobserved latent process, Z_t.
Other data augmentation strategies for categorical data have been proposed, such as <cit.>, which utilize a latent Pólya Gamma process for each category.
Both augmentation schemes produce similar results, as demonstrated by considering the simulation from Section <ref>.
The HPD intervals for the Pólya Gamma augmentation scheme are shown in Table <ref> and are similar to the results in Table <ref>.
The Pólya Gamma data augmentation scheme produced similar results to the Albert and Chib method when forecasting over the same horizon.
The Pólya Gamma scheme had an RPS of 0.101, compared to the naïve RPS of 0.579 and the RPS of 0.100 for the proposed method as presented in Section <ref>.
ยง BISQUARE BASIS COMPARISON
As a sensitivity comparison, we considered bisquare basis functions with equally spaced knot points in the domain.
Due to the imposed equal-spacing constraint, 6 knot locations were chosen, which is close to the number of EOFs but is a conservative comparison.
In this case, the model does not capture the evolving fire front as well, and the corresponding forecast predictions are worse than those from the simulated EOF approach, with an RPS of 0.2502 versus 0.212.
|
http://arxiv.org/abs/2306.11900v1
|
20230620212245
|
Evaluation of Chinese-English Machine Translation of Emotion-Loaded Microblog Texts: A Human Annotated Dataset for the Quality Assessment of Emotion Translation
|
[
"Shenbin Qian",
"Constantin Orasan",
"Felix do Carmo",
"Qiuliang Li",
"Diptesh Kanojia"
] |
cs.CL
|
[
"cs.CL"
] |
Evaluation of Chinese-English Machine Translation of Emotion-Loaded Microblog Texts: A Human Annotated Dataset for the Quality Assessment of Emotion Translation
Shenbin Qian, Constantin Orasan, Felix do Carmo, Qiuliang Li, Diptesh Kanojia
July 31, 2023
=================
In this paper, we focus on how current Machine Translation (MT) tools perform on the translation of emotion-loaded texts by evaluating outputs from Google Translate according to a framework proposed in this paper. We propose this evaluation framework based on the Multidimensional Quality Metrics (MQM) and perform a detailed error analysis of the MT outputs. From our analysis, we observe that about 50% of the MT outputs fail to preserve the original emotion. After further analysis of the errors, we find that emotion carrying words and linguistic phenomena such as polysemous words, negation, abbreviation etc., are common causes for these translation errors.
ยง INTRODUCTION
To express feelings and attitudes is one of language's major functionsย <cit.>. In this digital age, people can easily share their emotions or opinions online using social media platforms. This results in the generation of a large amount of emotion-loaded and opinionated texts. It is important to convey the correct emotion or opinion in the text to a large audience from different linguistic or cultural backgrounds for cross-cultural communication. Otherwise, misinformation or even toxic emotionsย <cit.> can permeate cross-cultural communication, resulting in harmful implications for the parties involved. Due to the asynchronous nature and sheer quantity of this generated text online, it is impossible for human translators to be present in the loop and perform accurate translations. Hence, machine translation (MT) remains the only viable choice for the task of translating emotion-loaded microblog textsย <cit.>.
Social media texts on Sina Weibo[<https://weibo.com/>], the Chinese microblog platform, have their unique characteristics due to certain features of the Chinese language. Since Chinese is a tonal language, there are many characters which share the exact same or very similar pronunciation but with drastically different meanings. Chinese netizens commonly use this language phenomenon to create emotional slang by replacing the original character/word with a homophone character/word to avoid censorship. Similarly, substitution with homographs is another way to create slang, as Chinese is a hieroglyphic language. For example, using "目田", which means "eye field", as a substitute for "自由", meaning "freedom", is an example of homograph substitution <cit.>. We can observe that "目田" looks very similar to "自由", where a few strokes of the two characters are omitted to refer to the lack of freedom. Abbreviation of long expressions or transliteration of Chinese characters is another observed phenomenon in social media texts. Such features in this new online language variant pose severe challenges to the MT of Chinese social media texts, especially the emotion-loaded and opinionated microblogs. These challenges are different from the ones observed in translating tweets with hashtags or non-standard orthography present in other languages <cit.>.
There are several studies and datasets which focus on the translation of social media texts, such as TweetMTย <cit.>, the tweet corpus proposed by Mubarak et al.ย Mubarak2020 and the Weibo corpus developed by Ling et al.ย Ling2013. However, none of these focus on the translation of emotions. To the best of our knowledge, there is no research which focuses on the Chinese-English machine translation (C-E MT) of emotion-loaded texts. We endeavour to make our contributions to this area as summarised below:
* A quality assessment framework for the machine translation of emotion-loaded texts is proposed for evaluating the MT quality in terms of emotion preservation.
* A detailed error analysis is performed to find out linguistic phenomena that are more likely to cause C-E MT errors in terms of emotions.
* A dataset[https://github.com/shenbinqian/HADQAET], annotated with translation errors and severity levels, is released to support tasks like error detection and quality estimation of emotion translation.
Sectionย <ref> describes the related literature in emotion translation and quality assessment of MT. Our proposed framework for human evaluation of the MT quality of emotion-loaded texts is described in Sectionย <ref>. In Sectionย <ref>, we introduce the dataset and methodology for quality assessment. The result of human evaluation and error analysis is presented and analysed in Sectionย <ref>. Sectionย <ref> discusses the conclusion and future plan after summarising the whole paper.
ยง RELATED WORK
ยง.ยง Translation of Emotions and Emotion-Loaded Texts
The awareness of emotions in translation has been discussed in the early stages of translation studies when the emotional reaction of the reader was of significance in the translation of the Bibleย <cit.>. Nida and Taberย Nida1969 emphasised the importance of transferring emotional elements from source to target and proposed to translate the emotionality of the text with a focus on the final translation product.
Many studies focused on the emotional difference or emotion translation between languages, most of which emphasised on the translation of emotion lexica. Russell and Satoย Russell1995 compared 14 emotional words such as `happy' or `sad' in English, Chinese and Japanese to observe similarities and differences post-translation. Choi and Hanย Choi2008 raised concerns about cross-cultural communication regarding the difficulty of finding the equivalence of some emotional concepts such as `shimcheong' (a combination of empathy, sympathy, and compassion) in Korean. Similarly, Hurtado de Mendoza et al.ย Hurtado2010 also raised questions about one-to-one translations of emotion concepts like `shame' in English and Spanish. For other language pairs like English and Arabic, Kayyal and Russellย Kayyal2013 did very similar studies and found that only one pair (happiness-farah) passed their equivalence tests, and other lexical pairs differed in terms of culture and language. For English and Indonesian, the emotion `happy' can be translated into several different words including `bahagia', `senang', `suka', `lega', `kesenangan', `gembira ria', `riang', `ceria', `patah hati', and `tenteram'ย <cit.>. They are not the same in meaning or style, so translating such words might lead to subtle emotional differences in the target language.
These studies reveal the challenges and importance of translating emotions or emotional lexica in cross-cultural communication. But very few studies focused on machine translation or the quality of machine translation regarding emotion preservation. Mohammad et al.ย Mohammad2016 examined sentiments in social media posts in both Arabic-English and English-Arabic translations, and they found that the change of sentiment was mainly caused by ambiguous words, sarcasm, metaphors, and word-reordering issues. Shalunts et al.ย Shalunts2016 also performed experiments to explore the impact of MT on sentiment analysis in German, Russian and Spanish using general news articles. They surprisingly found that the performance of the sentiment analysis tool on the source and the target was comparable, which indicated that the impact of machine translation on sentiment was not obvious.
Contrary to their result, Fukuda and Jinย Fukuda2022 found that sentiment was significantly affected by MT tools. More specifically, positive sentences tended to be more similar in sentiment polarity before and after translation than negative and neutral sentences. Apart from the aforementioned manual or sentiment score-based evaluation of emotion translation, Saadany et al.ย Saadany2021b proposed a sentiment-aware measure which can be used to adjust automatic evaluation metrics like BLEUย <cit.> for the evaluation of MT quality of user-generated content.
As can be seen above, most of the work does not focus on proposing a systematic human evaluation framework to assess the MT quality in terms of emotion preservation, especially not for Chinese-English translation. Our work focuses specifically on this particular use case.
ยง.ยง Quality Assessment of Machine Translation
In the MT area, there are several different automatic and human evaluation methods for assessing MT quality. Among the automatic evaluation metrics, BLEU is the most widely used tool for this purpose. However, BLEU has been criticised for its lack of recall and its "explicit word-matching between translation and references" <cit.>. Other metrics like ROUGE <cit.> and METEOR <cit.> were proposed as alternatives to BLEU, but they yield similar evaluations since they also rely on n-gram matching. More recently, since the rise of BERT-like models <cit.>, metrics like BERTScore <cit.> have been proposed to calculate the similarity between the candidate/hypothesis and the reference translation to evaluate MT quality.
An alternative way to measure quality is to determine how much post-editing is needed for the candidate translation to match the reference translation. Translation Edit Rate (TER), defined as "the minimum number of edits needed to change a hypothesis so that it exactly matches one of the references, normalized by the average length of the references" <cit.>, is a metric that measures this post-editing effort based on edit distance.
More recently, Direct Assessment (DA) <cit.> of the translation output, which provides a continuous score within a certain range after the annotator sees a candidate translation and a translation hint, has been used in various ways. It can be used directly to evaluate translation quality, as it is obtained from human annotators, and it is also used as an input for training quality estimation models in recent Conferences on Machine Translation[https://www.statmt.org/]. Apart from DA, the MQM framework <cit.> provides a more detailed evaluation methodology. It divides translation errors into dimensions such as accuracy, fluency, design, locale conventions, style, terminology, and verity. Each dimension consists of several error categories, such as addition, mistranslation, omission or untranslated under the accuracy dimension, and more fine-grained subcategories <cit.>. Each error falls into at least one of these categories and contributes to the overall rating of the translation. Error severity can be added as a weight to the rating according to the seriousness of the error, and an evaluation score can then be calculated to measure the overall translation quality. The practicality, reliability, and validity of this framework <cit.> have made it the choice of the translation industry and of MT evaluation research.
Nevertheless, all the above automatic methods were proposed without taking into account any elements of meaning or emotion, and human evaluation metrics were proposed for the assessment of general MT quality, which might be too generic or over-complicated for specific needs like emotion preservation.
ยง FRAMEWORK FOR QUALITY ASSESSMENT OF EMOTION-LOADED TEXTS
To evaluate the preservation of emotions, we modify the MQM framework <cit.> for the assessment of the MT quality of emotion-loaded microblog texts. Since our focus is on emotion preservation, we simplify the multidimensional metrics into one dimension, i.e., the accuracy of translating emotions. Our error types follow the accuracy dimension of MQM, i.e., addition, mistranslation, omission, untranslated and source error, but we only consider errors that affect emotion. For instance, an addition error is an error that adds information which does not exist in the source, where the added information affects the emotion in the target. Our severity levels follow the MQM suggestions: critical, major, and minor, indicating how severely the source emotion is affected by the error. We define them as follows:
* a critical error leads to an absolute change of emotion into a very different or even opposite emotion category;
* a major error pertains to a change of emotion into one that is not very different from the original emotion or one that is somewhere between the original emotion category and another different category;
* a minor error results in a slight change of emotion: the emotion label of the MT output is uncertain, but there is clearly a slight difference between the emotions of the source and the MT text.
Similar to the MQM translation quality scoreย <cit.>, we can also compute evaluation scores regarding emotion preservation by summing up all errors as per their severity level weights. Severity level weights are defined in the MQM framework and for this study, we define them as follows: 10 for critical errors, 5 for major errors and 1 for minor errors. The error rate or evaluation score of emotion translation can now be computed using Equationย <ref>. Examples of error annotation can be seen in the Appendix.
Error Rate = ∑_i=1^n Error_i × Weight_s / Text Length
Weight_s: weight given to each error according to its severity level
Text Length: count of all words and punctuation marks in the target text
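As an illustrative sketch (not part of the released annotation tooling), the score in Equation <ref> can be computed from a list of annotated errors as follows; the severity labels and the input format are assumptions made only for this example.

# Minimal sketch of the weighted error-rate computation; inputs are hypothetical.
SEVERITY_WEIGHTS = {"critical": 10, "major": 5, "minor": 1}

def error_rate(error_severities, target_text_length):
    """Weighted error rate for one MT output.

    error_severities: severity label of each annotated error in the entry.
    target_text_length: count of all words and punctuation marks in the target.
    """
    weighted_sum = sum(SEVERITY_WEIGHTS[s] for s in error_severities)
    return weighted_sum / target_text_length

# Example: one critical and one minor error in a 20-token target gives 0.55.
print(error_rate(["critical", "minor"], 20))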
ยง DATA AND METHODOLOGY
ยง.ยง Data Description
To evaluate the transfer of emotions, we need source texts that are full of emotions. The dataset for the Evaluation of Weibo Emotion Classification Technology at the Ninth China National Conference on Social Media Processing[<https://smp2020ewect.github.io/>] (SMP2020-EWECT) is an ideal source for our purposes. Provided by the Harbin Institute of Technology and sourced from Sina Weibo <cit.>, it was annotated with six emotion categories, namely anger, fear, joy, sadness, surprise and neutral.
Since the dataset is as large as 34,768 entries and includes Weibo posts with neutral emotions, we filter out the neutral posts and randomly sample 20 percent (about 5,500 entries) for machine translation and quality assessment. The distributions of emotion labels in our sampled dataset and in the original SMP2020-EWECT dataset are shown in Figure <ref>; the sample preserves the original label distribution. We use Google Translate[Results from "https://translate.google.co.uk/" on the 30th of May 2022.] to translate the source texts of our sampled dataset, and the output is used for quality assessment.
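A minimal sketch of this filtering and sampling step, assuming the SMP2020-EWECT data has been loaded into a pandas DataFrame; the file name and column names ("text", "label") are hypothetical and may differ from the released format.

import pandas as pd

# Hypothetical file and column names; the real dataset layout may differ.
df = pd.read_csv("smp2020_ewect_train.csv")           # columns: "text", "label"
df = df[df["label"] != "neutral"]                     # drop posts with neutral emotion

# Stratified 20% sample so that the emotion-label distribution is preserved.
sampled = (df.groupby("label", group_keys=False)
             .apply(lambda g: g.sample(frac=0.2, random_state=42)))
print(sampled["label"].value_counts(normalize=True))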
ยง.ยง Methodology
Re-annotating the emotions in the MT output may prove difficult in some cases, because some outputs do not make any sense to humans. For example, the MT output "Playing this old game, I just have no friends..." may not make much sense, and it is difficult to annotate it with an emotion label. However, a bilingual annotator can easily see that the emotion of the source "玩这个老游戏,我简直是开到没朋友…", which means "Playing this old game, I'm just too good to have rivals", is not present in the target. Therefore, we do not re-annotate the raw MT with emotion labels to check possible loss of emotions; instead, we assess the quality of the MT using the framework in Section <ref>.
Two annotators with Chinese-English translation qualifications were recruited to annotate error types and severity levels. All translation errors that affect the transfer of the original emotions were annotated in the MT output, together with their severity levels. Words or parts of the text in both source and target related to the translation errors were highlighted so that they could be used for error analysis. The annotators were given clear and detailed instructions about the decision process behind the annotations. We release the annotation guidelines along with the annotated dataset in our GitHub repository for inspection and reproducibility.
Since the perception of emotion usually varies considerably among people and across time, we randomly sampled 10% (about 550 entries) of the whole dataset for the inter-annotator agreement check and 100 entries for the intra-annotator agreement check, to measure how well annotators agree with each other and with themselves. The intra-annotator agreement was obtained by one annotator annotating the same 100 samples twice, two months apart.
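The agreement checks described above can be computed with scikit-learn's implementation of Cohen's Kappa; the parallel label lists in the sketch below are purely illustrative and not taken from the actual annotations.

from sklearn.metrics import cohen_kappa_score

# Hypothetical parallel severity annotations for the same entries,
# with "none" meaning that no error was marked.
annotator_1 = ["none", "minor", "major", "critical", "none", "major"]
annotator_2 = ["none", "none",  "major", "major",    "none", "major"]

print(cohen_kappa_score(annotator_1, annotator_2))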
ยง RESULT OF HUMAN EVALUATION
This section presents the results of the human evaluation of our Weibo dataset based on the framework and methodology proposed in the previous sections. We first report the inter- and intra-annotator agreement and then analyse the evaluation results from two aspects: 1) how many errors there are and how severe they are in terms of emotion category and error type; 2) which linguistic phenomena are the likely causes of these errors.
ยง.ยง Result of the Inter and Intra-Annotator Agreement
We use the Cohen Kappa score <cit.> to calculate the inter- and intra-annotator agreement. Table <ref> shows that the Kappa scores for intra-annotator agreement are very high, which means the annotator is consistent with himself/herself during annotation. Inter-annotator agreement is relatively lower, especially for error severity. We therefore compared the severity levels assigned by the two annotators and found that they are most likely to disagree on whether there is a minor error (or no error at all); disagreement on major/critical errors comes second. This may be partially because different people perceive emotions differently. To further analyse the reasons, we collected examples on which the annotators disagree.
One of the main causes is disagreement on a change of the subject of emotion. For example, the MT output of the source "吓死宝宝了", meaning "Scared me to death", is "Scared the baby to death". One annotator marks it as a minor error, while the other marks it as a major error. In this example, the subject of emotion should be "me" rather than a third party, "the baby", which might weaken the strong emotion and shift it from "fear" to somewhere between "fear" and "anger". Annotators are likely to disagree on the severity level of such cases.
Emotion conflicts caused by mistranslation are another problem on which annotators disagree. For instance, the source emotion of the post "我容易吗我 黑眼圈,青春痘,眉毛,皱纹 全在这两天爆出来了", which means "Life is so hard on me. Dark circles, pimples, eyebrows, wrinkles all had an explosive growth in the past two days", is sadness, but the MT output "I'm easy. I Dark circles, pimples, eyebrows, wrinkles have all exploded in the past two days" may contain both joy and sadness, two conflicting emotions. This causes disagreement on the severity level, as one annotator marks it as a critical error, while the other marks it as a major error.
A complete change of meaning in the target, while keeping an emotion similar to that of the source, is another major cause. For example, the emotion of the MT output "His mother got a leg and caught a cold again, mad at me" might be anger or sadness, which is similar to the emotion of the source "他妈了个腿的,又感冒了,气死我了", but the target meaning is completely different from that of the source, "F**k your mother, Cold again! I'm so pissed off". One annotator marks it as a critical error, while the other marks it as a major error.
ยง.ยง Error Statistics
After annotating each entry of the dataset, we collect all error entries and display error statistics in the following figures to see 1) how many examples are incorrectly translated; 2) which type of error is most common; 3) which emotion category is less likely to be mistranslated; and 4) which error type is more critical.
Figure <ref> shows that the MT quality of these texts is unsatisfactory: about 50% of the entries contain errors in preserving emotions, and 41.58% contain major or critical errors.
According to the left chart in Figure <ref>, mistranslation is the most common error type across severity levels, followed by omission. In the right bar chart of Figure <ref>, we normalise the number of errors of each type within each emotion category against the total number of errors. The pattern is very similar for all emotion categories, which again suggests that mistranslation is the most common error type, with omission second.
In the left bar chart of Figure <ref>, the number of errors is divided by the number of instances in each emotion category to show the proportion of errors in that category. We see that `joy' accounts for the fewest errors despite having the second largest number of entries, which means that social media texts with the emotion of `joy' are more likely to be translated correctly by Google Translate than texts in the other emotion categories. This is further supported by the right chart of Figure <ref>, where normalised counts of severity levels are plotted for each emotion category. From critical errors to no error, as the severity level decreases, the share of `joy' increases, which suggests that errors in the `joy' category are more likely to be minor. For entries without errors, `joy' takes the largest percentage among all emotion categories. This result is consistent with the study by Fukuda and Jin Fukuda2022, which indicated that positive sentences are less likely to be affected by MT than negative and neutral sentences.
In Figure <ref>, we normalise the number of errors at each severity level for each error type against the total number of errors. For all error types except addition, critical errors take the largest share. In the addition category, minor errors far outnumber critical errors, which means addition errors are less likely to have a severe impact on emotions; this may be because the original emotion is not changed much if some extra words are simply added to the target text. For the untranslated category, critical errors far outnumber the other severity levels, which suggests that untranslated errors affect the transfer of emotion quite severely.
ยง.ยง Analysis of Error Causes
In this section, we investigate linguistic phenomena that are responsible for the translation errors in the MT output based on annotation described in Sectionย <ref>. We first discuss errors caused by emotion carrying words and then by other linguistic phenomena.
ยง.ยง.ยง Emotion Carrying Words
To find the most common causes of these translation errors, we collect all the words and sentences identified during annotation as corresponding to an error and locate where each error occurs. We count the frequency of these words and sentences and calculate the percentage of erroneous entries in which they appear, as shown in Table <ref> and Table <ref>.
We can see from the "Human Translation"[Human translations here and in the rest of the paper are provided by a professional translator.] column in Table <ref> that almost all of the frequent words are emotion carrying words. Some of them, including the most frequent word "尼玛", are emotional slang created by homophone character substitution <cit.>. Others, such as "居然" and "竟然", are emotional adverbs used to express strong feelings. Many of these emotion carrying words (the top five) account for a large percentage of the erroneous entries; for example, "尼玛" appears in 2.19% of the entries with emotion translation errors.
Table <ref> shows the 5 most frequent phrases among the erroneous examples. These phrases also contain slang or adverbs that convey strong emotions. From both tables, we observe that emotion carrying words pose a strong challenge to translation.
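An illustrative sketch of how such counts can be produced, assuming the highlighted source-side spans are available per erroneous entry; the variable names and the toy data below are assumptions, not the released annotation format.

from collections import Counter

# Hypothetical structure: for each erroneous entry, the source-side words or
# phrases highlighted during annotation as being linked to the error.
error_spans_per_entry = [["尼玛"], ["居然", "竟然"], ["尼玛", "醉了"]]

counts = Counter(span for spans in error_spans_per_entry for span in spans)
n_error_entries = len(error_spans_per_entry)
for span, freq in counts.most_common(5):
    print(span, freq, f"{100 * freq / n_error_entries:.2f}% of erroneous entries")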
ยง.ยง.ยง Other Linguistic Phenomena
Other linguistic phenomena, such as polysemous words, abbreviation, negation, subject/object issues, the subjunctive mood and punctuation problems, also play a role in causing errors in emotion translation.
Polysemous Words
Polysemous words, especially those with several different meanings, can easily be mistranslated, which might result in a change of the original emotion. In the following example, the character "疼" in the source literally means "hurt", but in Chinese culture it can represent an emotion called "heart-aching love", which refers to the love that children get from their doting parents or lovers get from their partners <cit.>. MT clearly mistranslates the source emotion.
Source Text (ST): ไปไธชๅฅณไบบ่ฏดไผ็ผๆไธ่พๅญ
Machine Translation (MT): Tell a woman that she will hurt me for the rest of my life
Human Translation (HT): This woman said she will love me for the rest of her life.
Abbreviation
Internet slang in Chinese can be created by abbreviation, which shortens a longer expression into a word or phrase. In the source of the following example, "活久见", literally meaning "live long see", is an abbreviation of "活的时间久什么事都可能见到", which is often used to imply surprise. Mistranslation of this abbreviation by MT leads to the misunderstanding and change of the source emotion.
ST: 活久见,我还是比较适合高冷。就一个人喜欢我萌。晚安
MT: See you for a long time, I am still more suitable for high cold. The only one who likes me is cute. Good night
HT: If you live long enough, you can see anything unexpected. I am more suitable for being cool. Only one person sees me as cute. Good night.
Negation
Mistranslation of negation is a known problem for MT, affecting both emotion preservation and the understanding of a text. In the following example, the source character "好" means "very", not its common meaning of "good", and "不" is the negation word; but in the MT result, "好" is kept as "good" rather than the correct meaning of "very", and the negation is omitted.
ST: 心情好不爽
MT: I'm in a good mood
HT: I'm in a very bad mood.
Subject/Object Issues
Since Chinese is not a subject-prominent language <cit.>, omission of the subject is quite a common phenomenon in Chinese, especially in informal texts. In the following example, the omission of the subject in the source causes the subject and object to be swapped in the MT output and results in a change of the emotion subject. This further affects the emotion of the MT output, which becomes closer to fear than to anger.
ST: ๆๆไธไธ่ฝๆญปๅ
MT: Can I die if I pull
HT: Will you die if you pull me up?
Subjunctive Mood
Chinese does not have syntactic markers for counterfactual conditionals, such as the subjunctive mood in English <cit.>. The source text expresses a counterfactual about finishing in first place, but the machine translation does not render it in the English subjunctive mood, affecting the transfer of the original anger emotion.
ST: ๅ่ทไธๅฐ็ฌฌไธๆๅจๆๅ้ข็้ฝๅ ไบ
MT: I can't run the first one. I deleted the one in front of me.
HT: If I didn't run the first place, I would delete all those who run ahead of me.
Punctuation Problems
Nonstandard use of punctuation in Chinese microblogs is another challenge posed to emotion translation. The following source text is separated by exclamation marks, which express strong emotion, but in the MT output each separated character is treated as an independent sentence. Such mistranslation changes the original emotion, and the character "好", meaning "very", is again translated as "good".
ST: ๆ๏ผๅฅฝ๏ผ้ฅฟ๏ผ๏ผ๏ผ๏ผ๏ผ
MT: I! it is good! hungry! ! ! ! !
HT: I AM SO HUNGRY!!!!!
The following example shows problems caused by the lack of punctuation. Since there are no spaces between Chinese characters, it is difficult for MT systems to tokenise the sentence. The lack of punctuation in some entries of the dataset seems to be highly correlated with the frequent omission of emotion-loaded parts of the text.
ST: 到底什么时候去考试又又是忽悠我别再拖下去没心情去考试
MT: When are you going to take the test
HT: When are we going to take the exam? Always fooling me. I would be in a bad mood if it postponed again.
Hallucination
Hallucination <cit.> is a common problem for neural machine translation, but it is rarely seen in this dataset. We only found the following example of hallucination, which is probably caused by the continuous repetition of some characters, since the MT result keeps changing as we edit the repetitive characters. Hallucination is clearly a problem for the preservation of the source emotion.
ST: ๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅๆฌกๅฅฅ็็นไนๆฏ้ไบ
MT: 200022000
HT: WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF WTF I'm f**king speechless.
ยง CONCLUSION AND FUTURE WORK
Our work investigates the performance of MT engines on the translation of emotion-loaded texts. We propose a new framework for evaluating MT quality in terms of emotion preservation, developed in line with the MQM evaluation framework, perform a manual evaluation of the MT output, and present a detailed error analysis. We observe which error types are the most common and which emotion categories are more likely to be translated correctly by MT, and our analyses describe the linguistic factors, such as emotion carrying words and subject omission, that cause errors when translating microblog texts loaded with emotions. Furthermore, the annotated bilingual dataset can be used for training quality estimators to automatically assess translation quality in terms of emotion preservation. In future work, we aim to extend this dataset with reference translations and use it to train computational models for estimating the translation quality of emotion-loaded texts. We also plan to conduct further research and more analyses to improve the proposed framework.
|
http://arxiv.org/abs/2306.11319v1
|
20230620062923
|
Primordial Density Perturbations from Magnetic Fields
|
[
"Tal Adi",
"Hector Cruz",
"Marc Kamionkowski"
] |
astro-ph.CO
|
[
"astro-ph.CO"
] | |
http://arxiv.org/abs/2306.01897v1
|
20230602200423
|
Two-photon quantum gates with a single atom
|
[
"Arkan Hassan",
"Julio Gea-Banacloche"
] |
quant-ph
|
[
"quant-ph"
] |
Department of Physics, University of Arkansas, Fayetteville, AR 72701
A single atom in the V configuration exhibits a natural optical nonlinearity even down to the two-photon level. We explore in detail the possibility of using this for a controlled-phase quantum logic gate, by having the atom interact with single-mode, single-photon fields for a prescribed time.
Two-photon quantum gates with a single atom
Arkan Hassan and Julio Gea-Banacloche
July 31, 2023
===========================================
ยง INTRODUCTION AND SUMMARY
Single photons are, in many ways, ideal systems to be used as qubits for quantum information processing tasks, but the lack of a direct interaction between photons makes conditional logic operations challenging. The Kerr effect between photons traveling in a nonlinear medium was proposed early on <cit.>, but it has been shown that, even if a strong enough nonlinearity could be achieved in a conventional material, the unavoidable phase noise attending such nonlinearities scales badly to the single-photon level, to the point of making conditional gates essentially impossible <cit.>.
Single atoms do exhibit large nonlinearities in their interaction with few-photon systems, but again their coupling to single photons in free space is too small to engineer deterministic gates. By placing the atom in a cavity (or waveguide) to enhance the coupling, conditional logical gates can indeed be realized, in particular, in protocols where the atom interacts sequentially with the two photons <cit.>, and some of these schemes have been demonstrated experimentally <cit.>; however, the realization of a conditional gate between two photons interacting simultaneously with an atom remains a challenge. The main conceptual difficulty here is that the interaction of the incident, traveling wavepackets with the atom-cavity system invariably results in spectral entanglement between the photons, which substantially reduces the fidelity of the operation, as the outgoing wavepackets no longer match the spectral profile of the incoming ones; see, e.g., <cit.> for specific examples. (This spectral entanglement was also identified as a generic difficulty for nonlinear interactions between single-photon wavepackets in <cit.>; since then, a number of solutions have been proposed, typically for wavepackets traveling in one-dimensional geometries, and involving either a nonlocal continuous medium <cit.>, or successive interactions with periodic arrays of few-level scatterers <cit.>.)
A potential solution to the atom-cavity problem would be to, in effect, separate the process of getting the photons in and out of the cavity from the nonlinear interaction between the photons (mediated by the atom) once they are in the cavity. To this end, one could adopt the methods proposed in <cit.> to load and unload the cavity with essentially negligible wavepacket distortion, and then use, e.g., a large detuning produced by an external applied field to effectively "turn the interaction with the atom on and off." We show here that, if this can be done, under ideal conditions (namely, no losses during the interaction, either by leakage out of the cavity or emission into non-cavity modes, and precise control of the interaction time), a V-level atom could mediate a conditional phase (CPHASE) gate between two photons with different polarizations with, in principle, arbitrarily high fidelity. As explained in Section II, our result is based on the fact that efficient rational approximations exist to the irrational frequencies appearing in the state evolution coefficients.
We note that the possibility of using a precise control of the interaction time between two photons, each described by a single temporal mode, and an ensemble of N atoms, to effect a CPHASE gate was originally suggested by Ottaviani et al. <cit.>, and in fact it was this work that provided the original motivation for our research. As we show below, however, the scheme only works optimally for a single atom, as adding atoms in effect "dilutes" the nonlinearity of the system until, in the limit of very large N, it essentially vanishes (i.e., the phase shift for two photons is just twice the phase shift for a single photon). This, the second main result of our paper, is established in detail for the V-system in Section III. In Section IV, and for completeness, we also explore the suggestion of <cit.> of extending the system to five levels with a couple of strong driving fields, to possibly take advantage of electromagnetically-induced transparency (EIT) to mitigate the loss of fidelity due to spontaneous emission.
Although for most of the paper we concentrate on the V system (appropriate for a polarization encoding), we note that our method would also work, in principle, with an ordinary two-level atom and a two-rail encoding, using a scheme similar to that proposed in <cit.>. This might, therefore, serve as an alternative to the scheme proposed in Section VI.C of <cit.>. This and other related issues are further discussed in the Conclusions (Section V).
ยง C-PHASE GATE USING A SINGLE V-TYPE ATOM
ยง.ยง The lossless case.
In this section we consider the interaction of one atom in the V configuration (as shown in Figure 1), with two field modes, a and b, distinguishable by their polarization, which can initially contain at most one photon each. The single-temporal mode assumption implies that the interaction takes place in an optical cavity. This can, in principle, be set up by first loading the photonic traveling fields into the cavity, using techniques such as described in Refs.ย <cit.>; the frequency-conversion process used to this end needs to preserve the individual polarizations. The finesse of the cavity at the frequency of the a and b modes is assumed to be essentially infinite. We also assume this frequency to be far detuned from the initial atomic transition frequency, so no interaction between the atom and the fields takes place during the loading process. At t=0 we assume the atom is brought into (near) resonance with the fields, and hence allowed to interact with them for 0<t<T. At T the interaction is stopped, by again bringing the atom far from resonance, and the fields extracted from the cavity by the reverse of the frequency-conversion process used to load them in.
We assume a symmetric arrangement with equal detunings and coupling constants for both modes, for simplicity (the problem is analytically solvable also without these assumptions, and we have verified that the best results are obtained under these conditions). The Hamiltonian, in an interaction picture for the fields, is then
H = ฤงฮด|e_aโฉโจe_a| + ฤงฮด|e_bโฉโจe_b| + ฤง g (|e_aโฉโจg| a + |e_bโฉโจg| b + H.c.)
With one photon initially in each mode and the atom starting in the ground state |gโฉ, the system's state at any later time can be written as
|ฮจ(t)โฉ = C_ea^(2) |01โฉ|e_aโฉ + C_eb^(2) |10โฉ|e_bโฉ + C_g^(2) |11โฉ |gโฉ
leading to the equations of motion
ฤ_ea^(2) = - i ฮดC_ea^(2) - i g C_g^(2)ฤ_g^(2) = -i g C_ea^(2) - i g C_eb^(2)ฤ_eb^(2) = -i ฮดC_eb^(2) - i g C_g^(2)
In the case only one of the photons, say, the a photon, is initially present, the relevant equations would be instead
ฤ_ea^(1) = - i ฮดC_ea^(1) - i g C_g^(1)ฤ_g^(1) = -i g C_ea^(1)
These equations are easily solved, with the results, for the single-photon case,
C_g^(1)(t) = e^-iδ t/2 [ (iδ/2ω_1) sin ω_1 t + cos ω_1 t ]
C_ea^(1)(t) = -i g e^-iδ t/2 sin(ω_1 t)/ω_1
and, for the two-photon case,
C_g^(2)(t) = e^-iδ t/2 [ (iδ/2ω_2) sin ω_2 t + cos ω_2 t ]
C_ea^(2)(t) = C_eb^(2)(t) = -i g e^-iδ t/2 sin(ω_2 t)/ω_2
where
ฯ_1 = 1/2โ(ฮด^2 + 4 g^2) ฯ_2 = 1/2โ(ฮด^2 + 8 g^2)
In order to carry out a successful conditional phase gate on this pair of photons, we would ideally like to choose an interaction time T such that the atom returns to the ground state with unit probability at that time, independently of whether one or both photons are initially present, so that
C_g^(1)(T) = e^iฯ_1, C_g^(2)(T) = e^iฯ_2
and with the phases ฯ_1 and ฯ_2 such that
ฯ_2 - 2ฯ_1 = ฯ
While it is not actually possible to satisfy Eqs.ย (<ref>) and (<ref>) exactly for a finite interaction time T, it is, in principle, possible to get arbitrarily close for a sufficiently long T. This is most easily seen in the resonant case, ฮด =0, where C_g^(1) = cos(gt) and C_g^(2) = cos(โ(2) gt). Eqs.ย (<ref>) and (<ref>) would then be satisfied if we could have simultaneously gt=nฯ, with n an integer (so ฯ_1 =ฯ or 2ฯ), and โ(2) g t = mฯ, with m an odd integer. This would require โ(2) to be of the form m/n, i.e., to be a rational number, which it is certainly not; however, a result in number theory <cit.> shows that there exist rational approximations to โ(2) (and, indeed, to any irrational number) with the property that
| โ(2) - m/n| < 1/n^2
with n arbitrarily large. One then only has to choose one such approximation with odd m, and let gT = nฯ, to have |C_g^(1)(T)|=1 and C_g^(2)(T) = cos(mฯ+ฯต), where ฯตโผ 1/n. As an example, the relatively rough approximation โ(2)โ 17/12 already yields C_g^(2)(T) = cos(12 โ(2) ฯ)=-0.9957, for gT=12ฯ. (Interestingly, all the terms m/n in the series (<ref>) for โ(2) obtained by a continued fraction expansion actually have odd numerators.)
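This argument can be checked numerically; the following sketch (ours, for illustration only) generates the continued-fraction convergents m/n of √2 and evaluates C_g^(2)(T) = cos(√2 gT) at gT = nπ, where |C_g^(1)(T)| = 1 on resonance. (The fidelity maxima quoted below sit at slightly different values of gT, since the full optimization trades off both conditions.)

import numpy as np

def sqrt2_convergents(num_terms):
    # sqrt(2) = [1; 2, 2, 2, ...]; successive convergents m/n: 3/2, 7/5, 17/12, 41/29, ...
    m_prev, m = 1, 1
    n_prev, n = 0, 1
    out = []
    for _ in range(num_terms):
        m_prev, m = m, 2 * m + m_prev
        n_prev, n = n, 2 * n + n_prev
        out.append((m, n))
    return out

for m, n in sqrt2_convergents(4):
    gT = n * np.pi                    # then C_g^(1)(T) = cos(gT) = +/-1 on resonance
    c2 = np.cos(np.sqrt(2) * gT)      # C_g^(2)(T) on resonance
    print(f"m/n = {m}/{n}: gT = {gT:.2f}, C_g^(2)(T) = {c2:.4f}")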
As this example illustrates, trying to satisfy separately the conflicting requirements (<ref>) and (<ref>) inevitably leads to tradeoffs. A way to quantify, in a single number, the potential impact of these tradeoffs is provided by a gate fidelity that can be defined, for this system, in the following way. Assume the initial state of the atom-field system to be |ฮจ(0)โฉ = (ฮฑ_00|00โฉ + ฮฑ_01|01โฉ + ฮฑ_10|10โฉ + ฮฑ_11|11โฉ)|gโฉ. Then, ideally, we'd want the state of the field at the time T to be
|ฮฆ_idealโฉ = ฮฑ_00|00โฉ + ฮฑ_01e^iฯ_1|01โฉ + ฮฑ_10e^iฯ_1|10โฉ - ฮฑ_11e^2iฯ_1|11โฉ
where the phase ฯ_1 is arbitrary, and given by e^iฯ_1 = C^(1)_g/|C^(1)_g|; what matters is that Eq.ย (<ref>) be satisfied, as indicated by the minus sign in Eq.ย (<ref>).
We can then define the gate fidelity (root-mean square) by
F = (โจฮฆ_ideal|ฯ_f|ฮฆ_idealโฉ)^1/2
where the overbar means an average over the random coefficients ฮฑ_ij of the initial state (which we assume to be uniformly distributed in magnitude between zero and 1, with random phases, and satisfying โ |ฮฑ_ij|^2 =1), and ฯ_f = Tr_at(|ฮจ(T)โฉโจฮจ(T)|) is the reduced density operator for the field after tracing over the atomic states. We find
F^2 = 1/10 ( 1 + 3|C_g^(1)|^2 + |C_e^(1)|^2 + |C_g^(2)|^2 + |C_e^(2)|^2 + 2|C_g^(1)| - (1 + 2|C_g^(1)|)/|C_g^(1)|^2 Re[ (C_g^(1)*)^2 C_g^(2) ] )
where |C_e^(1)|^2 and |C_e^(2)|^2 are the occupation probabilities of either state |e_aโฉ or |e_bโฉ in the single- and two-photon cases, respectively. (Note that, in the lossless case considered in this subsection, we can use |C_e^(1)|^2 + |C_g^(1)|^2 =1 and 2|C_e^(2)|^2 + |C_g^(2)|^2 =1 to further simplify the result (<ref>).)
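As a numerical illustration (a sketch we add here, not code from the original analysis), the closed-form amplitudes and the fidelity expression above can be evaluated directly; the two sample points below correspond to maxima discussed in the surrounding text.

import numpy as np

def amplitudes(g, delta, t, n_photons):
    """Lossless C_g and C_e from the closed forms above (n_photons selects omega_1 or omega_2)."""
    omega = 0.5 * np.sqrt(delta**2 + 4 * n_photons * g**2)
    phase = np.exp(-1j * delta * t / 2)
    c_g = phase * (1j * delta / (2 * omega) * np.sin(omega * t) + np.cos(omega * t))
    c_e = -1j * g * phase * np.sin(omega * t) / omega
    return c_g, c_e

def gate_fidelity(g, delta, t):
    c_g1, c_e1 = amplitudes(g, delta, t, 1)
    c_g2, c_e2 = amplitudes(g, delta, t, 2)
    x = np.real(np.conj(c_g1)**2 * c_g2)
    f2 = (1 + 3 * abs(c_g1)**2 + abs(c_e1)**2 + abs(c_g2)**2 + abs(c_e2)**2
          + 2 * abs(c_g1) - (1 + 2 * abs(c_g1)) / abs(c_g1)**2 * x) / 10
    return np.sqrt(f2)

print(gate_fidelity(1.0, 0.0, 6.473))      # ~0.986 (first resonant maximum)
print(gate_fidelity(1.0, 1.3881, 18.007))  # ~0.998 (best peak in the scanned region)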
Figure 2 shows what F looks like as a function of ฮด/g and gT. The maxima visible along the vertical axis (ฮด =0), specifically for gT=6.473 and 15.629, correspond to the first two continued fraction approximations to โ(2), namely, 3/2 and 7/5, and despite their crudeness they already yield F = 0.9856 and 0.9975, respectively. The next term, 17/12, mentioned above but not visible in the figure, as it corresponds to the rather large gT โ 12ฯโ 37.7, would yield F = 0.9996.
Also of potential interest are the fidelity peaks visible in Figure 2 for nonzero detuning ฮด. The logic behind these peaks can be understood along similar lines to the zero-detuning case. If, for some ฮด and T, one can make ฯ_1 T โ nฯ and ฯ_2 T โ mฯ, then by Eqs.ย (<ref>) and (<ref>) one has C^(1)(T) โ (-1)^n e^-iฮด T/2 and C^(2)(T) โ (-1)^m e^-iฮด T/2. In that case, 2ฯ_1 โ -ฮด T and ฯ_2 โ -ฮด T/2 if m is even, or ฯ_2 โ -ฮด T/2+ฯ if m is odd. The conditions (<ref>) and (<ref>) will then be satisfied if one can find three integers, m,n and q, such that
1/2โ(ฮด^2 + 4g^2)T โ nฯ1/2โ(ฮด^2 + 8g^2)T โ mฯ1/2ฮด T โ qฯ
with m and q of opposite parity. This requires n,m and q to approximately satisfy
2 n^2 = m^2 + q^2
with m>n>q.
All the maxima seen in Figure 2 correspond to approximate solutions of this equation, and it is easy to see that the approximations can be improved, and the gate fidelity along with them, indefinitely, for long enough times, as in the ฮด =0 case. For example, letting n=3q Eq.ย (<ref>) becomes 17q^2 = m^2, and one can then choose m and q from the successive optimal approximations to โ(17): 4/1, 33/8, 268/65,โฆ. The first one, m=4, q=1, when substituted in (<ref>), yields ฮด/g=0.707 and gT=8.886; the actual maximum of F is at ฮด/g=0.699 and gT =8.762, and equals 0.9924. (For reference, the largest maximum of F in the region shown in Fig.ย 2 occurs at ฮด/g = 1.3881 and gT = 18.007, and equals 0.9984; the corresponding values of n,m and q in Eq.ย (<ref>) are (7,9,4).)
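A brute-force search for such triples is straightforward; the sketch below (illustrative only) enumerates near-solutions of 2n^2 = m^2 + q^2 with m and q of opposite parity and converts each into the candidate operating point δ/g = 2q/√(n^2−q^2), gT = π√(n^2−q^2) implied by the conditions above.

import numpy as np

candidates = []
for n in range(2, 12):
    for q in range(1, n):
        for m in range(n + 1, 2 * n):
            if (m + q) % 2 == 1 and abs(2 * n**2 - m**2 - q**2) <= 1:
                candidates.append((n, m, q,
                                   2 * q / np.sqrt(n**2 - q**2),   # delta/g
                                   np.pi * np.sqrt(n**2 - q**2)))  # gT
for n, m, q, d, t in candidates:
    print(f"(n,m,q)=({n},{m},{q})  delta/g = {d:.3f}  gT = {t:.2f}")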
ยง.ยง Impact of losses (spontaneous emission)
The possibility that the atom may decay by emitting a photon into a mode other than a or b can be approximately treated by making the replacement δ→ - i γ + δ in Eqs. (<ref>) and (<ref>) (note that γ here is an amplitude decay rate). This "pure-state approximation," which ignores the fact that the atom must return to the ground state after a spontaneous emission event, is essentially correct for the single-photon case, since in that case the atom's return to the ground state with no photons left in the system means no further evolution is possible. In the two-photon case, however, there is still one photon left to interact with the atom after a single spontaneous emission event, and an exact calculation of the fidelity requires a density-matrix treatment.
As shown in the appendix, the result of this density matrix treatment is an expression for the gate fidelity that includes the same terms shown in Eq.ย (<ref>), calculated in the same way (with only the replacement ฮดโ - i ฮณ + ฮด), plus additional terms:
F^2 = [Eq. (13) with δ → -iγ + δ] + (1/10) ρ^(1)_00g,00g + (1/20) ρ^(2)_00g,00g + (1/10) ρ^(2)_00e,00e + (1/10) ρ^(2)_10g,10g
Here, ฯ^(1)_00g,00g (probability to be in the ground state with zero photons, when starting from the ground state with one photon) is calculated from the single-photon results as
ฯ^(1)_00g,00g = 2ฮณโซ_0^t |c_e^(1)|^2
and the other terms correspond to the two-photon case (equations of motion for them are given in the appendix). Note that the symmetry of the system has been used throughout; in particular, 1/10ฯ^(2)_10g,10g = 1/10ฯ^(2)_01g,01g, where the first pair of subscripts refer to the a and b photons, respectively.
Figure 3 shows the gate fidelity as a function of ฮณ calculated from the density matrix, optimized at every point over T and ฮด, with 0โค gT โค 20 and 0โคฮด/g โค 2 (the parameter region covered in Figure 2). Specifically, in the range 0โคฮณ/g โค 0.005, we have taken gT = 18.01, ฮด =1.388g; between ฮณ = 0.005g and ฮณ =0.015g, we take gT = 8.76, ฮด =0.7g; between ฮณ = 0.015g and ฮณ =0.07g, gT = 6.473, ฮด =0; and between ฮณ = 0.07g and ฮณ =0.155g, gT = 2.695, ฮด =0. These choices of T and ฮด roughly correspond to the different maxima shown, for ฮณ=0, in Fig.ย 2; note how as ฮณ increases, a shorter time evolution is favored, as well as (eventually) the choice ฮด = 0.
Besides the "unconditional" fidelity just discussed, one may be interested in the conditional fidelity, that is, the (gate) fidelity that one would obtain in a run of the experiment in which no photons were lost to spontaneous emission. This can be calculated easily, by ignoring the additional terms in Eq. (<ref>) and renormalizing the pure-state wavefunction (with δ→ - i γ + δ) before calculating (<ref>). The result is shown as the dashed line in Figure 3, for the same values of T and δ as the corresponding unconditional fidelity. The stepwise decreases seen there are due to the fact that the successive choices of T and δ made as γ increases become optimal because they reduce the probability of a spontaneous emission relative to the previous choice, but they do lead to a smaller gate fidelity when spontaneous emission does not happen at all.
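For reference, the conditional fidelity can be sketched directly from the closed-form amplitudes with δ → δ − iγ, renormalizing each photon-number sector before inserting it in the fidelity expression; this is our reading of the prescription above, offered as an illustrative sketch rather than the density-matrix code of the Appendix.

import numpy as np

def lossy_amplitudes(g, delta, gamma, t, n_photons):
    """Closed-form amplitudes with the pure-state substitution delta -> delta - i*gamma."""
    d = delta - 1j * gamma
    omega = 0.5 * np.sqrt(d**2 + 4 * n_photons * g**2 + 0j)
    phase = np.exp(-1j * d * t / 2)
    c_g = phase * (1j * d / (2 * omega) * np.sin(omega * t) + np.cos(omega * t))
    c_e = -1j * g * phase * np.sin(omega * t) / omega
    return c_g, c_e

def conditional_fidelity(g, delta, gamma, t):
    c_g1, c_e1 = lossy_amplitudes(g, delta, gamma, t, 1)
    c_g2, c_e2 = lossy_amplitudes(g, delta, gamma, t, 2)
    n1 = np.sqrt(abs(c_g1)**2 + abs(c_e1)**2)          # renormalize the 1-photon sector
    n2 = np.sqrt(abs(c_g2)**2 + 2 * abs(c_e2)**2)      # renormalize the 2-photon sector
    c_g1, c_e1, c_g2, c_e2 = c_g1 / n1, c_e1 / n1, c_g2 / n2, c_e2 / n2
    x = np.real(np.conj(c_g1)**2 * c_g2)
    f2 = (1 + 3 * abs(c_g1)**2 + abs(c_e1)**2 + abs(c_g2)**2 + abs(c_e2)**2
          + 2 * abs(c_g1) - (1 + 2 * abs(c_g1)) / abs(c_g1)**2 * x) / 10
    return np.sqrt(f2)

for gamma in (0.0, 0.02, 0.05):
    print(gamma, conditional_fidelity(1.0, 0.0, gamma, 6.473))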
Figure 3 clearly shows that one needs to have a very small ratio of ฮณ to g in order to have a substantial unconditional fidelity in this setup. For this reason, in the next couple of sections we will discuss schemes that have been proposed to either enhance g or reduce ฮณ, always in the context of a single-mode treatment of the two quantized fields.
ยง MULTIPLE V-TYPE ATOMS
It is well known that, for some atom-field interaction processes, having a large number of atoms N at a given field location (in a volume small compared to the wavelength) leads to an effective enhancement of the atom-field coupling g by a factor of โ(N). However, while this is true for linear processes, and even for some nonlinear processes when the density of photons is sufficiently large, it does not work for single-photon nonlinear processes like the one considered here.
The key fact that needs to be appreciated is that, in this scheme, in order for one photon to affect the other they both need to be interacting with the same atom. The essence of the V-atom nonlinearity is that an individual atom cannot absorb, say, an a photon if it has absorbed a b photon. Introducing more atoms, under those circumstances, would indeed make it possible for any of them to interact with the single a photon, but at that point it would not matter whether they were three-level V atoms or just two-level atoms: the joint interaction presented in Section II.A would not be happening, nor would the associated conditional phase shift.
In fact, since introducing more atoms actually increases the chances that the two photons will interact with separate atoms, one expects the nonlinearity (i.e., the conditional phase shift) to go down as N increases. This can indeed be shown to be the case, formally, as follows:
Let the Hamiltonian for the N-atom, two-photon system be
H= ฤงฮดโ_i=1^N|e_aโฉ_iโจe_a| + ฤงฮดโ_i=1^N|e_bโฉ_iโจe_b| + ฤง gโ_i=1^N (|e_aโฉ_iโจg| a + |e_bโฉ_iโจg| b + H.c. )
where the bras and kets shown act only on the space of the i-th atom. Under the simplest assumption that both transitions have identical strengths and detunings, the state vector of the system when only one photon (say, a) is initially present and the atoms all start in the collective ground state |g_allโฉ, can be written as
|ฮจ(t)โฉ^(1) = C_g^(1)(t) |g_allโฉ|1โฉ_a + C_e^(1)(t) |ฯ_aโฉ|0โฉ_a
Here
|ฯ_aโฉ = 1/โ(N)( โ_i=1^N |e_aโฉ_iโจg|) |g_allโฉ
denotes a normalized, completely symmetric state in which one of the atoms is in the excited state |e_aโฉ, and all the others are in the ground state. The state |ฯ_bโฉ is defined analogously. When both photons are initially present, the evolution of the system is
|ฮจ(t)โฉ^(2) = C_g^(2)(t) |g_allโฉ|11โฉ_ab + C_e^(2)(t) |ฯ_aโฉ|01โฉ_ab + C_e^(2)(t) |ฯ_bโฉ|10โฉ_ab +C_ee^(2)(t)|ฯ_abโฉ|00โฉ_ab
where
|ฯ_abโฉ
=1/โ(N-1)(โ_i=1^N |e_aโฉ_iโจg|) |ฯ_bโฉ =1/โ(N-1)(โ_i=1^N |e_bโฉ_iโจg|) |ฯ_aโฉ
is again a symmetric, normalized state in which one atom is in state |e_aโฉ, another one in state |e_bโฉ, and the rest in the ground state. From the basic nature of |ฯ_aโฉ, |ฯ_bโฉ and |ฯ_abโฉ, and taking into account their normalization, it is clear that the following results hold:
(โ_i=1^N |e_aโฉ_iโจe_a|) |ฯ_aโฉ = |ฯ_aโฉ(โ_i=1^N |e_aโฉ_iโจe_a|) |ฯ_abโฉ = |ฯ_abโฉ(โ_i=1^N |gโฉ_iโจe_a|) |ฯ_aโฉ = โ(N)|g_allโฉ(โ_i=1^N |gโฉ_iโจe_a|)|ฯ_abโฉ = โ(N-1)|ฯ_bโฉ
so the Schrรถdinger equation, with the Hamiltonian (<ref>), yields
ฤ_e^(1) = - i gโ(N) C_g^(1) - i ฮด C_e^(1)ฤ_g^(1) = - i g โ(N) C_e^(1)
for the single-photon case, and
ฤ_ee^(2) = -2iฮด C_ee^(2) -2igโ(N-1) C_e^(2)ฤ_e^(2) = -iฮด C_e^(2) -igโ(N-1) C_ee^(2) - igโ(N) C_g^(2)ฤ_g^(2) = -2igโ(N) C_e^(2)
for the two-photon case.
In the limit where N is large, so that one can approximate โ(N-1)โโ(N), it is easy to see that the solutions of Eqs.ย (<ref>) and (<ref>) satisfy
C_g^(2) = C_g^(1)^2, C_e^(2) = C_g^(1)C_e^(1), C_ee^(2) = C_e^(1)^2
which means the N-atom two-photon system reduces, formally, to two independent N-atom, one-photon systems. In particular, we see from the first of Eqs. (<ref>) that one will always have ϕ_2 = 2 ϕ_1, and hence no conditional phase (i.e., the nonlinearity, in effect, vanishes in this limit). Of course, expanding the square root shows that the approximation √(N-1)≈√(N) ceases to be valid for times such that gt/√(N)∼ 1, but the whole point of bringing in N atoms was to increase the effective coupling so one could have a substantial effect for times such that g√(N) t ∼ 1 while γ t ≪ 1; if one has to wait for times t∼√(N)/g, then the requirement γ t ≪ 1 becomes even harder to satisfy than in the single-atom problem. (It is easy to see that the arguments presented above apply also in the presence of spontaneous emission, at least in the "pure state approximation" where one simply replaces δ by -iγ+δ.)
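The collapse of the conditional phase can also be checked numerically. The sketch below (illustrative, not from the paper) integrates the N-atom equations above at resonance, with the interaction time chosen so that g√(N) T = 12π; for N = 1 it recovers the conditional phase ≈ ±π of the 17/12 working point, while for larger N the phase tends to zero (and, at intermediate N, |C_g^(2)| is strongly reduced as well).

import numpy as np
from scipy.linalg import expm

def n_atom_conditional_phase(N, g=1.0, delta=0.0, gT_eff=12 * np.pi):
    """Return (|C_g^(2)(T)|, phi_2 - 2*phi_1) for N V-type atoms, with g*sqrt(N)*T = gT_eff."""
    t = gT_eff / (g * np.sqrt(N))
    # Single-photon sector, basis (C_g, C_e):
    m1 = np.array([[0, g * np.sqrt(N)],
                   [g * np.sqrt(N), delta]], dtype=complex)
    # Two-photon sector, basis (C_g, C_e, C_ee):
    m2 = np.array([[0, 2 * g * np.sqrt(N), 0],
                   [g * np.sqrt(N), delta, g * np.sqrt(N - 1)],
                   [0, 2 * g * np.sqrt(N - 1), 2 * delta]], dtype=complex)
    c1 = expm(-1j * m1 * t) @ np.array([1, 0], dtype=complex)
    c2 = expm(-1j * m2 * t) @ np.array([1, 0, 0], dtype=complex)
    phi = np.angle(c2[0]) - 2 * np.angle(c1[0])
    phi = (phi + np.pi) % (2 * np.pi) - np.pi          # wrap to [-pi, pi)
    return abs(c2[0]), phi

for N in (1, 2, 10, 100):
    mag, phi = n_atom_conditional_phase(N)
    print(f"N={N:3d}  |C_g^(2)(T)| = {mag:.3f}  phi_2 - 2 phi_1 = {phi:+.3f} rad")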
Although the argument above clearly shows that a large number of atoms is undesirable, one may still wonder about what happens for a small number of atoms. We have, therefore, studied the 2-atom case in detail, with the result shown in Figure 4, which shows the gate fidelity as a function of ฮณ for two identical V-type atoms interacting with single photons a and b as a dashed line, compared to the equivalent result for a single atom (solid line).
Each curve in Figure 4 is the result of optimizing, at each point, with respect to both gT and ฮด/g, over the same space of parameters shown in Figure 2, and the density matrix treatment has been used for both (details of the calculation can be found in Appendix A.2). It is apparent that adding even just one atom to the system substantially degrades the CPHASE gate performance, especially as the spontaneous emission losses increase.
ยง THE 5-LEVEL SCHEME WITH TWO CLASSICAL FIELDS
It has long been known that electromagnetically-induced transparency (EIT) can increase the effective optical nonlinearity of an atomic gas while at the same time decreasing its absorption, i.e., making it more transparent <cit.>, and in fact a proposal to use this "giant Kerr effect" for quantum logic was put forth by Lukin and Imamoglu <cit.>. This idea motivated the authors of <cit.> to consider the potential for a CPHASE gate of the 5-level, "M"-configuration scheme illustrated in Figure 5, where the two auxiliary levels |g_a⟩ and |g_b⟩ are coupled by external, classical fields (with Rabi frequencies Ω) to the excited states |e_a⟩ and |e_b⟩, to provide EIT in the |g⟩ - |e_a⟩ and |g⟩ - |e_b⟩ transitions. The purpose of this section is to explore this system fully, in the single-atom regime (it is easy to verify that the argument against multiple atoms presented in the previous section applies to this scheme as well <cit.>).
It may be worth pointing out, at the outset, that it is not immediately obvious how EIT would actually help here. The standard derivation of EIT involves a perturbative treatment of the probe field (in this case, the single-photon fields a and b) <cit.>. This implicitly assumes that the interaction of each probe photon with each atom is relatively weak, and hence both the absorption and the phase shift result from the cumulative effect of many atoms interacting with (essentially) a classical probe field. This is the complete opposite of the situation considered here, where each photon needs to be coupled as strongly as possible to a single atom. Indeed, the results we show below are not really very EIT-like, and we believe it is best to just think of the auxiliary fields and levels as a way to introduce an additional parameter in the system (formally, the Rabi frequency Ω) that makes it possible, to some extent, to satisfy the conditions (<ref>) and (<ref>) somewhat better than the three-level system in the presence of losses.
Figures 6 and 7 illustrate these points. Figure 6 shows the gate fidelity for the 5-level system, as a function of ฮณ, for different values of the auxiliary fields ฮฉ. We find that, for small ฮณ, the 5-level system always performs worse than the 3-level system. For ฮณ T larger than about 0.07 in the figure, however, it is possible to find a value of ฮฉ that brings the 5-level fidelity somewhat above the 3-level curve (which here corresponds to ฮฉ =0). Nevertheless, it appears that as ฮฉ increases past some optimal value, the improvement over the ฮฉ=0 case disappears, or is confined to larger and larger values of ฮณ, where the fidelity is already quite low. Figure 7, which shows the gate fidelity as a function of ฮฉ, for different values of ฮณ, confirms this and also suggests the existence of an optimum value, or range of values, of ฮฉ.
As it turns out, however, Figures 6 and 7 do not tell the whole story. Each point in the graphs has been optimized with respect to T and ฮด in the intervals 0โค gTโค 30 and 0โคฮด/gโค 10, but many of the values shown correspond, in fact, to either gT=30 or ฮด/g = 10, meaning that larger values are possible in principle if either T or ฮด are increased. We find, in fact, that the optimal values, for given ฮฉ and ฮณ, are found by increasing both gT and ฮด/g together, keeping the ratio g^2T/ฮด constant. In practice, of course, we would expect the maximum value of T to be limited by some practical considerations (for instance, if the interaction takes place in a cavity, by the decay time of the field in the cavity, which we have not considered here at all), and similarly ฮด may be limited by the possibility of coupling to nearby atomic levels, so figures 6 and 7 are probably a fair representation of what one might qualitatively expect in an actual experimental setting. Nevertheless, a full study of the asymptotic behavior of the 5-level system, for large T and ฮด, is possible and not exempt of interest, so we will devote the rest of this section to it.
We begin with the equations of motion for the wavefunction amplitudes in the "quasi-pure state" approximation (as mentioned earlier, to account for spontaneous emission exactly one only has to supplement the contribution of these terms to the density matrix by additional terms, which are explicitly indicated in Appendix A.3). When both photons are initially present, we have
ฤ_ea^(2) = - (ฮณ + i ฮด) C_ea^(2) - i g C_g^(2) -iฮฉC_ga^(2)ฤ_g^(2) = -i g C_ea^(2) - i g C_eb^(2)ฤ_eb^(2) = -(ฮณ + i ฮด) C_eb^(2) - i g C_g^(2) -iฮฉC_gb^(2)ฤ_ga^(2) = -i ฮฉC_ea^(2)ฤ_gb^(2) = -i ฮฉC_eb^(2)
and when only one is present (say, the a photon) we have
ฤ_ea^(1) = -(ฮณ + i ฮด) C_ea^(1) - i g C_g^(1) -iฮฉC_ga^(2)ฤ_g^(1) = -i g C_ea^(2)ฤ_ga^(1) = -i ฮฉC_ea^(2)
Both (<ref>) and (<ref>) can be solved with the initial condition C_g(0) =1, with the results, for the ground state amplitude at the time t,
C_g^(1) = ฮฉ^2/g^2+ฮฉ^2 + g^2/g^2+ฮฉ^2 e^-1/2 t(ฮณ+iฮด)ร 1/2 [ e^-ฮผ_1 t/2(1-ฮณ+iฮด/ฮผ_1) + e^ฮผ_1 t/2(1+ฮณ+iฮด/ฮผ_1)],ฮผ_1 โกโ((ฮณ+iฮด)^2-4ฮฉ^2-4g^2)
and
C_g^(2) = ฮฉ^2/2 g^2+ฮฉ^2 + g^2/2 g^2+ฮฉ^2 e^-1/2 t(ฮณ+iฮด)ร [ e^-ฮผ_2 t/2(1-ฮณ+iฮด/ฮผ_2) + e^ฮผ_2 t/2(1+ฮณ+iฮด/ฮผ_2)],ฮผ_2 โกโ((ฮณ+iฮด)^2-4ฮฉ^2-8g^2)
These results immediately show that, if ฮฉ is allowed to become very large, one will simply have C_g^(1) = C_g^(2) = 1, and the desired conditional phase shift will vanish. This makes sense physically: a very large ฮฉ produces a Stark shift that takes the atom out of resonance with the a and b photons, in such a way that the interaction effectively vanishes.
The interesting thing, however, is that it is formally possible to take the photons very far from resonance in another way, by making δ large (so large that, in effect, the probability of a spontaneous emission event becomes negligible), and yet, as long as Ω remains finite, one can still approximately achieve the desired phase shift, albeit for very long times.
The precise result follows in a very straightforward way from Eqs.ย (<ref>) and (<ref>). Note that for ฮดโซฮณ, g, ฮฉ one has
ฮผ_1 = ฮณ + iฮด +2i/ฮด(g^2+ฮฉ^2) + O(1/ฮด^2)ฮผ_2 = ฮณ + iฮด +2i/ฮด(2g^2+ฮฉ^2) + O(1/ฮด^2)
and consequently we have the asymptotic forms
C_g^(1) โฮฉ^2/g^2+ฮฉ^2 + g^2/g^2+ฮฉ^2 e^it (g^2+ฮฉ^2)/ฮดC_g^(2) โฮฉ^2/2g^2+ฮฉ^2 + 2g^2/2g^2+ฮฉ^2 e^it (2g^2+ฮฉ^2)/ฮด
One can see now how the conditions (<ref>) and (<ref>) can be approximately satisfied. To begin with, Ω/g should be sufficiently small for the second term on the right-hand side of Eqs. (<ref>) to dominate over the first; then, to get Eq. (<ref>), it would suffice to have
t/ฮด(2 g^2+ฮฉ^2) -2 t/ฮด(2 g^2+ฮฉ^2) = -t/ฮดฮฉ^2 = -nฯ
with n odd. Note that with small ฮฉ/g this can only be satisfied for very large values of gt; ฮด/g itself is required to be very large in order for the approximation (<ref>) to be valid, and we require, additionally, ฮดโซฮณ,ฮฉ. The value ฮด=10g in the optimized fidelity plots of Figs.ย 6 and 7 is, in fact, sufficiently large for (<ref>) to be approximately valid; the optimum value of ฮฉ seen then in the graphs, about 0.8g, is the best compromise between trying to keep ฮฉ small enough for |C_g| โผ 1 in Eqs.ย (<ref>), and large enough for (<ref>) to be approximately valid, given the constraint gTโค 30.
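The asymptotic argument can be illustrated numerically with the closed forms above; the following sketch (ours, for illustration) picks a lossless, far-detuned example with tΩ^2/δ = π and verifies that the conditional phase is close to ±π while |C_g^(1)| and |C_g^(2)| remain close to 1 (the residual reduction of |C_g^(2)| reflects the finite Ω/g tradeoff just discussed).

import numpy as np

def cg_five_level(g, Omega, gamma, delta, t, n_photons):
    """C_g^(1) (n_photons=1) or C_g^(2) (n_photons=2) from the closed forms above."""
    G2 = n_photons * g**2                     # g^2 or 2 g^2
    x = gamma + 1j * delta
    mu = np.sqrt(x**2 - 4 * Omega**2 - 4 * G2 + 0j)
    bracket = np.exp(-mu * t / 2) * (1 - x / mu) + np.exp(mu * t / 2) * (1 + x / mu)
    if n_photons == 1:
        bracket *= 0.5
    return Omega**2 / (G2 + Omega**2) + g**2 / (G2 + Omega**2) * np.exp(-x * t / 2) * bracket

g, Omega, gamma, delta = 1.0, 0.3, 0.0, 50.0
t = np.pi * delta / Omega**2                  # t*Omega^2/delta = pi (n = 1, odd)
c1 = cg_five_level(g, Omega, gamma, delta, t, 1)
c2 = cg_five_level(g, Omega, gamma, delta, t, 2)
phase = np.angle(c2) - 2 * np.angle(c1)
print(abs(c1), abs(c2), (phase + np.pi) % (2 * np.pi) - np.pi)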
ยง A TWO-LEVEL ATOM SCHEME.
The three- (or five-) level scheme considered so far is suitable for a conditional phase gate when a "single-rail" encoding is used (i.e., the logical 0 and 1 states correspond to single photon states with orthogonal polarizations). However, by making use of a setup such as the one shown in <cit.>, one could apply these ideas to a dual-rail encoding, the idea being that an initial state |ϵ_1,ϵ_2⟩, with ϵ_1,ϵ_2 ∈{0,1}, becomes a cavity field state with ϵ_1+ϵ_2 photons. Then a single two-level atom can be used to produce the desired phase shift between the single-photon and two-photon states.
With a single two-level atom in the cavity, the results for C_g^(1)(t) and C_g^(2)(t) are identical to those given by Eqs.ย (<ref>)โ(<ref>). The expression for the gate fidelity is also identical to Eq.ย (<ref>), except for the absence of the terms involving the excited state amplitudes. This is because, with a dual-rail encoding, the number of physical photons involved in a two-qubit operation is always two, regardless of the initial logical state. Hence, if the atom is left in an excited state at the end of the interaction time, the final field state will necessarily be orthogonal to the ideal one, since it will have one photon less. This means that the gate fidelity will always be slightly smaller than for the single-rail, three-level scheme, although it, too, can in principle be made arbitrarily large for sufficiently large times.
As was the case for the system considered in section III, here also adding more atoms has a detrimental effect, and eventually causes the nonlinearity to vanish. If we write the state of the N-atom, 1-photon system in the form
|ฮจ(t)โฉ^(1) = C_e^(1) |0โฉ|ฯ_eโฉ + C_g^(1) |1โฉ|g_allโฉ +
where |ฯ_eโฉ is defined in a form analogous to |ฯ_aโฉ in Eq.ย (<ref>), the equations of motion for
C_e^(1) and C_g^(1) are identical to Eqs.ย (<ref>). On the other hand, for the N-atom, 2-photon case, the overall state must be written as
|ฮจ(t)โฉ^(2) = C_ee^(2) |0โฉ|ฯ_eeโฉ +C_e^(2) |1โฉ|ฯ_eโฉ+ C_g^(2) |2โฉ|g_allโฉ +
where now instead of Eq.ย (<ref>) we must define
|ฯ_eeโฉ = 1/โ(2(N-1))( โ_i=1^N |eโฉ_iโจg|) |ฯ_eโฉ
and the equations of motion read
ฤ_ee^(2) = -2iฮด C_ee^(2) -igโ(2(N-1)) C_e^(2)ฤ_e^(2) = -iฮด C_e^(2) -igโ(2(N-1)) C_ee^(2) - igโ(2N) C_g^(2)ฤ_g^(2) = -igโ(2N) C_e^(2)
Now it is easy to see that, in the limit Nโซ 1, the solution to the system (<ref>) can be written in terms of the solution to the system (<ref>) as
C_g^(2) = C_g^(1)^2, C_e^(2) = โ(2)C_g^(1)C_e^(1), C_ee^(2) = C_e^(1)^2
and therefore, as before, ฯ_2 = 2 ฯ_1.
As mentioned in the introduction, the authors of <cit.> did consider a scheme in which the conditional phase would result from the interaction with a two-level atom, rather than a second- or third-order nonlinearity. Our scheme here may be regarded as a variation on theirs, where we assume the atom-field interaction is turned on and off abruptly, as opposed to gradually.
ยง CONCLUSIONS
The goal of this research was to considerโand, where necessary, clarifyโthe potential for quantum logic of systems of the Jaynes-Cummings type, i.e., single or multiple atoms interacting with single temporal modes, containing one or two photons, over a finite time. Our two main results are: that, in principle, gate fidelities arbitrarily close to 1 can be achieved if losses (including cavity losses) are neglected; and that using more than one atom is suboptimal, and a very large number of atoms actually causes the useful phase shift to go to zero.
Since any attempt to realize this system in practice would require making use of optical nonlinearities to load and unload the cavities, it is reasonable to ask whether anything is gained by using an atom for the conditional phase shift, instead of the nonlinear materials themselves (as suggested in <cit.>). An immediate answer is that the scheme in <cit.> requires, for the phase shift, very large optical nonlinearities at the single-photon level, whereas the loading and unloading of the cavity can in principle be achieved with reasonable nonlinearities, as long as the auxiliary field is strong enough (in fact, coherent frequency conversion of single-photon states has already been demonstrated). Moreover, it is not immediately apparent why the large single-photon nonlinearities assumed in <cit.> would not suffer from the phase noise problem originally pointed out in <cit.>, at least for the ฯ^(3) case.
The situation is different with regard to the two-level atom scheme presented in Section VI.C of <cit.>. This can be regarded as a variation of our scheme involving a gradual turning on and off of the interaction, and suitable for dual-rail logic; or, conversely, our V-level scheme can be regarded as a variation on theirs involving a sudden turning on and off of the interaction, and suitable for single-rail logic. A fair comparison of these approaches, particularly as regards the turning on and off of the interaction, would require a detailed analysis based on specific parameters. Such an analysis, however, is beyond the scope of the present paper.
ยง DENSITY-MATRIX GATE FIDELITY CALCULATIONS
ยง.ยง Single V-type atom
The density matrix equation of motion for this system is
ฯฬ= -i/ฤง[H,ฯ] -ฮณโ_e=e_a,e_b(ฯ|eโฉโจe| + |eโฉโจe|ฯ - 2 |gโฉโจe|ฯ|eโฉโจg|)
In this expression, the first two terms under the summation sign give the decay of the excited states, and their action on those states (and/or on the corresponding density matrix elements) can be completely accounted for by a wavefunction treatment with the non-Hermitian Hamiltonian resulting from the substitution ฮดโฮด-iฮณ in Eq.ย (<ref>) (the โpure-state approximationโ mentioned in Section II.B). The last term in (<ref>), on the other hand, is a โsourceโ term that repopulates the ground state as a result of the decay of an excited state. It only acts on, and only produces, diagonal components (in the atomic basis) of the density operator. It cannot be handled by pure-state methods, but it also does not affect the โpure state evolutionโ just discussed, because, since spontaneous emission is irreversible, the states that it produces cannot evolve back into the ones from which they originated, so they do not โfeed backโ into the equations of motion for the corresponding amplitudes.
The evaluation of the gate fidelity (<ref>) requires, in principle, the calculation of the density matrix that evolves from the initial state ฯ(0) = |ฮจ(0)โฉโจฮจ(0)|, with
|ฮจ(0)โฉ = (ฮฑ_00|00โฉ + ฮฑ_01|01โฉ + ฮฑ_10|10โฉ + ฮฑ_11|11โฉ)|gโฉ
From the above discussion it follows that this can in part be accomplished by pure-state methods, by substituting in (<ref>) the wavefunctions that evolve out of the initial conditions |01โฉ|gโฉ,|11โฉ|gโฉ,โฆ, and then adding terms that take into account the action of the source term. A substantial simplification is that, in doing so, one does not have to worry about the evolution of terms in |ฮจ(0)โฉโจฮจ(0)| that involve โmixedโ initial conditions, that is, different initial numbers of photons. The reason is that, as indicated above, the states produced by the source term cannot evolve back to their original form, since the system has lost one photon irreversibly; therefore, if they have any projection at all on the state |ฮฆ_idealโฉ (Eq.ย (<ref>)), it will be on a state corresponding to a different initial condition yet. Hence, a โmixedโ term that starts out proportional to a product of different ฮฑ_ij will end up multiplied by two other different coefficients, and the result will average to zero. For example, pure state evolution applied to the initial term ฮฑ_10ฮฑ_11^โ|10โฉโจ11|โ|gโฉโจg| will yield a term proportional to ฮฑ_10ฮฑ_11^โ|00โฉโจ01|โ|e_aโฉโจe_a|, and spontaneous decay, through the action of the โsource termโ in (<ref>), will then yield
ฮฑ_10ฮฑ_11^โ|00โฉโจ01|โ|e_aโฉโจe_a|โฮฑ_10ฮฑ_11^โ|00โฉโจ01|โ|gโฉโจg|
and the expectation value, in the state |ฮฆ_idealโฉ, of the last term in (<ref>) will be proportional to ฮฑ_00^โฮฑ_10ฮฑ_11^โฮฑ_01, which averages to zero, since the phases of the ฮฑ_ij are all random <cit.>.
This means that we only need to concern ourselves with the decay products of diagonal terms that arise from the one- and two-photons initial conditions separately. For the single-photon case, this means we only need to consider the chain
|ฮฑ_10|^2|00โฉโจ00|โ|e_aโฉโจe_a|โ |ฮฑ_10|^2|00โฉโจ00|โ|gโฉโจg|
(and equivalently with |ฮฑ_01|^2 and e_b). The rate of growth of the corresponding density matrix element is shown in Eq.ย (<ref>), and its contribution to the fidelity, which involves the average of |ฮฑ_10|^2 |ฮฑ_00|^2, is seen in Eq.ย (<ref>), where it has already been combined with that of the symmetric term arising from the decay of |e_bโฉ.
The two-photon evolution, starting from the term |ฮฑ_11|^2|11โฉโจ11|โ|gโฉโจg|, involves two decay chains with two decay events each. On the โaโ side, |01โฉ|e_aโฉ decays to |01โฉ|gโฉ, and from here one can have coherent evolution as in the single-photon case, producing the state |00โฉ|e_bโฉ, which eventually decays to the final state |00โฉ|gโฉ. The equations of motion for the corresponding density matrix elements can, except for the source terms, be written down directly from the pure-state, single-photon case; we do, however, write these elements with a superscript (2) to remind ourselves of their origin in the |ฮฑ_11|^2 term. These equations are, on the โaโ side,
ฯฬ^(2)_01g,01g = 2ฮณฯ^(2)_01e_a,01e_a-ig(ฯ^(2)_00e_b,01g-ฯ^(2)_01g,00e_b)
ฯฬ^(2)_00e_b,01g = -(ฮณ+iฮด)ฯ_00e_b,01g -ig(ฯ^(2)_01g,01g-ฯ^(2)_00e_b,00e_b)
ฯฬ^(2)_01g,00e_b = -(ฮณ-iฮด)ฯ_01g,00e_b +ig(ฯ^(2)_01g,01g-ฯ^(2)_00e_b,00e_b)
ฯฬ^(2)_00e_b,00e_b = -2ฮณฯ^(2)_00e_b,00e_b+ig(ฯ^(2)_00e_b,01g-ฯ^(2)_01g,00e_b)
and a similar one on the โbโ side, switching e_a and e_b and the corresponding photonic subscripts, both contributing to the final state through
ฯฬ^(2)_00g,00g = 2ฮณฯ^(2)_00e_b,00e_b + 2ฮณฯ^(2)_00e_a,00e_a
The contribution of the relevant terms to the gate fidelity, after making use of the problem's symmetry, can be seen in Eq.ย (<ref>) of the main text.
For completeness, we show below also how we have calculated the nonzero averages of the products of coefficients ฮฑ_ij. Let x=|ฮฑ_00|^2, y=|ฮฑ_01|^2, z=|ฮฑ_10|^2. Then, for the state (<ref>) to be normalized, we must have |ฮฑ_11|^2=1-x-y-z. As this quantity has to be between 0 and 1, we find zโค 1-x-y, and again because zโฅ 0 we find yโค 1-x. So, to calculate our averages, we can use a probability distribution function which is constant and nonzero over the volume defined by {0โค xโค 1 && 0โค y โค 1-x && 0โค zโค 1-x-y } (note that in spite of the seemingly asymmetric way we have defined this volume, it is in fact symmetric in the three coordinates, a triangular pyramid). We normalize this by requiring that the average of 1 be 1, that is,
1 =1/ Nโซ_0^1 dxโซ_0^1-xdy โซ_0^1-x-ydz = 1/6 N
So, with N = 1/6, we can calculate the averages we want, as
|ฮฑ_ij|^4 = x^2 = 6โซ_0^1 x^2 dxโซ_0^1-xdy โซ_0^1-x-ydz = 1/10
and
|ฮฑ_ij|^2|ฮฑ_kl|^2 = xy = 6โซ_0^1 x dxโซ_0^1-xy dy โซ_0^1-x-ydz = 1/20
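These averages, and the vanishing of the โmixedโ products invoked earlier, can also be checked with a quick Monte Carlo computation. The short Python sketch below is only illustrative; the sample size and random seed are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((2_000_000, 3))
pts = pts[pts.sum(axis=1) <= 1.0]        # uniform over the pyramid 0 <= x + y + z <= 1
x, y, z = pts.T
w = 1.0 - x - y - z                      # |alpha_11|^2, fixed by normalization

print(np.mean(x ** 2))                   # <|alpha_ij|^4>, expected 1/10
print(np.mean(x * y))                    # <|alpha_ij|^2 |alpha_kl|^2>, expected 1/20

# with random phases attached, a "mixed" product averages to zero
phases = rng.uniform(0.0, 2.0 * np.pi, size=(len(x), 4))
a00, a01, a10, a11 = (np.sqrt(m) * np.exp(1j * phases[:, k])
                      for k, m in enumerate((x, y, z, w)))
print(np.abs(np.mean(np.conj(a00) * a10 * np.conj(a11) * a01)))   # expected ~0

The first two outputs converge to 0.1 and 0.05, in agreement with the integrals above, while the third is consistent with zero.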
ยง.ยง Two V-type atoms
For the 2-atom case, it is best to use the basis introduced in Section III, where the atomic state with one excitation of the โaโ type is the symmetric combination
|ฯ_aโฉ = 1/โ(2)(|e_a,gโฉ + |g,e_aโฉ)
and similarly for |ฯ_bโฉ, and the doubly excited state is also the symmetric combination
|ฯ_abโฉ = 1/โ(2)(|e_a,e_bโฉ + |e_b,e_aโฉ)
In Eqs. (<ref>) and (<ref>), C_e denotes the probability amplitude to find the system in either one of |ฯ_aโฉ or |ฯ_bโฉ, and C_ee that of finding it in |ฯ_abโฉ.
The pure-state part of the gate fidelity can then be computed by starting with the state
|ฮจ(0)โฉ = (ฮฑ_00|00โฉ + ฮฑ_01|01โฉ + ฮฑ_10|10โฉ + ฮฑ_11|11โฉ)|ggโฉ
and building the time-evolved state |ฮจ(t)โฉ through the substitutions
|01โฉ|ggโฉ โ C_g^(1)(t) |01โฉ|ggโฉ + C_e^(1)(t) |00โฉ|ฯ_bโฉ
|10โฉ|ggโฉ โ C_g^(1)(t) |10โฉ|ggโฉ + C_e^(1)(t) |00โฉ|ฯ_aโฉ
|11โฉ|ggโฉ โ C_g^(2)(t) |11โฉ|ggโฉ + C_e^(2)(t) |01โฉ|ฯ_aโฉ + C_e^(2)(t) |10โฉ|ฯ_bโฉ + C_ee^(2)(t) |00โฉ|eeโฉ
where the C^(1) and C^(2) coefficients are the solutions to Eqs.ย (<ref>) and (<ref>), respectively, with the replacement ฮดโฮด -iฮณ, and starting from the ground state. The trace of |ฮจ(t)โฉโจฮจ(t)| over the atoms produces a ฯ_f(t) that is formally identical to the single-atom result, except for the extra term |ฮฑ_11|^2 |C_ee^(2)(t)|^2 |00โฉโจ00|, so taking the expectation value โจฮฆ_ideal|ฯ_f |ฮฆ_idealโฉ and averaging over the ฮฑ_ij produces an expression for the gate fidelity that is also formally identical to Eq.ย (<ref>), except for an additional term
1/20|C_ee^(2)(t)|^2
To this one must now add the effect of the โsource termsโ in the density-matrix equations of motion, i.e., the terms that repopulate the ground state after a spontaneous emission event. For the single-photon case, these are trivial: the states |ฯ_aโฉ and |ฯ_bโฉ decay at the same rate (2ฮณ) as |e_aโฉ and |e_bโฉ, and populate the ground state in the same way as in the single-atom case, so, just as in Eq.ย (<ref>), we get a contribution to F^2 given by
1/10ฯ^(1)_00g,00g = ฮณ/5โซ_0^t|C_e^(1)(t^')|^2 dt^'
(here and below we shall use the single subscript g to refer to the double ground state |ggโฉ).
For the two-photon case, the situation is slightly more complicated. It is simplest to assume that the two atoms interact with the same reservoir, in which case the state |00โฉ|ฯ_abโฉ decays directly to either |00โฉ|ฯ_aโฉ or |00โฉ|ฯ_bโฉ, each with a rate (probability/time) equal to 2ฮณ <cit.>. In addition to this, a singly-excited state such as |01โฉ|ฯ_aโฉ can decay into |01โฉ|ggโฉ, which then can evolve coherently into |00โฉ|ฯ_bโฉ. We thus have, in the place of equations (<ref>), the following:
ฯฬ^(2)_01g,01g = 2ฮณ |C_e^(2)|^2-igโ(2)(ฯ^(2)_00ฯ_b,01g-ฯ^(2)_01g,00ฯ_b)
ฯฬ^(2)_00ฯ_b,01g = -(ฮณ+iฮด)ฯ_00ฯ_b,01g -igโ(2)(ฯ^(2)_01g,01g-ฯ^(2)_00ฯ_b,00ฯ_b)
ฯฬ^(2)_01g,00ฯ_b = -(ฮณ-iฮด)ฯ_01g,00ฯ_b +igโ(2)(ฯ^(2)_01g,01g-ฯ^(2)_00ฯ_b,00ฯ_b)
ฯฬ^(2)_00ฯ_b,00ฯ_b = -2ฮณฯ^(2)_00ฯ_b,00ฯ_b +igโ(2)(ฯ^(2)_00ฯ_b,01g-ฯ^(2)_01g,00ฯ_b) + 2ฮณ |C_ee^(2)|^2
and a similar set involving |ฯ_aโฉ, leading finally to
ฯฬ^(2)_00g,00g = 2ฮณฯ^(2)_00ฯ_b,00ฯ_b + 2ฮณฯ^(2)_00ฯ_a,00ฯ_a
The final expression for the gate fidelity will then be
F^2 = [as in Eq.ย (13)] + 1/20|C_ee^(2)(t)|^2+1/10ฯ^(1)_00g,00g + 1/20ฯ^(2)_00g,00g + 1/10ฯ^(2)_00e,00e+1/10ฯ^(2)_10g,10g
where the subscript โeโ in the next to last term could stand for either ฯ_a or ฯ_b equivalently.
ยง.ยง Single M-type atom
By following the same approach it is easy to see that in the M level scheme described by Eqs.ย (<ref>) and (<ref>), the following terms need to be added to the gate fidelity (<ref>), in the pure-state approximation,
1/20(|C^(1)_g_a|^2 +|C^(2)_g_a|^2) + same with g_a โ g_b
whereas the full density matrix treatment introduces five additional terms (after making use of the aโ b symmetry), that we may write as
1/10(ฯ_00g,00g^(1) + 1/2ฯ_00g,00g^(2) + ฯ_10g,10g^(2) + ฯ_00e_a,00e_a^(2) + ฯ_00g_a,00g_a^(2))
The first four of these are already present in the V-level result (<ref>), and the corresponding equations of motion are similar, with only a couple of differences. One is, of course, the presence of terms proportional to ฮฉ, which couple the excited states |e_aโฉ and |e_bโฉ to the new ground states |g_aโฉ and |g_bโฉ, respectively. The other difference is in the repopulation rates: we have chosen to keep using the symbol ฮณ for the amplitude decay rate of the excited states, but now each of them has two different states to decay to, so we must assume that each of the ground states is only populated at a rate ฮณ/2 in amplitude, or ฮณ in intensity, from any one excited state. With this, and suppressing, for brevity, the equations that follow from complex conjugation, we find
ฯฬ^(1)_00g,00g = ฮณ |C^(1)_e|^2
ฯฬ^(2)_00g,00g = ฮณฯ^(2)_00e_a,00e_a + ฮณฯ^(2)_00e_b,00e_b
(compare to Eqs.ย (<ref>) and (<ref>), respectively), and
ฯฬ^(2)_01g,01g = ฮณฯ^(2)_01e_a,01e_a-ig(ฯ^(2)_00e_b,01g-ฯ^(2)_01g,00e_b)
ฯฬ^(2)_01g,00e_b = -(ฮณ-iฮด)ฯ_01g,00e_b +ig(ฯ^(2)_01g,01g-ฯ^(2)_00e_b,00e_b) + iฮฉฯ^(2)_01g,00g_b
ฯฬ^(2)_00e_b,00e_b = -2ฮณฯ^(2)_00e_b,00e_b+ig(ฯ^(2)_00e_b,01g-ฯ^(2)_01g,00e_b) +iฮฉ(ฯ^(2)_00e_b,00g_b-ฯ^(2)_00g_b,00e_b)
ฯฬ^(2)_00e_b,00g_b = -(ฮณ +iฮด) ฯ^(2)_00e_b,00g_b-ig ฯ^(2)_01g,00g_b-iฮฉ(ฯ^(2)_00g_b,00g_b-ฯ^(2)_00e_b,00e_b)
ฯฬ^(2)_00g_b,00g_b = ฮณฯ^(2)_00e_b,00e_b-iฮฉ(ฯ^(2)_00e_b,00g_b-ฯ^(2)_00g_b,00e_b)
ฯฬ^(2)_01g,00g_b = -ig ฯ^(2)_00e_b,00g_b+iฮฉฯ^(2)_01g,00e_b
yamamoto I. L. Chuang and Y. Yamamoto, โSimple quantum computer,โ Phys. Rev. A 52, 3489 (1995).
shapiro J. H. Shapiro, โSingle-photon Kerr nonlinearities do not help quantum computation,โ Phys. Rev. A 73, 062305 (2006).
shapiro2 J. H. Shapiro and M. Razavi, โContinuous-time cross-phase modulation and quantum com- putation,โ New J. Phys. 9, 16 (2007).
duan L.-M. Duan and H. J. Kimble, โScalable Photonic Quantum Computation through Cavity-Assisted Interactions,โ Phys. Rev. Lett. 92, 127902 (2004).
koshino K. Koshino, S. Ishizaka, and Y. Nakamura, โDeterministic photon-photon โ(SWAP) gate using a ฮ system,โ Phys. Rev. A 82, 010301(R) (2010).
rempe B. Hacker, S. Welte, G. Rempe, and S. Ritter, โA photonโphoton quantum gate based on a single atom in an optical resonator,โ Nature 536, 193-196 (2016).
chuang C. Chudzicki, I. L. Chuang and J. H. Shapiro, โDeterministic and cascadable conditional phase gate for photonic qubits,โ Phys. Rev A 87, 042325 (2013).
arkan A. Hassan and J. Gea-Banacloche, โInput-output wavepacket description of two photons interacting with a V-type three-level atom in an optical cavity,โ AVS Quantum Sci. 5, 021401 (2023).
kerr J. Gea-Banacloche, โImpossibility of large phase shifts via the giant Kerr effect with single-photon wave packets,โ Phys. Rev. A 81 043823 (2010).
knight K. Xia, M. Johnsson, P. L. Knight, and J. Twamley, โCavity-Free Scheme for Nondestructive Detection of a Single Optical Photon,โ Phys. Rev. Lett. 116, 023601 (2016).
bala B. Viswanathan and J. Gea-Banacloche, โAnalytical results for a conditional phase shift between single-photon pulses in a nonlocal nonlinear medium,โ Phys. Rev. A 97, 032314 (2018).
brod1 D. J. Brod, J. Combes, and J. Gea-Banacloche, โTwo photons co- and counterpropagating through N cross-Kerr sites,โ Phys. Rev. A 94, 023833 (2016).
brod2 D. J. Brod and J. Combes, โPassive CPHASE Gate via Cross-Kerr Nonlinearities,โ Phys. Rev. Lett. 117 (2016).
konyk W. Konyk and J. Gea-Banacloche,โPassive, deterministic photonic conditional-phase gate via two-level systems,โ Phys. Rev. A, 99, 010301 (2019).
sorensen B. Schrinski, M. Lamaison, and A. S. Sรธrensen, โPassive Quantum Phase Gate for Photons Based on Three Level Emitters,โ Phys. Rev. Lett. 129 130502 (2022).
raymer D. V. Reddy and M. G. Raymer, โPhotonic temporal-mode multiplexing by quantum frequency conversion in a dichroic-finesse cavity,โ Optics Express 26, 28091 (2018).
heuck1 M. Heuck, K. Jacobs, and D. R. Englund, โControlled-Phase Gate Using Dynamically Coupled Cavities and Optical Nonlinearities,โ Phys. Rev. Lett. 124, 160501 (2020).
heuck2 M. Heuck, K. Jacobs, and D. R. Englund, โPhoton-photon interactions in dynamically coupled cavities,โ Phys. Rev. A 101, 042322 (2020).
ottaviani C. Ottaviani, S. Rebiฤ, D. Vitali and P. Tombesi, โQuantum phase-gate operation based on nonlinear optics: full quantum analysis,โ Phys. Rev. A 73, 010301 (R) (2006).
rebic S. Rebiฤ, C. Ottaviani, G. Di Giuseppe, D. Vitali and P. Tombesi, โAssessment of a quantum phase-gate operation based on nonlinear optics,โ Phys. Rev. A 73, 032301 (2006).
nysteen A. Nysteen, D. P. S. McCutcheon, M. Heuck, J. Mรธrk, and D. R. Englund, Phys. Rev. A 95, 062304 (2017).
dirichlet Known as โDirichlet's approximation theorem;โ see https://planetmath.org/DirichletsApproximationTheorem and W. M. Schmidt, Diophantine approximation. Lecture Notes in Mathematics. Vol. 785. Springer. doi:10.1007/978-3-540-38645-2.
schmidt H. Schmidt and A. Imamoglu, โGiant Kerr nonlinearities obtained by electromagnetically induced transparency,โ Opt. Lett. 21, 1936 (1996).
lukin M. Lukin and A. Imamoglu, โNonlinear Optics and Quantum Entanglement of Ultraslow Single Photons,โ Phys. Rev. Lett. 84, 1419 (2000).
rebic2 The authors of <cit.> do not appear to have realized this: their model leads to N-atom equations that are identical to the single-atom ones except for the replacement of the single-atom coupling g by gโ(N). This is because their treatment ignores altogether the equivalent of the term C_ee^(2) in Eq.ย (<ref>), that is, the (overwhelming, for large N) probability that each photon may, in fact, be absorbed by a different atom.
gea J. Gea-Banacloche, Yong-qing Li, Shao-zheng Jin, and Min Xiao, โElectromagnetically-induced transparency in ladder-type, inhomogeneously-broadened media: theory and experiment,โ Phys. Rev. A 51, 576โ584 (1995).
hau S. E. Harris and L. V. Hau, โNonlinear Optics at Low Light Levels,โ Phys. Rev. Lett. 82, (1999).
coefs If we just label the coefficients by the total number of initial photons they correspond to, we would start with ฮฑ_nฮฑ_m^โ and end with ฮฑ_n-1^โฮฑ_nฮฑ_m^โฮฑ_m-1. The only way this will not average to zero is if n=m in the first place.
note2 The alternative assumption is that the state |ฯ_abโฉ decays to either |e_a,gโฉ, |e_b,gโฉ, |g,e_aโฉ, or |g,e_bโฉ, each at a rate ฮณ. This does not change the final result, it just makes the calculation more cumbersome.
|
http://arxiv.org/abs/2306.02887v2
|
20230605135636
|
Gen-IR @ SIGIR 2023: The First Workshop on Generative Information Retrieval
|
[
"Gabriel Bรฉnรฉdict",
"Ruqing Zhang",
"Donald Metzler"
] |
cs.IR
|
[
"cs.IR",
"cs.CL"
] |
[email protected]
0000-0002-3596-0285
University of Amsterdam and RTL NL
The Netherlands
[email protected]
0000-0003-4294-2541
ICT, Chinese Academy of Sciences
China
[email protected]
Google Research
USA
<ccs2012>
<concept>
<concept_id>10002951.10003317</concept_id>
<concept_desc>Information systemsย Information retrieval</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Information systemsย Information retrieval
Generative information retrieval (IR) has experienced substantial growth across multiple research communities (e.g., information retrieval, computer vision, natural language processing, and machine learning), and has been highly visible in the popular press. Theoretical, empirical, and actual user-facing products have been released that retrieve documents (via generation) or directly generate answers given an input request. We would like to investigate whether end-to-end generative models are just another trend or, as some claim, a paradigm change for IR. This necessitates new metrics, theoretical grounding, evaluation methods, task definitions, models, user interfaces, etc. The goal of this workshop[https://coda.io/@sigir/gen-ir] is to focus on previously explored Generative IR techniques like document retrieval and direct Grounded Answer Generation, while also offering a venue for the discussion and exploration of how Generative IR can be applied to new domains like recommendation systems, summarization, etc. The format of the workshop is interactive, including roundtable and keynote sessions and tends to avoid the one-sided dialogue of a mini-conference.
Gen-IR @ SIGIR 2023: The First Workshop on Generative Information Retrieval
Donald Metzler
July 31, 2023
=====================================================================
ยง TITLE
Gen-IR @ SIGIR 2023: The First Workshop on Generative Information Retrieval
ยง MOTIVATION
Last year saw the rise of generative IR on two fronts. We will refer to them as:
* ย Generative Document Retrieval (GDR): via a generative process, retrieve a ranked list of existing documents (e.g. Wikipedia or news articles) that match a query and
* ย Grounded Answer Generation (GAG): retrieve a human readable generated answer that matches a query; the answer can link to or refer to a document.
On the GDR end of the spectrum, an end-to-end model-based retrieval approach was first proposed in a position paper <cit.>: directly predict identifiers of candidate documents, instead of indexing all documents (a.k.a. index-retrieve-then-rank). The position paper builds on generative entity linking <cit.>, later extended for long sequences <cit.>. The generative model is expected to embed all relevant information that is in the documents. Soon after, Differentiable Search Indexes (DSI), the first model generating indexes of Wikipedia articles, were released <cit.>. The above-mentioned position paper <cit.> goes beyond GDR, towards GAG and full-fledged end-to-end retrieval models that generate answers.
On the GAG end of the spectrumย <cit.>, recent Large Language Models (LLMs) have been released to the public that are essentially (conversational) IR models. Some are conversational with aspects of reinforcement learning (ChatGPT[<https://openai.com/blog/chatgpt/>] or Claude[<https://www.anthropic.com/constitutional.pdf>]),
some cite their sources (Phind[<https://phind.com/about>] or Perplexity[<https://www.perplexity.ai/>]), some are focused on science (Galactica[<https://galactica.org/>]), some can do all of the above and more (YOU[<https://you.com/>]), and others have yet to be released (Sparrow[<https://www.deepmind.com/blog/building-safer-dialogue-agents>]).
Generative IR as an end-to-end model has clear benefits over the index-retrieve-then-rank paradigm.
* It is simpler and more flexible.
* The training pipeline is compressed.
* There is no need for an index of documents that is tedious to query or compute similarity with.
But Generative IR also comes with its challenges. Namely,
* it has yet to be demonstrated that retrieval performance is improved on big datasets (such as the full MS-MARCO datasetย <cit.>),
* generative models can hallucinate (i.e., generate false information). This is more obviously true for LLMs that generate answers (GAG) than for retrieval models that generate doc-ids (GDR).
* The infinite index paradigmย <cit.>: if LLMs can generate an infinite amount of answers to a given query, then classic recall-based IR evaluation metrics like NDCG cannot rely on a finite amount of true positives.
A workshop on Generative IR will question whether IR is truly facing a paradigm change at the theoretical levelย <cit.>. This event will also be a way to reflect on Generative IR's benefits and challenges, as retrieval-like LLMs (GAG) get released to the general public. Finally, we will encourage submissions and discussions on further Generative IR topics and models, where existing literature is scarce, such as recommender systems, Learning to Rank, diffusion models, etc. We compiled a list of related literature[<https://github.com/gabriben/awesome-generative-information-retrieval>].
ยง THEME AND PURPOSE OF THE WORKSHOP
Gen-IR 2023 will be a forum for discussion about the challenges in applying (pre-trained) generation models to information retrieval, as well as the theory behind the models and applications.
The aim of this workshop is multi-fold:
* discussing the main challenges in designing and applying generative retrieval models in practice,
* establishing a bridge for communication between academic researchers and industrial researchers around Generative IR,
* providing an opportunity for researchers to present new directions and early insights, and
* creating an agenda for Generative IR according to the 4 pillars below (Model Architecture, Training, Evaluation, Applications). This agenda will then ideally be revised periodically at future editions of the workshop.
Our call for papers and the theme of the panel / roundtable discussions will revolve around these 4 pillars. For now, Generative IR revolves mostly around Generative Document Retrieval (GDR) and Grounded Answer Generation (GAG). We leave space for further tasks in the 4th pillar.
ยง.ยง Model Architecture
Despite the preliminary studies on pre-trained language models (PTMs) for GDR, most research in this direction focuses on straightforwardly applying existing PTMs that are specifically designed for NLP into IR applications such as T5 <cit.> and BART <cit.>.
These encoder-decoder architectures do not consider the IR cues that might benefit the downstream IR tasks, such as GAG. These cues include information about ranking, entity disambiguation, and the causal relationships behind ranking tasks.
Another solution could be to generate documents via other types of models that can provide a range of predictions, like diffusion modelsย <cit.>. Diffusion models have already been tested for language generation and categorical data in generalย <cit.> and are thus candidates for both GDR and GAG tasks.
ยง.ยง Training
Despite the strong experimental performance of GDR models, the potential of generative models for general search problems is limited by the training strategies that are currently employed.
* Learning To Rank objective. The traditional index-retrieve-then-rank paradigm implies a Learning To Rank objective at the end of the pipeline. This objective is commonly expressed as point-wise, pair-wise, or list-wise. Following the new model-based retrieval paradigm, the objective is global over the whole corpus and usually defined as a standard seq2seq objective, i.e., maximizing the output doc-id likelihood with teacher forcing conditioned on the query (a toy sketch of this objective is given after this list). There are many interesting questions to help understand whether such optimization is optimal, how it connects with existing Learning to Rank paradigms, and so on.
* Generalization Ability. So far most studies only demonstrate the effectiveness of their approaches on retrieval datasets where a query has only one relevant document.
In the future, we should extend the generalization ability of GDR to different search tasks, including a query with a relevant document, with multiple relevant documents at one relevance grade, and with multiple relevant documents at different relevance grades.
One option to predict multiple documents via model based retrieval is to use contrastive learning between the document and query representations.
* Incremental Learning. For GDR models, there remain open questions about the practical applicability of such models to dynamic corpora. In dynamic and open IR systems, documents are incrementally added to or removed from the indexed corpus. It is valuable to explore continuously updated learning objectives over new or removed documents (e.g. <cit.>).
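As a concrete point of reference for the seq2seq doc-id objective mentioned in the first item of this list, the toy Python/PyTorch sketch below trains a small encoder-decoder to emit doc-id token sequences with teacher forcing, conditioned on the query. It is not the DSI implementation; the architecture, vocabulary sizes, dimensions, and data are all made up for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerativeRetriever(nn.Module):
    def __init__(self, query_vocab=1000, docid_vocab=128, dim=64):
        super().__init__()
        self.q_emb = nn.Embedding(query_vocab, dim)
        self.d_emb = nn.Embedding(docid_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, docid_vocab)

    def forward(self, query_tokens, docid_tokens):
        _, h = self.encoder(self.q_emb(query_tokens))   # summarize the query
        dec_in = self.d_emb(docid_tokens[:, :-1])       # teacher forcing: gold doc-id tokens shifted right
        dec_out, _ = self.decoder(dec_in, h)
        logits = self.out(dec_out)                      # (batch, len-1, docid_vocab)
        # standard seq2seq objective: maximize the likelihood of the next doc-id token
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               docid_tokens[:, 1:].reshape(-1))

model = ToyGenerativeRetriever()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
queries = torch.randint(0, 1000, (8, 12))   # a batch of 8 queries, 12 tokens each
docids = torch.randint(0, 128, (8, 6))      # their 6-token doc-id strings
loss = model(queries, docids)
loss.backward()
opt.step()

The open question raised above is precisely whether this token-level likelihood, optimized globally over the corpus, is the right surrogate for point-, pair-, or list-wise ranking quality.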
ยง.ยง Evaluation
We consider several topics for the evaluation of Generative Document Retrieval and Grounded Answer Generation:
* We are not aware of an evaluation on a big dataset for either GDR or GAG (such as the full MS-MARCO datasetย <cit.>).
* Evaluation metrics need to be designed taking into account the specifics of the generative paradigm. These metrics should ideally both suit traditional IR and Generative IR.
* Human evaluation of Generative IR is still at its infancy. Note that ChatGPT leverages Reinforcement Learning with Human Feedback (RLHF)ย <cit.>, while Claude uses RL from AI Feedback (RLAIF)ย <cit.>.
* Interpretability and causality are still hard to determine. In the context of GAG, this implies citing its sources, a.k.a. attributionย <cit.> (via for example a citation tokenย <cit.>). In other words bridging the gap between GDR and GAG.
* Robustness to adversarial attacks (how easy is it to create fake facts or fool the Grounded Answer Generation model) and to distribution shifts (does transfer learning across datasets work?).
* Efficiency of models. GDR requires considerably less compute power than GAG. Is there a way to bring computational costs down for GAG or to provide more information with the same amount of compute with GDR (e.g. a ranking of documents instead of just one document or a summary of documents)?
* GAGs tend to be very assertive about their claims. Uncertainty estimates would be particularly desirable for GAGs and especially for the ones which don't cite their sources like ChatGPT.
* GAGs can appear to have a mind of their own. Some new conceptual metrics and learning constraints have been proposed, such as truthfulness, harmlessness, honesty, and helpfulness <cit.>.
ยง.ยง Applications
At inference time, both GDR and GAG are sensitive to prompting strategies. Given particular prompts, it has been shown that one can provoke ChatGPT into hallucinating answers. As a solution, could we use a generation model to unify GDR and GAG, so as to provide document references to source material making it much easier to highlight the authoritativeness / accuracy of the answer?
Furthermore, there are several applications to Generative IR that have not yet been subject to much scrutiny beyond GDR and GAG. We can think of summarization, Knowledge-Intensive Language Tasks (KILT) (e.g.ย <cit.>), recommender systems (e.g.ย <cit.>) and learning to rank (e.g.ย <cit.>).
ยง FORMAT
Gen-IR will be an interactive full-day hybrid workshop that avoids the one-sided dialogue of a mini-conference.
* Invited panel (industrial and academic) [hybrid]. Candidates from different institutions and companies accepted our invitation: Neeva, Google, Meta AI, Tsinghua University, Chinese Academy of Science, Sapienza University of Rome, Samaya AI, KAIST, University of Waterloo, Huggingface, Stanford University.
* Contributed paper presentations as posters [onsite] and video demos [online].
* An interactive session to share lessons learned [hybrid].
* Breakout sessions on issues that emerge from the contributed papers and demos (to be determined after the submission deadline but prior to the workshop) [onsite].
ยง.ยง Workshop schedule
ยง.ยง.ยง Morning
Time Activity
08.30โ08.45 Opening
08:45โ09:15 Panel Discussions (academic)
09:15โ10:00 Poster Session - (1) Model Architecture
10:00โ10:30 Coffee break
10:30โ11:00 Panel Discussions (industrial)
11:00โ11:45 Poster Session - (2) Training
11:45โ12:15 Breakout preparation
12:15โ13:30 Lunch
ยง.ยง.ยง Afternoon
Time Activity
13:30โ14:00 Panel Discussions - Setting an agenda for Gen-IR
14:00โ14:45 Poster Session - (3) Evaluation
14:45โ15:30 Refreshment break
15:30โ16:30 Breakout
16:30โ17:15 Poster Session - (4) Applications
17:15โ17:30 Round up and closing discussions
ยง.ยง.ยง Schedule
Date Event
May 2, 2023 Submission deadline
Jun 14, 2023 Notification
Jul 1, 2023 Camera ready versions of accepted papers due
Jul 27, 2023 Gen-IR workshop
ยง ORGANIZERS
Gabriel Benedict is an industry PhD candidate at University of Amsterdam, in collaboration with RTL NL. He is doing a mix of theoretical and applied AI research. The main themes are metrics-as-losses for neural networks, normative diversity metrics for news recommendation, intent-satisfaction modelling, video-to-music AI and most recently diffusion for IR tasks.
Ruqing Zhang is an associate professor at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS). She has worked on a number of problems related to natural language generation and neural ranking models. Her current research focuses especially on how to design generative models for IR, how to improve the robustness of ranking models, and how to make IR trustworthy through the lens of โcausalityโ.
Donald Metzler is a Senior Staff Research Scientist at Google Inc. Prior to that, he was a Research Assistant Professor at the University of Southern California (USC) and a Senior Research Scientist at Yahoo!. He currently leads a research group focused on a variety of problems at the intersection of machine learning, natural language processing, and information retrieval. He is a co-author of the position paperย <cit.>.
ยง PC MEMBERS
Potential PC members for reviewing paper submissions:
* Andrew Yates, University of Amsterdam
* Arian Askari, Leiden University
* Hainan Zhang, JD
* Hyunji Lee, KAST AI
* James Thorne, KAIST AI
* Nicola De Cao, University of Amsterdam
* Qingyao Ai, Tsinghua University
* Roi Cohen, Tel Aviv University
* Ronak Pradeep, University of Waterloo
* Sheng-Chieh Lin, University of Waterloo
* Shengyao Zhuang, The University of Queensland
* Vinh Q. Tran, Google Research
* Xiao Wang, University of Glasgow
* Xinyu Ma, Baidu
* Yujia Zhou, Renmin University of China
* Zhicheng Dou, Renmin University of China
ยง SELECTION PROCESS
We will solicit submission of papers of two to six pages through an open call for papers, representing reports of original research, preliminary research results, proposals for new work, descriptions of generative models based toolkits tailored for IR, and position papers.
All papers will be peer reviewed by the program committee and judged by their relevance to the workshop, especially to the two main themes, and their potential to generate discussion.
ยง TARGET AUDIENCE
The target audience is the broad range of researchers in industry and academia interested in IR and especially in Generative IR. We will advertise the workshop via a dedicated website and a Twitter/Mastodon account.
ยง RELATED WORKSHOPS
As an emerging paradigm, there have not been related workshops held previously at SIGIR or other conferences.
|
http://arxiv.org/abs/2306.04057v1
|
20230606231438
|
High hard X-ray polarization in Cygnus X-1 confined to the intermediate hard state: evidence for a variable jet component
|
[
"Tanmoy Chattopadhyay",
"Abhay Kumar",
"A. R. Rao",
"Yash Bhargava",
"Santosh V. Vadawale",
"Ajay Ratheesh",
"Gulab Dewangan",
"Dipankar Bhattacharyay",
"Mithun N. P. S.",
"Varun Bhalerao"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.IM"
] |
Tanmoy Chattopadhyay
[email protected]
Kavli Institute of Particle Astrophysics and Cosmology, Stanford University
452 Lomita Mall, Stanford, CA 94305, USA
Physical Research Laboratory
Thaltej, Ahmedabad, Gujarat 380009, India
Inter-University Center for Astronomy and Astrophysics
Pune, Maharashtra-411007, India
Tata Institute of Fundamental Research,
Mumbai, Maharashtra-400005, India
Inter-University Center for Astronomy and Astrophysics
Pune, Maharashtra-411007, India
Physical Research Laboratory
Thaltej, Ahmedabad, Gujarat 380009, India
INAF - IAPS, Via Fosso del Cavaliere 100, I-00133 Rome, Italy
Inter-University Center for Astronomy and Astrophysics
Pune, Maharashtra-411007, India
Inter-University Center for Astronomy and Astrophysics
Pune, Maharashtra-411007, India
Department of Physics, Ashoka University
Rai, Sonipat, Haryana-131029, India
Physical Research Laboratory
Thaltej, Ahmedabad, Gujarat 380009, India
Indian Institute of Technology Bombay
Powai, Mumbai, Maharashtra 400076, India
Cygnus X-1, the well-known accreting black hole system,
exhibits several observational features hinting at an intricate interplay between the accretion disk, its atmosphere known as the corona and the putative relativistic jet.
It has been extensively studied using all available observational methods, including using the newly available technique of sensitive X-ray polarimetry.
X-ray polarization characteristics are distinct for coronal and jet emissions.
The low X-ray polarization measured below โผ100 keV is understood as arising from the corona. In contrast, the high polarization measurements reported above โผ400 keV required a separate jet-dominated spectral component, which spectroscopy does not demonstrate conclusively.
Here we report precise polarization measurements in the 100-380 keV region made during three different sub-classes of spectral states of the source using the CZTI instrument onboard AstroSat. A high polarization (23ยฑ4 %) is found mainly in the Intermediate Hard State of the source, and the energy-resolved measurements smoothly connect the coronal and the jet regimes.
When high polarization is observed, the simultaneous spectral data hints at a separate power law component above 100 keV. We examine the possible sources of this energy-dependent high polarization in Cygnus X-1.
ยง INTRODUCTION
Cygnus X-1, a high-mass X-ray binary (HMXB) system, is one of the earliest known X-ray sources, harboring a 21.2ยฑ2.2 solar-mass black hole in a 5.6-day orbit with a 40.6^+7.7_-7.1 solar-mass star, and located
at a distance of 2.22^+0.18_-0.17 kpc from us <cit.>.
Unlike most other X-ray sources, Cygnus X-1 is persistent and has been extensively studied across
almost the entire electromagnetic spectrum over the last five decades <cit.>. The source displays state
transitions between the thermal disk dominated soft state and the
hard state with a power-law dominated spectrum. It is also detected
in radio wavelengths, thought to originate in relativistic jets <cit.>.
Cygnus X-1 is one of the brightest X-ray sources in the hard state, and the hard X-ray emission is attributed mainly to Compton scattering from a hot corona. Some studies indicate an additional component in the hard state spectrum <cit.> which has been interpreted as power-law emission from an optically thin jet <cit.>.
Detailed modeling of the broadband spectral energy distribution (SED) of Cygnus X-1 in the hard state requires consideration of jet emission to account for the soft-gamma ray observations <cit.>. It has been suggested that under certain conditions, the jet emission may contribute significantly in hard X-rays as well <cit.>, similar to a few other black hole sources <cit.>.
However, the extent to which the jet emission can contribute to hard X-rays continues to be debated <cit.>.
Hard X-ray polarization measurements offer a unique possibility to distinguish
between emissions arising in the corona and the jet. However, hard X-ray
polarization measurements are challenging to carry out, and so far only weak hints of polarization in hard X-rays are available <cit.>.
The first attempt to explore the polarization properties of Cygnus X-1 dates back to the 1970s, when a Bragg polarimeter onboard the Eighth Orbiting Solar Observatory (OSO 8) placed an upper limit of a few percent at 2.6 keV <cit.>. Subsequently, there have been attempts to measure the polarization of the source in hard X-rays, both in the coronal regime (a few tens of keV to โผ100 keV) and in the suspected jet regime (above 100 keV) <cit.>.
Recently, <cit.> reported a precise measurement of polarization of Cygnus X-1 in the hard state using Imaging X-ray Polarimetry Explorer (IXPE) in 2-10 keV band.
They found a polarization fraction of 4.0ยฑ0.2 % with an increasing trend in polarization with energy. The polarization angle is -20.7ยฑ1.4^โ (from the local north towards northeast in clockwise direction) and aligns with the outflowing radio jet. These results suggest that the X-ray coronal plasma is extended in the plane of the accretion disk.
IBIS and SPI instruments onboard the The INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) independently measured high polarization for this source at โผ65 % with a polarization angle of 224^โ at energies above 400 keV <cit.>. These results were interpreted as the jet origin of the photons, further corroborated by the spectroscopic analysis showing two distinct spectral components, a thermal Comptonization component at energies below 200 keV and a power law component beyond 200 keV, supposedly, due to synchrotron radiation from the jet.
However, <cit.> modeled wide band spectral energy distribution
of Cygnus X-1 spanning from radio to MeV and suggested that for a realistic set of model parameters, contribution of the jet emission in X-rays is likely negligible.
Later observations by the Polarized Gamma-ray Observer (PoGO+), a dedicated balloon-borne hard X-ray polarimeter sensitive in 19-181 keV, found the source to be unpolarized in the hard state.
They placed an upper limit of 5.6 % at a position angle of 154ยฑ31^โ, similar to what IXPE found <cit.>.
They also estimated upper limits for polarization from the jet component of around 5-10 % <cit.>.
These findings enhance the tension between the low energy polarization measurements and the high polarization found above 250 keV by INTEGRAL, requiring a synchrotron emission component. A detailed polarimetric study of the source in 100-500 keV region (the energy range in which the coronal and the jet components could have similar contribution) can confirm any separate jet component in the hard X-rays. Since the radio emission, believed to be originating from the jet, is known to change between different spectral states of Cygnus X-1,
it is essential to have hard X-ray polarization measurements in different spectral states to decipher the underlying emission mechanisms. However, such state-dependent hard X-ray polarization measurements have not been possible so far.
Cadmium Zinc Telluride Imager (CZTI) is a moderately sensitive hard X-ray polarimeter in 100-380 keV energy range.
The polarization information is obtained by accurately identifying the Compton scattered events in the CZTI plane, which modulate the azimuthal angle distribution if the incident radiation is polarized.
The capability of CZTI as a polarimeter has been demonstrated both in the laboratory before the launch of AstroSat <cit.>
and in space with the measurement of polarization of Crab <cit.>. Polarization measurements for a large sample of Gamma-ray Bursts (GRBs) have also been reported by <cit.> and <cit.>.
CZTI polarimetry range (100-380 keV) bridges the gap between the PoGO+ and INTEGRAL measurements and, therefore, can contribute significantly to understanding the emission mechanism in this energy range.
If there is indeed a transition in the emission mechanism from corona to jet, that can be effectively probed by studying the energy-resolved polarization properties of the source with CZTI.
For this study, we made three long targeted observations of Cygnus X-1 after the source transitioned to hard state (hereafter ID2992, ID4646, and ID5146). We did a detailed polarization analysis of these three observations using the Compton events in CZTI.
In Sec. <ref>, we give the details of the observations along with their spectral state determination. The polarization results are discussed in Sec. <ref> followed by a brief description of the spectroscopic analysis and results in Sec. <ref>. In Sec. <ref>, we discuss the results in the context of coronal and jet contribution to the global emission of Cygnus X-1 in different spectral states.
ยง ASTROSAT OBSERVATION OF CYGNUS X-1
Since the launch of AstroSat, Cygnus X-1 has been observed on several occasions. Many of them, however, are of short exposures, and some are in the soft state with very low hard X-ray flux, not suitable for polarization analysis.
Hence, during the last few observation cycles, three long (>200 ks) observations (AstroSat observations ID2992, ID4646, and ID5146), triggered by the transition of the source from the soft to the hard state, were undertaken.
Details of the source and blank sky observations used for polarization analysis are given in Table <ref>.
To identify the specific subclass of the spectral state, we followed a method described by <cit.>.
Spectral analysis was carried out in 30-100 keV energy range by fitting a powerlaw to the orbit-wise spectra, and a distribution of the orbit-wise fitted spectral index and flux (22-100 keV) was obtained for each of the three observations.
From the distributions, we classify ID2992, ID4646, and ID5146 as Intermediate Hard State (IMH), Intermediate Soft State (IMS), and Pure Hard state (PH), respectively as shown in Figure <ref>.
Hereafter, we denote these observations as IMH2992, IMS4646, and PH5146, respectively. Details of the method for spectral state determination can be found in supplementary material <ref>.
ยง POLARIZATION ANALYSIS RESULTS
We carried out a detailed polarization analysis for these three observations, following the steps described in <cit.>. Details of the CZTI polarization measurement methodology can be found in supplementary section <ref>. The Azimuthal Scattering Angle Distribution (ASAD) and the contour plots for all the three observations are shown in Figure <ref> (a: PH5146, b: IMS4646, c: IMH2992).
The fitted modulations in ASAD for the PH and IMS states are low and not constrained even though the estimated minimum detectable polarizations (MDPs) for PH5146 and IMS4646 are low (<10 %). We also estimate the Bayes factor, which provides a statistical confirmation of the detection of polarization by comparing a sinusoidal polarized model to an unpolarized constant model fitted to the data. The low Bayes factors (<3, as described in appendix <ref>), measured in both the cases, indicate no statistically significant detection of polarization in these two observations.
Analysis of IMH2992, on the other hand, shows statistically significant polarization, with measured polarization fraction of 23ยฑ4 % in 100-380 keV, implying greater than 5ฯ detection for 1 parameter of interest at 68 % confidence level. The observed polarization angle projected in the sky plane is 236ยฑ12^โ, which agrees with the INTEGRAL results. The angle is โผ90^โ away from the IXPE measured polarization angle in 2-10 keV. The contour plot on the right side of the figure shows that the polarization degree and the angle are well constrained at 68, 95, and 99 % confidence levels. The Bayes factor is also high (โผ733), confirming very high statistical significance.
With such high detection significance for IMH2992, we explored the energy dependence of the polarization. Figure <ref> (b), (d), and (e) show the modulation curves for IMH2992 in three energy ranges: 100-175, 175-230, and 230-380 keV.
The signal in 100-175 keV is found to be unmodulated (Bayes factor <1).
The signals at higher energies (175-230 keV and 230-380 keV), on the other hand, are found to be polarized (26ยฑ6 % at 228ยฑ13^โ and 39ยฑ9 % at 239ยฑ11^โ respectively) at >4ฯ level (Bayes factor of 33 and 254, respectively). We measure upper limits of polarization for the data in the first energy bin (100-175 keV) of IMH2992 and for the other two observations in 100-380 keV (see the rightmost column of table <ref>).
Figure <ref> shows the polarization fraction and angle of Cygnus X-1 in different spectral states (denoted by different symbols) from all available measurements till date (in different colors), including the AstroSat CZTI measurements presented here as blue data points.
Since the INTEGRAL data of Cygnus X-1 encompass a few years of observation in total, we denote the results as averaged hard state, supposedly consisting of both pure hard and intermediate states.
It can be seen that the CZTI measurements smoothly bridge the gap between the corona-dominated low energy (< 100 keV) measurements of low polarization and the high polarization measured by INTEGRAL.
ยง SPECTROSCOPIC ANALYSIS RESULTS
To investigate the spectral signatures of the polarization signal present in IMH, we undertake spectroscopic analysis of the source for these three observations.
The broadband X-ray spectrum of the source can typically be characterized as a combination of a thermal accretion disk and emission from a Comptonizing medium, with the Comptonized emission often showing structure in the hard state of the source; e.g., two components with different optical depths <cit.> and different electron temperatures <cit.>, or a hybrid distribution of electrons <cit.>.
Since the effect of the polarized component is limited to X-rays beyond 100 keV, we need to investigate the hard X-ray spectrum in detail with minimal dependence of the model on biases from lower energies. Therefore, for spectral analysis, we focus only on modeling the CZTI spectrum in 30-190 keV and do not include softer X-ray data from the other two AstroSat instruments: the Soft X-ray Telescope (SXT) and the Large Area X-ray Proportional Counter (LAXPC).
<cit.> have investigated the hard X-ray spectrum (10โ400ย keV) of Cygnus X-1 in the low hard state at multiple epochs and are able to model the emission with Comptonization with reflection from cold matter with compPS <cit.>. Thus we use a similar formalism to model the CZTI spectrum in 30โ190ย keV.
For each observation ID, we filter the raw event files following the standard CZTI data analysis pipeline procedure and generate clean event files. From the clean event files, we generate the background-subtracted source spectrum using an improved mask-weighting technique with updated calibration (Mithun et al., in prep), which is implemented in
a module of the CZTI data analysis pipeline version 3.0
and the associated
CALDB[<http://astrosat-ssc.iucaa.in/cztiData>].
In the top panel of Figure <ref>, we show the inherent differences in the shape of the spectrum in the three states by computing the ratio of the respective spectrum to the Crab spectrum. This removes the instrumental effects and allows for a comparison of the spectral slopes in a model-independent manner. The PH5146 state has the highest flux and shows the characteristics of thermal Comptonization spectra with a cutoff at โผ100 keV. In the other two states (IMH2992 and IMS4646), there is an indication of the cutoff energy being lower and the emergence of a power law component. Since we are restricted to energies above 30 keV, for spectral modeling we fix all parameters that only affect the spectrum below this energy (i.e. disk temperature and reflection parameters) to their typical values <cit.> and assume a spherical geometry of the Comptonizing medium <cit.>. Thus the only parameters we constrain are the optical depth of the medium, the electron temperature, and the normalization. We find that all three observations can be described adequately with a hard Comptonization (ฯ^2 of 101.1, 99.5 and 157.1 for 84 degrees of freedom in IMH2992, IMS4646, and PH5146 respectively) with the parameters consistent with those reported by <cit.>. The fitted parameter values are given in Table <ref> in appendix <ref>. The electron temperature in IMH is higher (โผ170 keV) than that observed in IMS (โผ130 keV) or PH (โผ84 keV), while the optical depths follow the reverse trend. Confidence intervals of the parameters are determined by Markov Chain Monte Carlo sampling of the parameter space using 50 walkers running for 10000 steps (after discarding the initial 2000 burn-in steps).
<cit.> have reported the presence of a powerlaw-like component in the INTEGRAL spectrum of Cygnus X-1 in its hard state. We test for the presence of a similar component in the CZTI spectra by including a powerlaw with a fixed slope and a variable normalization. The addition of the component does not change the fit statistics (reduced ฯ^2 is โฒ1), but in the case of the IMH observation, it causes a significant change in the electron temperature. We keep the slope of the powerlaw component fixed at 1.6, as it is representative of the typical synchrotron jet observed by INTEGRAL <cit.>; since we only want to test the presence of the component, estimating its normalization is sufficient. The model decomposition for the individual observations
(for the second model) is shown in Figure <ref>. The model parameters are noted in Table <ref> in appendix <ref>.
We note that the spectral modeling of the IMH state allows inclusion of a powerlaw component with reasonable constraints on its normalization, though with this component included, the electron temperature becomes similar to that of the IMS and PH states. The IMS and PH observations also allow inclusion of the power-law component, but with a much lower normalization that is consistent with zero within a few standard deviations.
Based on the spectroscopic analysis, we conclude that there is a degeneracy in the spectral information in IMH2992 with both Comptonization and Comptonization + Powerlaw models being able to describe the spectrum. The latter configuration aligns better with the polarization results if we assume that the powerlaw component is the main contributor to the observed polarization in >175ย keV. The measured relative contributions of the powerlaw (1.4ร10^-9 erg/cm^2 in 100-175 keV, 8.4ร10^-10 erg/cm^2 in 175-230 keV, and 9.2ร10^-10 erg/cm^2 in 230-380 keV) to the total flux (3.9ร10^-9 erg/cm^2 in 100-175 keV, 1.6ร10^-9 erg/cm^2 in 175-230 keV, and 1.4ร10^-9 erg/cm^2 in 230-380 keV) are consistent with the flux contributions expected from the observed polarization results within 1ฯ scatter of each other, assuming a 50 % maximum polarization from the synchrotron flux (more details in appendix <ref>).
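The consistency statement above can be reproduced with simple arithmetic: if only the power-law component is polarized, with an assumed intrinsic polarization fraction of 50 %, the net polarization expected in each band is roughly 0.5 times F_powerlaw/F_total. The short Python sketch below simply re-derives these numbers from the fluxes quoted in the text.

bands   = ["100-175 keV", "175-230 keV", "230-380 keV"]
f_pl    = [1.4e-9, 8.4e-10, 9.2e-10]   # power-law component flux (as quoted above)
f_total = [3.9e-9, 1.6e-9, 1.4e-9]     # total flux (as quoted above)
for band, fp, ft in zip(bands, f_pl, f_total):
    print(f"{band}: expected polarization fraction ~ {0.5 * fp / ft:.0%}")

This gives roughly 18 %, 26 %, and 33 % in the three bands, to be compared with the measured upper limit in 100-175 keV and the 26ยฑ6 % and 39ยฑ9 % detections at higher energies.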
ยง SUMMARY AND DISCUSSIONS
In this paper, we report new polarization measurements of Cygnus X-1 using the AstroSat-CZTI instrument in 100-380 keV. Polarization measurements were made in three different spectral states - pure hard (PH), intermediate hard (IMH), and intermediate soft (IMS). In the PH and IMS states, we did not see any evidence of polarization (upper limit of โผ10 %). However, the IMH state was seen to have polarized emission with a fraction of 23ยฑ4 %, measured with more than 5ฯ significance, at an angle
around 236^โ (local north to east in anti-clockwise direction). Energy-resolved analysis shows that the polarization increases with energy, from no detectable polarization below 175 keV to โผ40 % polarization at higher energies.
It has been known for a long time that the spectral shape observed at high energies for Cygnus X-1 requires multiple components. The two distinct polarization angles measured at low and high energies (see Figure <ref>) strongly indicate the existence of a distinct spectral component at high energies, with an origin different from that in the putative corona
<cit.>.
The high polarization seen here strongly links this component to synchrotron radiation in an ordered magnetic field, possibly from the base of the jet. Further, finding high polarization confined only to the IMH state provides further clues to the origin of jets in Cygnus X-1.
In the hard state, the different sub-classes (PH, IMH, and IMS) are configurations of the accretion disk dictated by accretion rate, location of disk truncation, and the strength of outflows and jets.
The IMH state is fascinating because, in this spectral state, the maximum radio flux variation has been observed <cit.>, and strong jets are expected to be formed. We also detect high polarization in this state. The evolution of the polarization fraction with energy in this state, along with the spectral analysis results, therefore favors a scenario where the coronal and jet emission mechanisms co-exist in the 100-380 keV energy range and intersect around 200 keV.
In the PH and IMS spectral states, on the other hand, there is no evidence of polarization from the jet component, either in the energy-integrated or in the energy-resolved analysis. However, it is to be noted that steady radio emission is seen in both these states, although the flux and its variation are found to be low <cit.>.
This suggests that the jet component may be present in all three states with similar polarization properties, but in the PH and the IMS states, the X-ray emission is dominated by the corona all the way to โผ400 keV.
Synchrotron process in an ordered magnetic field in a jet represents the most probable way to produce highly polarized emission. However, as emphasized by <cit.>, high polarization levels in X-rays (e.g. seen in INTEGRAL) require extreme conditions that are unlikely to occur in a steady-state jet.
Recently, <cit.> attempted multi-wavelength SED modeling of Cygnus X-1 spectral and polarimetric data (radio, IR, optical, and INTEGRAL data in X-rays), based on synchrotron emission in an ordered magnetic field of a steady-state jet. They failed to explain the high-energy polarization angles (โผ60^โ away from the jet) reported by INTEGRAL, while the optical and IR emissions are polarized in the direction of the jet. One expects the intrinsic polarization angle to be wavelength independent when a single electron population in an optically thin jet is responsible for emission across the electromagnetic spectrum.
These contradictions suggest an alternative possibility that the observed high polarization may result from some peculiar transient phenomena occurring mainly in the IMH state of the source. We speculate below a possible scenario.
In transient black hole binaries, it is observed that sources traverse well-defined paths of state transitions starting from the low hard state (with the indication of a steady jet as evidenced by the radio emission) and then make a state transition to the soft State <cit.>. This transition, quite often, passes through the IMH state, and it is even suggested that during this transition, the source passes through a specific `jet-line' where super-luminal jet ejections are observed to take place <cit.>. Cygnus X-1 is a high-mass X-ray binary, and, in contrast to the black hole transients, it makes slow transitions and spends several days in each sub-state. It is quite conceivable that we are seeing the `jet-line' transition in slow motion during the IMH state of Cygnus X-1. Hence, many of the assumptions of the steady-state jets, which failed to explain the high polarization and the different PA, may not be valid during a transient jet. Exploring transient jet formation, based on the constraints presented in this work, could enable us to understand the intricate disk-jet connection in black hole sources. Observationally, a detailed time-resolved multi-wavelength observation of Cygnus X-1 in the IMH state would be instrumental in understanding this enigmatic source.
This publication uses data from the AstroSat mission of the Indian Space Research
Organisation (ISRO), archived at the Indian Space
Science Data Centre (ISSDC). CZT-Imager is built
by a consortium of institutes across India,
including the Tata Institute of Fundamental
Research (TIFR), Mumbai, the Vikram Sarabhai Space
Centre, Thiruvananthapuram, ISRO Satellite Centre
(ISAC), Bengaluru, Inter University Centre for
Astronomy and Astrophysics, Pune, Physical
Research Laboratory, Ahmedabad, Space Application
Centre, Ahmedabad. Contributions from the vast
technical team from all these institutes are
gratefully acknowledged. Specifically, we would
like to thank M. K. Hingar, A. P. K. Kutty, M. H.
Patil, S. Sinha and Y. K. Arora (TIFR) for the CZT-
Imager hardware fabrication; and K. S. Sarma, K.
H. Navalgund, R. Pandiyan and K. Subbarao (ISAC)
for project management and mission operation. The
continued support from M. Annadurai and A. S.
Kirankumar is gratefully acknowledged.
§ DETERMINATION OF SPECTRAL STATES
CZTI data consist of a time-tagged event list with a time resolution of 20 μs, which includes the CZTI quadrant, CZT detector module ID, pixel number, and PHA value for each event. The CZTI data reduction pipeline takes this event list
as input and generates standard data products like light curves and spectra. To generate background-subtracted spectra and light curves, the CZTI analysis pipeline uses a mask-weighting technique in which the background is measured simultaneously, taking into account the pixels' open fractions and effective areas.
In order to identify the spectral state class of ID2992, ID4646, and ID5146, we followed the same technique prescribed by <cit.>.
They performed a detailed spectral analysis of Cygnus X-1 using INTEGRAL data spanning over fifteen years and categorised the states into hard and soft regimes based on the hard X-ray flux in 22-100 keV: hard state for flux above 75×10^-10 erg cm^-2 s^-1 and soft state for flux below this value. Each regime is further categorised into pure, transitional and intermediate, totaling six states: pure hard (PH, Γ ≤ 1.78), transitional hard (TH, 1.78 ≤ Γ ≤ 1.93), hard intermediate (IMH, 1.93 ≤ Γ ≤ 2.29), soft intermediate (IMS, 1.93 ≤ Γ ≤ 2.29), transitional soft (TS, 2.29 ≤ Γ ≤ 2.65), and pure soft (PS, Γ > 2.65), based on the clustering of the data in the spectral index versus flux density diagram.
Spectral analysis similar to that of <cit.> requires hourly or sub-hourly data segments (0.5-2 hours). Since each AstroSat orbit lasts ∼96 minutes, we proceeded with spectral analysis of orbit-wise data.
Each of the three observations is divided into separate orbit-wise cleaned event files by applying the orbit-wise Good Time Interval (GTI), which is obtained using a program written in Interactive Data Language (IDL). We exclude the South Atlantic Anomaly (SAA) regions for each orbit.
The orbit-wise event files are then used to obtain the spectral and response files from the standard CZTI data reduction pipeline.
The spectral analysis is carried out using xspec <cit.> for each orbit file. The spectra of all the observations are fitted with a powerlaw model. The spectral indices in 30-100 keV and the computed model fluxes in 22-100 keV obtained from the spectral fitting of the orbits are plotted in Figure <ref> for the three observations. The average values of the spectral indices and flux are given in Table <ref>. Based on these values, we determine that ID5146 is in the PH state, ID2992 is in the IMH state, and ID4646 is in the IMS state.
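The classification step itself is simple once the orbit-wise fits are available. The following sketch illustrates it, assuming the photon indices and 22-100 keV model fluxes have already been extracted from the xspec fits; the threshold values are those quoted above, the numerical inputs are hypothetical, and boundary cases are resolved in the reference scheme by clustering in the index-flux plane rather than by hard cuts.

```python
import numpy as np

# Regime boundary quoted above (22-100 keV flux in erg cm^-2 s^-1);
# all numerical inputs below are hypothetical examples.
FLUX_SPLIT = 75e-10

def classify_state(gamma, flux_22_100):
    """Simplified mapping of (photon index, 22-100 keV flux) to a state label."""
    if flux_22_100 > FLUX_SPLIT:      # hard regime
        if gamma <= 1.78:
            return "PH"
        if gamma <= 1.93:
            return "TH"
        return "IMH"                  # 1.93 < Gamma <= 2.29 in the scheme
    else:                             # soft regime
        if gamma > 2.65:
            return "PS"
        if gamma > 2.29:
            return "TS"
        return "IMS"                  # 1.93 <= Gamma <= 2.29 in the scheme

# Hypothetical orbit-wise fit results for one observation:
gammas = np.array([2.05, 2.10, 1.98])
fluxes = np.array([9.1e-9, 8.8e-9, 9.4e-9])
print(classify_state(gammas.mean(), fluxes.mean()))   # -> IMH
```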
§ X-RAY POLARIMETRY WITH CZT-IMAGER
In the CZTI, polarization is estimated from the azimuthal scattering angle distribution (ASAD) of the Compton scattered photons <cit.>. The CZTI consists of a large pixelated detector plane (geometric area of 976 cm^2) with a pixel size of 2.5 mm × 2.5 mm and a thickness of 5 mm, and possesses considerable Compton scattering efficiency above 100 keV, making it suitable for Compton scattering polarimetry in 100-380 keV. The on-board electronics preserve simultaneous multi-pixel events and enable time-tagged transmission of individual events, thus providing polarization information on a routine basis. Here we briefly describe the polarization analysis steps <cit.>.
§.§.§ Selection of Compton events
The first step of the polarization analysis is to select valid Compton events. For each of the three individual observations, we removed the intervals of high background before and after the South Atlantic Anomaly passage in each orbit. We also removed all events from pixels that were classified as noisy or spectroscopically bad. Then, we extracted the double-pixel events satisfying the Compton criteria, i.e., detected within a 20 μs time window in two adjacent pixels and with event energetics satisfying Compton kinematics. For details, see <cit.>. The Compton events are then used to obtain the source ASAD.
§.§.§ Background subtraction
It is important to consider an appropriate blank sky observation for the measurement of background. Since the CZTI mask and other support structures become increasingly transparent at energies beyond 100 keV, a bright X-ray source at a large off-axis angle, even up to 80^โ, can interfere with the true background. The blank sky observations were taken from a region where the Crab and Cygnus X-1 are out of the open field of view of the CZTI. We also need to consider the effect of the earth X-ray albedo, which constitutes a large fraction of the hard X-ray background in the low earth orbit. Since CZTI is at the corner of the spacecraft with most instruments present only at one side of CZTI and albedo background comes from one side of the spacecraft, this may lead to an asymmetry in the background azimuthal scattering angle distribution. In order to minimize this effect, the blank sky observations were selected such that the relative orientation of the spacecraft during the background measurement is the same (ยฑ5^โ) as that during the source measurement. The same data cleaning process and Compton criteria are implemented to generate the background ASAD.
Prior to the background ASAD subtraction, an important point to consider is that the background count rate changes within the duration of observation with a stable periodic nature. This results from the inclined orbit of AstroSat where some of the orbits pass through the outskirts of the South Atlantic Anomaly giving an increase in count rate when the spacecraft is in these regions of the orbit. Because of the rotation of the earth, this phase of high count rate reappears every โผ24 hours and this has been seen in other AstroSat instruments also <cit.> apart from CZTI <cit.>. To correct for this effect, we try to match the phases of orbital variation of the count rate during background and Cygnus X-1 observations using a cross-correlation method <cit.> and identify the common or phase-matched regions. These phase matched regions are used to correct for the long-term variation in data before background subtraction.
§.§.§ Modulation curve fitting
In the next step, we fit the ASAD to obtain the polarization fraction and angle. Because of the non-uniformity in the solid angles subtended by the surrounding edge and corner pixels to the central scattering pixel, we see unequal count rates in the edge and corner pixels; this is first corrected by normalizing with an ASAD for 100 % unpolarized radiation in the same energy range. For gamma-ray bursts, this is typically obtained from Geant4 <cit.> Monte Carlo simulations <cit.>. However, for ON-axis sources, the unpolarized ASAD is best obtained from the observed source azimuthal distribution by averaging the edge and corner pixels separately <cit.>. The geometry-corrected modulation curves are fitted by a sinusoidal function, A cos[2(φ - φ_0 + π/2)] + B, to estimate the polarization angle in the detector plane (φ_0) and the modulation amplitude (μ = A/B).
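As an illustration of this step only, the sketch below applies the geometry correction and fits the modulation curve with the sinusoid quoted above using a simple least-squares routine; the actual analysis estimates the parameters with MCMC and Bayes-factor model selection, as described next. All numerical inputs are synthetic, and the use of eight azimuthal bins is an assumption of this example.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulation(phi, a, b, phi0):
    """a*cos(2*(phi - phi0 + pi/2)) + b, angles in radians."""
    return a * np.cos(2.0 * (phi - phi0 + np.pi / 2.0)) + b

rng = np.random.default_rng(0)
phi_bins = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)   # assumed 8 azimuthal bins

# Synthetic background-subtracted source ASAD and an idealised unpolarised
# (geometry) template; real templates come from edge/corner averaging or Geant4.
truth = modulation(phi_bins, 30.0, 400.0, 0.6)
src_asad = rng.poisson(truth).astype(float)
unpol_asad = np.full_like(src_asad, 400.0)

corrected = src_asad / unpol_asad * unpol_asad.mean()          # geometry correction
popt, pcov = curve_fit(modulation, phi_bins, corrected,
                       p0=[0.1 * corrected.mean(), corrected.mean(), 0.0])
a_fit, b_fit, phi0_fit = popt
mu = a_fit / b_fit        # modulation amplitude; PF follows as mu / mu_100
print(mu, phi0_fit)
```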
Errors on the raw ASAD of source and background observations are computed individually based on counting statistics and propagated to calculate the errors on the background-subtracted ASAD.
To estimate the values of the fitting parameters (A, B, φ_0) and the uncertainties on them, we perform MCMC simulations for a large number (1 million) of iterations. For each iteration, the posterior probability is estimated based on randomly sampled model parameter values. At the end of the chain, the modulation factor and polarization angle are estimated from the best-fitted values of the parameters (A, B and φ_0), while the uncertainties on them are computed from the posterior distributions of the parameters. The polarization fraction for each of these observations is estimated by normalizing the fitted μ by the modulation factor expected for 100 % polarized radiation (μ_100), which is obtained from Geant4 simulations in which an identical process for the selection of Compton events is followed. In order to confirm that an observation is statistically polarized, we estimate the Bayes factor for the sinusoidal model (M_1, polarized photons) against a constant model (M_2, unpolarized photons) as the ratio of the marginal likelihoods of M_1 to M_2 <cit.>.
In the cases where the Bayes factor is greater than 3, we estimate the polarization fraction and angle from the fitted parameters (for example, for IMH2992, the Bayes factor is >3 in the full energy range and in 175-230 keV and 230-380 keV). If the Bayes factor is less than 3 (e.g., PH5146, IMS4646, and IMH2992 in 100-175 keV), we estimate a polarization upper limit instead.
§ SPECTRAL FIT RESULTS
The spectral data for the three observations (IMH2992, IMS4646, and PH5146) were fitted with two different models - and . In Table <ref>, the fitted parameter values are summarized for the models.
From the fitted power-law and Comptonization parameters for IMH2992, we computed the contribution of the power law to the total flux in 100-175, 175-230, and 230-380 keV. Assuming a synchrotron origin of the power law and a maximum polarization fraction of 50 %, we computed the expected polarization fraction as a function of energy and compared it with the observed polarization fractions, as shown in Figure <ref>.
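The arithmetic behind that comparison is simply a rescaling of the assumed maximum synchrotron polarization by the power-law share of the flux in each band; a minimal sketch follows, with placeholder component fluxes standing in for the fitted values.

```python
import numpy as np

bands = ["100-175 keV", "175-230 keV", "230-380 keV"]
# Placeholder band-integrated fluxes of the power-law and Comptonisation
# components (arbitrary common units); the real values come from the fits.
f_powerlaw = np.array([1.0, 0.8, 0.9])
f_compton = np.array([4.0, 1.6, 0.6])

pf_synchrotron_max = 0.5   # assumed maximum polarization of the synchrotron component
pf_expected = pf_synchrotron_max * f_powerlaw / (f_powerlaw + f_compton)
for band, pf in zip(bands, pf_expected):
    print(f"{band}: expected PF <= {pf:.2f}")
```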
[Agostinelli etย al.(2003)Agostinelli, Allison, Amako,
Apostolakis, Araujo, Arce, Asai, Axen, Banerjee, Barrand,
Behner, Bellagamba, Boudreau, Broglia, Brunengo, Burkhardt,
Chauvie, Chuma, Chytracek, Cooperman, Cosmo, Degtyarenko,
Dell'Acqua, Depaola, Dietrich, Enami, Feliciello, Ferguson,
Fesefeldt, Folger, Foppiano, Forti, Garelli, Giani,
Giannitrapani, Gibin, Gรณmez Cadenas, Gonzรกlez, Gracia
Abril, Greeniaus, Greiner, Grichine, Grossheim, Guatelli,
Gumplinger, Hamatsu, Hashimoto, Hasui, Heikkinen, Howard,
Ivanchenko, Johnson, Jones, Kallenbach, Kanaya, Kawabata,
Kawabata, Kawaguti, Kelner, Kent, Kimura, Kodama, Kokoulin,
Kossov, Kurashige, Lamanna, Lampรฉn, Lara, Lefebure, Lei,
Liendl, Lockman, Longo, Magni, Maire, Medernach, Minamimoto,
Mora de Freitas, Morita, Murakami, Nagamatu, Nartallo, Nieminen,
Nishimura, Ohtsubo, Okamura, O'Neale, Oohata, Paech, Perl,
Pfeiffer, Pia, Ranjard, Rybin, Sadilov, Di Salvo, Santin,
Sasaki, Savvas, Sawada, Scherer, Sei, Sirotenko, Smith,
Starkov, Stoecker, Sulkimo, Takahata, Tanaka, Tcherniaev, Safai
Tehrani, Tropeano, Truscott, Uno, Urban, Urban, Verderi,
Walkden, Wander, Weber, Wellisch, Wenaus, Williams, Wright,
Yamada, Yoshida, Zschiesche, & G EANT4
Collaboration]agostinelli03
Agostinelli, S., Allison, J., Amako, K., etย al. 2003, Nuclear
Instruments and Methods in Physics Research A, 506, 250,
10.1016/S0168-9002(03)01368-8
[Antia etย al.(2022)Antia, Agrawal, Katoch, Manchanda,
Mukerjee, & Shah]antia22
Antia, H.ย M., Agrawal, P.ย C., Katoch, T., etย al. 2022, arXiv e-prints,
arXiv:2205.03136.
2205.03136
[Arnaud(1996)]arnaud1996xspec
Arnaud, K. 1996, in Astronomical Data Analysis Software and Systems V, Vol.
101, 17
[Basak etย al.(2017)Basak, Zdziarski, Parker, &
Islam]Basak2017MNRAS.472.4220B
Basak, R., Zdziarski, A.ย A., Parker, M., & Islam, N. 2017, ,
472, 4220, 10.1093/mnras/stx2283
[Belloni etย al.(2005)Belloni, Homan, Casella, van der
Klis, Nespoli, Lewin, Miller, & Mรฉndez]belloni05
Belloni, T., Homan, J., Casella, P., etย al. 2005, , 440, 207,
10.1051/0004-6361:20042457
[Bernard etย al.(2022)Bernard, Chattopadhyay, Kislat, &
Produit]denis22
Bernard, D., Chattopadhyay, T., Kislat, F., & Produit, N. 2022, arXiv
e-prints, arXiv:2205.02072.
2205.02072
[Cadolle Bel etย al.(2006)Cadolle Bel, Sizun, Goldwurm,
Rodriguez, Laurent, Zdziarski, Foschini, Goldoni, Gouiffรจs,
Malzac, Jourdain, & Roques]bel06_cgx1
Cadolle Bel, M., Sizun, P., Goldwurm, A., etย al. 2006, , 446, 591,
10.1051/0004-6361:20053068
[Chattopadhyay(2021)]chattopadhyay21_review
Chattopadhyay, T. 2021, Journal of Astrophysics and Astronomy, 42, 106,
10.1007/s12036-021-09769-5
[Chattopadhyay etย al.(2014)Chattopadhyay, Vadawale, Rao,
Sreekumar, & Bhattacharya]chattopadhyay14
Chattopadhyay, T., Vadawale, S.ย V., Rao, A.ย R., Sreekumar, S., &
Bhattacharya, D. 2014, Experimental Astronomy, 37, 555,
10.1007/s10686-014-9386-1
[Chattopadhyay etย al.(2019)Chattopadhyay, Vadawale, Aarthy,
Mithun, Chand, Ratheesh, Basak, Rao, Bhalerao, Mate, Arvind,
Sharma, & Bhattacharya]chattopadhyay19
Chattopadhyay, T., Vadawale, S.ย V., Aarthy, E., etย al. 2019, , 884,
123, 10.3847/1538-4357/ab40b7
[Chattopadhyay etย al.(2022)Chattopadhyay, Gupta, Iyyani,
Saraogi, Sharma, Tsvetkova, Ratheesh, Gupta, Mithun, Vaishnava,
Prasad, Aarthy, Kumar, Rao, Vadawale, Bhalerao, Bhattacharya,
Vibhute, & Frederiks]chattopadhyay22_grb
Chattopadhyay, T., Gupta, S., Iyyani, S., etย al. 2022, , 936, 12,
10.3847/1538-4357/ac82ef
[Chauvin etย al.(2018)Chauvin, Florรฉn, Friis, Jackson,
Kamae, Kataoka, Kawano, Kiss, Mikhalev, Mizuno, Ohashi,
Stana, Tajima, Takahashi, Uchida, & Pearce]chauvin18a
Chauvin, M., Florรฉn, H.ย G., Friis, M., etย al. 2018, Nature
Astronomy, 2, 652, 10.1038/s41550-018-0489-x
[Chauvin etย al.(2019)Chauvin, Florรฉn, Jackson, Kamae,
Kataoka, Kiss, Mikhalev, Mizuno, Takahashi, Uchida, &
Pearce]chauvin18b
Chauvin, M., Florรฉn, H.-G., Jackson, M., etย al. 2019, , 483,
L138, 10.1093/mnrasl/sly233
[Cui etย al.(1997)Cui, Heindl, Rothschild, Zhang, Jahoda,
& Focke]cui97_cgx1
Cui, W., Heindl, W.ย A., Rothschild, R.ย E., etย al. 1997, , 474,
L57, 10.1086/310419
[Di Salvo etย al.(2001)Di Salvo, Done, ลปycki, Burderi,
& Robba]salvo01_cgx1
Di Salvo, T., Done, C., ลปycki, P.ย T., Burderi, L., & Robba,
N.ย R. 2001, , 547, 1024, 10.1086/318396
[Ebisawa etย al.(1996)Ebisawa, Ueda, Inoue, Tanaka, &
White]ebisawa96
Ebisawa, K., Ueda, Y., Inoue, H., Tanaka, Y., & White, N.ย E. 1996,
, 467, 419, 10.1086/177616
[Fender etย al.(2004)Fender, Belloni, & Gallo]fender04
Fender, R.ย P., Belloni, T.ย M., & Gallo, E. 2004, , 355, 1105,
10.1111/j.1365-2966.2004.08384.x
[Fender etย al.(2006)Fender, Stirling, Spencer, Brown,
Pooley, Muxlow, & Miller-Jones]fender06_cgx1
Fender, R.ย P., Stirling, A.ย M., Spencer, R.ย E., etย al. 2006, ,
369, 603, 10.1111/j.1365-2966.2006.10193.x
[Gallo etย al.(2003)Gallo, Fender, & Pooley]gallo03_cgx1
Gallo, E., Fender, R.ย P., & Pooley, G.ย G. 2003, , 344, 60,
10.1046/j.1365-8711.2003.06791.x
[Gierlinski etย al.(1997)Gierlinski, Zdziarski, Done,
Johnson, Ebisawa, Ueda, Haardt, & Phlips]gierlinski97_cgx1
Gierlinski, M., Zdziarski, A.ย A., Done, C., etย al. 1997, , 288,
958, 10.1093/mnras/288.4.958
[Jourdain etย al.(2014)Jourdain, Roques, &
Chauvin]Jourdain14_cgx1
Jourdain, E., Roques, J.ย P., & Chauvin, M. 2014, The Astrophysical Journal,
789, 26, 10.1088/0004-637x/789/1/26
[Jourdain etย al.(2012a)Jourdain, Roques,
Chauvin, & Clark]jourdain12
Jourdain, E., Roques, J.ย P., Chauvin, M., & Clark, D.ย J.
2012a, Astrophysical Journal, 761, 27,
10.1088/0004-637X/761/1/27
[Jourdain etย al.(2012b)Jourdain, Roques, &
Malzac]jourdain12_cgx1
Jourdain, E., Roques, J.ย P., & Malzac, J. 2012b, , 744,
64, 10.1088/0004-637X/744/1/64
[Kantzas etย al.(2020)Kantzas, Markoff, Beuchert, Lucchini, Chhotray,
Ceccobello, Tetarenko, Miller-Jones, Bremer, Garcia, & etย al.]Kantzas20
Kantzas, D., Markoff, S., Beuchert, T., etย al. 2020, Monthly Notices of the
Royal Astronomical Society, 500, 2112โ2126, 10.1093/mnras/staa3349
[Krawczynski etย al.(2022)Krawczynski, Muleri, Dovฤiak,
Veledina, Rodriguez Cavero, Svoboda, Ingram, Matt, Garcia,
Loktev, Negro, Poutanen, Kitaguchi, Podgornรฝ, Rankin,
Zhang, Berdyugin, Berdyugina, Bianchi, Blinov, Capitanio, Di
Lalla, Draghis, Fabiani, Kagitani, Kravtsov, Kiehlmann,
Latronico, Lutovinov, Mandarakas, Marin, Marinucci, Miller,
Mizuno, Molkov, Omodei, Petrucci, Ratheesh, Sakanoi, Semena,
Skalidis, Soffitta, Tennant, Thalhammer, Tombesi, Weisskopf,
Wilms, Zhang, Agudo, Antonelli, Bachetti, Baldini, Baumgartner,
Bellazzini, Bongiorno, Bonino, Brez, Bucciantini, Castellano,
Cavazzuti, Ciprini, Costa, De Rosa, Del Monte, Di Gesu, Di
Marco, Donnarumma, Doroshenko, Ehlert, Enoto, Evangelista,
Ferrazzoli, Gunji, Hayashida, Heyl, Iwakiri, Jorstad, Karas,
Kolodziejczak, La Monaca, Liodakis, Maldera, Manfreda, Marscher,
Marshall, Massaro, Mitsuishi, Ng, O'Dell, Oppedisano, Papitto,
Pavlov, Peirson, Perri, Pesce-Rollins, Pilia, Possenti,
Puccetti, Ramsey, Romani, Sgrรฒ, Slane, Spandre, Tamagawa,
Tavecchio, Taverna, Tawara, Thomas, Trois, Tsygankov, Turolla,
Vink, Wu, Xie, & Zane]krawczynski22_ixpe
Krawczynski, H., Muleri, F., Dovฤiak, M., etย al. 2022, arXiv
e-prints, arXiv:2206.09972.
2206.09972
[Kumar etย al.(2021)Kumar, Chattopadhyay, Vadawale, Rao,
Gupta, Mithun N.ย P., Bhalerao, & Bhattacharya]kumar21
Kumar, A., Chattopadhyay, T., Vadawale, S.ย V., etย al. 2021, Journal of
Astrophysics and Astronomy, 42, 10.1007/s12036-021-09711-9
[Kumar etย al.(2022)Kumar, Chattopadhyay, Vadawale, Rao, Mithun,
Bhalerao, & Bhattacharya]kumar22
Kumar, A., Chattopadhyay, T., Vadawale, S.ย V., etย al. 2022, Monthly Notices
of the Royal Astronomical Society, 10.1093/mnras/stac2466
[Laurent etย al.(2011)Laurent, Rodriguez, Wilms, Cadolle
Bel, Pottschmidt, & Grinberg]laurent11
Laurent, P., Rodriguez, J., Wilms, J., etย al. 2011, Science, 332, 438,
10.1126/science.1200848
[Lei etย al.(1997)Lei, Dean, & Hills]lei97
Lei, F., Dean, A.ย J., & Hills, G.ย L. 1997, Space Science Reviews, 82,
309, 10.1023/A:1005027107614
[Long etย al.(1980)Long, Chanan, & Novick]long80
Long, K.ย S., Chanan, G.ย A., & Novick, R. 1980, , 238, 710,
10.1086/158027
[Lubiลski etย al.(2020)Lubiลski, Filothodoros, Zdziarski,
& Pooley]Lubinski20
Lubiลski, P., Filothodoros, A., Zdziarski, A.ย A., & Pooley, G. 2020, The
Astrophysical Journal, 896, 101, 10.3847/1538-4357/ab9311
[Makishima etย al.(2008)Makishima, Takahashi, Yamada, Done,
Kubota, Dotani, Ebisawa, Itoh, Kitamoto, Negoro, Ueda, &
Yamaoka]Makishima2008PASJ...60..585M
Makishima, K., Takahashi, H., Yamada, S., etย al. 2008, , 60, 585,
10.1093/pasj/60.3.585
[Malyshev etย al.(2013)Malyshev, Zdziarski, &
Chernyakova]malyshev13_cgx1
Malyshev, D., Zdziarski, A.ย A., & Chernyakova, M. 2013, , 434,
2380, 10.1093/mnras/stt1184
[Markoff etย al.(2001)Markoff, Falcke, & Fender]markoff01
Markoff, S., Falcke, H., & Fender, R. 2001, Astronomy & Astrophysics,
372, L25, 10.1051/0004-6361:20010420
[McConnell etย al.(2002)McConnell, Zdziarski, Bennett,
Bloemen, Collmar, Hermsen, Kuiper, Paciesas, Phlips, Poutanen,
Ryan, Schรถnfelder, Steinle, & Strong]mcconnel02_cgx1
McConnell, M.ย L., Zdziarski, A.ย A., Bennett, K., etย al. 2002, ,
572, 984, 10.1086/340436
[Miller-Jones etย al.(2021)Miller-Jones, Bahramian, Orosz, Mandel,
Gou, Maccarone, Neijssel, Zhao, Ziรณลkowski, Reid, Uttley, Zheng, Byun,
Dodson, Grinberg, Jung, Kim, Marcote, Markoff, Rioja, Rushton, Russell,
Sivakoff, Tetarenko, Tudose, & Wilms]miller21
Miller-Jones, J. C.ย A., Bahramian, A., Orosz, J.ย A., etย al. 2021, Science,
371, 1046, 10.1126/science.abb3363
[Poutanen & Svensson(1996)]compPS1996ApJ...470..249P
Poutanen, J., & Svensson, R. 1996, , 470, 249, 10.1086/177865
[Rahoui etย al.(2011)Rahoui, Lee, Heinz, Hines,
Pottschmidt, Wilms, & Grinberg]rahoui11_cgx1
Rahoui, F., Lee, J.ย C., Heinz, S., etย al. 2011, , 736, 63,
10.1088/0004-637X/736/1/63
[Russell & Shahbaz(2014)]russell14_cgx1
Russell, D.ย M., & Shahbaz, T. 2014, , 438, 2083,
10.1093/mnras/stt2330
[Stirling etย al.(2001)Stirling, Spencer, de la Force,
Garrett, Fender, & Ogley]stirling01_cgx1
Stirling, A.ย M., Spencer, R.ย E., de la Force, C.ย J., etย al. 2001,
, 327, 1273, 10.1046/j.1365-8711.2001.04821.x
[Sunyaev & Truemper(1979)]sunyaev79_cgx1
Sunyaev, R.ย A., & Truemper, J. 1979, , 279, 506,
10.1038/279506a0
[Torii etย al.(2011)Torii, Yamada, Makishima, Sakurai,
Nakazawa, Noda, Done, Takahashi, &
Gandhi]torii2011PASJ...63S.771T
Torii, S., Yamada, S., Makishima, K., etย al. 2011, , 63, S771,
10.1093/pasj/63.sp3.S771
[Vadawale etย al.(2015)Vadawale, Chattopadhyay, Rao,
Bhattacharya, Bhalerao, Vagshette, Pawar, &
Sreekumar]vadawale15
Vadawale, S.ย V., Chattopadhyay, T., Rao, A.ย R., etย al. 2015, Astronomy
& Astrophysics, 578, 73, 10.1051/0004-6361/201525686
[Vadawale etย al.(2001)Vadawale, Rao, &
Chakrabarti]vadawale01
Vadawale, S.ย V., Rao, A.ย R., & Chakrabarti, S.ย K. 2001, Astronomy &
Astrophysics, 372, 793, 10.1051/0004-6361:20010574
[Vadawale etย al.(2003)Vadawale, Rao, Naik, Yadav,
Ishwara-Chandra, Pramesh Rao, & Pooley]vadawale03
Vadawale, S.ย V., Rao, A.ย R., Naik, S., etย al. 2003, Astrophysical
Journal, 597, 1023, 10.1086/378672
[Vadawale etย al.(2018)Vadawale, Chattopadhyay, Mithun,
Rao, Bhattacharya, Vibhute, Bhalerao, Dewangan, Misra, Paul,
Basu, Joshi, Sreekumar, Samuel, Priya, Vinod, &
Seetha]vadawale17
Vadawale, S.ย V., Chattopadhyay, T., Mithun, N.ย P.ย S., etย al. 2018,
Nature Astronomy, 2, 50, 10.1038/s41550-017-0293-z
[Wilms etย al.(2007)Wilms, Pottschmidt, Pooley, Markoff, Nowak,
Kreykenbohm, & Rothschild]Wilms07_cgx1
Wilms, J., Pottschmidt, K., Pooley, G.ย G., etย al. 2007, The Astrophysical
Journal, 663, L97, 10.1086/520508
[Zanin etย al.(2016)Zanin, Fernรกndez-Barral, de Oรฑa
Wilhelmi, Aharonian, Blanch, Bosch-Ramon, & Galindo]zanin16
Zanin, R., Fernรกndez-Barral, A., de Oรฑa Wilhelmi, E., etย al.
2016, , 596, A55, 10.1051/0004-6361/201628917
[Zdziarski etย al.(2017)Zdziarski, Malyshev, Chernyakova, &
Pooley]zdz2017MNRAS.471.3657Z
Zdziarski, A.ย A., Malyshev, D., Chernyakova, M., & Pooley, G.ย G. 2017,
, 471, 3657, 10.1093/mnras/stx1846
[Zdziarski etย al.(2014)Zdziarski, Pjanka, Sikora, &
Stawarz]zdziarski14
Zdziarski, A.ย A., Pjanka, P., Sikora, M., & Stawarz, ล. 2014,
MNRAS, 442, 3243, 10.1093/mnras/stu1009
|
http://arxiv.org/abs/2306.06320v4
|
20230609235549
|
Validation of semi-analytical, semi-empirical covariance matrices for two-point correlation function for Early DESI data
|
[
"Michael Rashkovetskyi",
"Daniel J. Eisenstein",
"Jessica Nicole Aguilar",
"David Brooks",
"Todd Claybaugh",
"Shaun Cole",
"Kyle Dawson",
"Axel de la Macorra",
"Peter Doel",
"Kevin Fanning",
"Andreu Font-Ribera",
"Jaime E. Forero-Romero",
"Satya Gontcho A Gontcho",
"ChangHoon Hahn",
"Klaus Honscheid",
"Robert Kehoe",
"Theodore Kisner",
"Martin Landriau",
"Michael Levi",
"Marc Manera",
"Ramon Miquel",
"Jeongin Moon",
"Seshadri Nadathur",
"Jundan Nie",
"Claire Poppett",
"Ashley J. Ross",
"Graziano Rossi",
"Eusebio Sanchez",
"Christoph Saulder",
"Michael Schubnell",
"Hee-Jong Seo",
"Gregory Tarle",
"David Valcin",
"Benjamin Alan Weaver",
"Cheng Zhao",
"Zhimin Zhou",
"Hu Zou"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"math.ST",
"physics.data-an",
"stat.TH"
] |
Validation of semi-analytical, semi-empirical covariance matrices for two-point correlation function for Early DESI data
July 31, 2023
=======================================================================
We present an extended validation of semi-analytical, semi-empirical covariance matrices for the two-point correlation function (2PCF) on simulated catalogs representative of Luminous Red Galaxies (LRG) data collected during the initial two months of operations of the Stage-IV ground-based Dark Energy Spectroscopic Instrument (DESI).
We run the pipeline on multiple effective Zel'dovich (EZ) mock galaxy catalogs with the corresponding cuts applied and compare the results with the mock sample covariance to assess the accuracy and its fluctuations.
We propose an extension of the previously developed formalism for catalogs processed with standard reconstruction algorithms.
We consider methods for comparing covariance matrices in detail, highlighting their interpretation and statistical properties caused by sample variance, in particular, nontrivial expectation values of certain metrics even when the external covariance estimate is perfect.
With improved mocks and validation techniques, we confirm a good agreement between our predictions and sample covariance.
This allows one to generate covariance matrices for comparable datasets without the need to create numerous mock galaxy catalogs with matching clustering, only requiring 2PCF measurements from the data itself.
The code used in this paper is publicly available at <https://github.com/oliverphilcox/RascalC>.
large-scale structure of Universe โ cosmology: theory โ galaxies: statistics โ surveys โ software: data analysis โ methods: statistical
§ INTRODUCTION
Measurements of the large-scale structure of the Universe are one of the pillars of modern cosmology.
The two-point correlation function (2PCF) of galaxies is a particularly important statistical quantity for the large-scale structure, describing the excess probability of finding a galaxy at a given separation from another galaxy, compared to a random distribution.
Its measurements have a notable feature at the scale of baryon acoustic oscillations (BAO, first detected by <cit.>).
As the corresponding comoving scale is a standard ruler with length set by the sound horizon during recombination (more precisely, the drag epoch), these measurements particularly constrain the expansion history of the Universe at redshifts between then and now (z ∼ 1), providing a valuable test for cosmological models.
A more detailed overview of the methodology is provided in <cit.>.
We have an exciting opportunity to analyze the data from the Dark Energy Spectroscopic Instrument (DESI, <cit.>), a highly promising 5-year Stage-IV BAO experiment for large-scale spectroscopic surveys.
For example, the sample of Luminous Red Galaxies (LRG, <cit.>) observed during the first two months of DESI main survey operations () yields a BAO scale measurement with 1.7% precision <cit.>, which is already comparable to the aggregate precision of 0.77% of preceding leading surveys, BOSS and eBOSS <cit.>.
For rigorous interpretation of data, the likelihood is crucial.
Fortunately, the distribution of measured clustering statistics is well described by a multivariate Gaussian, which is fully described by mean and covariance matrix.
Unfortunately, the latter poses a serious challenge.
A fully analytical model for the covariance matrix is desirable because it can provide a fast and stable result.
However, it is very hard to construct in the case of galaxy clustering.
One of the reasons is that galaxies are high matter overdensities, evolving in a crucially nonlinear regime.
Another is the complicated effects of survey geometry and non-uniform selection.
There have been recent promising developments on covariance matrices for power spectra using perturbation theory <cit.>.
However, power spectra results are difficult to apply to the correlation functions, because the Fourier transform is not local.
The standard approach is using a sample covariance estimated from mock (simulated) galaxy catalogs.
This solution is far from ideal.
Such catalogs need to capture key aspects of clustering and be representative of the data, or assumed theoretical model.
Detailed simulations are computationally expensive, while a precise covariance estimate requires a large number of samples, increasingly so as more quantities are measured.
This forces a hard compromise between quality and quantity, keeping the total computation time very long.
Mock-based covariance production is therefore rather inflexible, as generating and processing numerous simulations for an updated dataset or an alternative model requires a huge effort.
Another group of methods is called internal for using only the data itself.
The lack of dependence on mocks and model assumptions makes them attractive.
These are represented by re-sampling techniques like jackknife and bootstrap, which involve splitting the data into parts.
However, it may be hard to ensure that the parts are principally equivalent (so the differences all stem from random fluctuations) and separate the independent contributions to different estimates used for covariance.
<cit.> attempt to mitigate the latter issue for jackknives by modifying the pair weighting for jackknife covariance estimates.
However, <cit.> find that the results are still prone to a density-dependent bias.
Each of the three approaches has its flaws and combining advantages from different ones seems very promising.
Thus we choose to focus on a method that started nearly analytical and employed elements of an internal approach.
<cit.> demonstrated that the covariance matrix for 2PCF in arbitrary survey geometry can be expressed through integrals or sums in configuration space (this result is analogous to <cit.>), which can be computed efficiently with importance sampling techniques.
There are terms containing higher-point correlations (3-point and connected 4-point functions), which are highly challenging to model or measure with precision.
Thus instead a procedure of leaving only Gaussian correlation and rescaling of shot noise amplitude was proposed and found to achieve a good agreement with mock-based covariances.
This rescaling parameter can be calibrated on a small suite of mocks, or on a jackknife covariance intrinsic to the data analyzed <cit.>.
We find the combination of analytical methods with jackknife more promising than with mocks, because, after a validation, such a procedure does not require the construction of any new mocks to match updated data or an alternative model with different assumptions, and does not require any more than jackknife covariance computation, at the same time offering higher smoothness, stability, and invertibility.
<cit.> introduced the [<https://github.com/oliverphilcox/RascalC> (Oliver Philcox, Daniel Eisenstein, Ross O'Connell, Alexander Wiegand, Misha Rashkovetskyi, Yuting Wang, Ryuichiro Hada, Uendert Andrade)] code with a new algorithm, making it easier to account for survey geometry via a catalog of random points, boosting the efficiency several times and extended the formalism to covariances for multiple tracers (galaxy types).
<cit.> developed the estimators for Legendre-binned 2PCF and isotropic three-point correlation function.
We have contributed covariances for the BAO analysis of data <cit.>.
This work accompanies it, focusing on the validation of the approach in realistic circumstances.
We limit ourselves to analogs of DESI LRG sample <cit.> due to the availability of a large suite of mocks with corresponding cuts, providing a good sample covariance matrix for reference.
Similarly to <cit.>, we process a single mock catalog in essentially the same manner as data and compare the resulting covariance with the sample covariance of clustering measurements in all available mocks, which gives a fair proxy of the pipeline performance on data and is also robust to the mismatch between data and mock clustering.
We repeat the procedure multiple times taking a different catalog each time to assess the accuracy of the method, its stability, and fluctuations.
In addition, we pay extra attention to the formation of a covariance matrix comparison toolkit.
We focus on the meaning of the numbers used and derive reference values for the ideal case when the semi-analytical prediction matches the true underlying covariance.
Due to sample variance, these expectation values can be nontrivial, and understanding the noise in the comparison measures is crucially important as well.
We choose a smaller number of observables for lower noise and clearer interpretation, and further project the covariances into the lower-dimensional and more physically meaningful space of model parameters.
We also note the prospects of standard reconstruction techniques that aim to reverse the large-scale displacements during the times after the drag epoch.
Such subsequent evolution leads to broadening and contamination of the BAO peak, thus undoing it sharpens the feature <cit.>.
The formalism is applicable to reconstructed 2PCF covariance as well with minor adjustments.
This paper is organized in the following manner.
We review previous 2PCF estimators and covariances, discuss a modification of random counts computation and a formal extension to reconstructed data in Sec.ย <ref>.
In Sec.ย <ref> we discuss the problem of covariance matrix comparison and present our selection of methods,
before applying them to validation with DESI LRG mocks in Sec.ย <ref>.
We conclude in Sec.ย <ref> by reviewing current accomplishments and future prospects.
Appendixย <ref> provides more complete details on the covariance matrix estimators.
Appendixย <ref> provides an overview and derivations of useful properties of covariance matrix comparison metrics.
§ METHODS OF COVARIANCE MATRIX ESTIMATION
We start by recapitulating the 2PCF estimators and covariance matrix formalism from <cit.>, <cit.>, and <cit.>, with a revised notation similar to <cit.>.
In the following two subsections, we discuss a slight modification for optimized disjoint random count computation and an extension for reconstructed data.
§.§ Overview of previous work
In a galaxy survey, we may define the 2PCF of tracers X and Y through the ratio of pair counts:
ξ̂^XY(r,μ) = N^X N^Y(r,μ) / R^X R^Y(r,μ)
<cit.>, where μ = cos θ is used instead of the angle θ (μ can be restricted to 0 ≤ μ ≤ 1 by symmetry) and N^X = D^X - R^X.
D^X and R^X are (weighted) galaxies and random particles (tracing the expected mean density) of kind X respectively.
In radial bin a and angular bin c, this estimate transforms to
ξ̂^XY_a^c = N^X N^Y_a^c / R^X R^Y_a^c
with
N^X N^Y_a^c = ∑_{i≠j} n^X_i n^Y_j w^X_i w^Y_j Θ^a(r_ij) Θ^c(μ_ij) δ^X_i δ^Y_j
R^X R^Y_a^c = ∑_{i≠j} n^X_i n^Y_j w^X_i w^Y_j Θ^a(r_ij) Θ^c(μ_ij),
where we have assigned a cubic grid to the survey such that each cell contains no more than one galaxy, n^X_i is the expected mean number density of tracer X in the cell i, w_i is the expected mean weight, δ_i is the fractional galaxy overdensity, μ_ij relates to the angle between the line of sight and the separation vector r⃗_ij = r⃗_i - r⃗_j (r_ij being its absolute value), and Θ are binning functions (unity if the argument fits into the bin and zero otherwise).
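For concreteness, a minimal sketch of evaluating this estimator from pre-computed, consistently normalised pair counts is given below (the expansion of NN/RR with N = D - R into its pair-count combination); the array shapes and numbers are illustrative and this is not the pair-counting code used in the paper.

```python
import numpy as np

def binned_xi(dd, dr, rd, rr):
    """xi_hat[a, c] = NN/RR with N = D - R, i.e. (DD - DR - RD + RR) / RR.

    dd, dr, rd, rr: consistently normalised (weighted) pair counts with shape
    (n_radial_bins, n_angular_bins); for an auto-correlation rd equals dr."""
    return (dd - dr - rd + rr) / rr

# Toy example with 2 radial bins and 1 angular bin (hypothetical counts):
dd = np.array([[1.30], [1.05]])
dr = np.array([[1.00], [1.00]])
rr = np.array([[1.00], [1.00]])
print(binned_xi(dd, dr, dr, rr))   # approximately [[0.30], [0.05]]
```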
Given the binned 2PCF estimator (Eq.ย (<ref>)), the covariance matrix can be computed by definition:
Cov(ξ̂^XY_a^c, ξ̂^ZW_b^d) = ⟨ ξ̂^XY_a^c ξ̂^ZW_b^d ⟩ - ⟨ ξ̂^XY_a^c ⟩ ⟨ ξ̂^ZW_b^d ⟩,
where ⟨·⟩ denotes the ensemble average over realizations of the overdensity δ.
According to Eq.ย (<ref>), this affects the NN counts but not RR.
Thus the right-hand side of Eq.ย (<ref>) has the product of RR counts as a constant denominator and a sum over sets of 4 cells (from the product of NN counts) containing an ensemble average of the product of 4 ฮด values, where some of the cells can coincide with each other.
These ensemble averages are by definition 4-point correlation functions, but same-cell overdensities handled naively give zero separation and divergent values.
To overcome this issue, <cit.> further expanded Eq.ย (<ref>) into sums over distinct 2-, 3- and 4-cell configurations.
Additionally, squares of overdensity in one cell (products of overdensities in coinciding cells) were replaced by a shot-noise approximation:
δ^X_i^2 → 1/n^X_i (1 + δ^X_i).
As a result, only products of 4, 3, or 2 overdensities in distinct cells remained.
After ensemble averaging, these give 4-, 3- and 2-point correlation functions at nonzero separations, respectively.
Lastly, the disconnected (Gaussian) part of the 4-point function can be separated from the connected (non-Gaussian) one according to Isserlis' (Wick's) theorem <cit.>.
The full resulting expressions are provided in Appendixย <ref>, Eq.ย (<ref>).
Due to high noise in higher-point correlation function measurements and difficulties in their theoretical modeling, an alternative approach of mimicking non-Gaussianity has been established.
All the higher-order correlation functions are set to zero (save for the disconnected 4-point, which reduces to products of 2-point functions), but the shot-noise approximation is modified with a factor α_SN^X > 1, which can have a separate value for each of the different samples of galaxies:
δ^X_i^2 → α_SN^X/n^X_i (1 + δ^X_i).
This increases the correlations on the smallest scales, which is similar to where non-Gaussian effects are the strongest.
The sums can be transformed to continuous form by changing sums to integrals over positions and replacing cell quantities with continuous functions in 3-dimensional space.
However, it is convenient to leave them discrete, which allows us to estimate them using importance sampling directly from the random catalog <cit.>, without the need to write functional forms for survey number density, weights, and so on.
Two-point function values for pairs of points are interpolated from a grid/table to sampled pair separation r_ij,ฮผ_ij (the latter can be computed with respect to the midpoint radius-vector of the pair, or a fixed axis) with the bicubic method, which is in practice based on radially and angularly binned 2PCF estimates.
A special iterative correlation function rescaling procedure is used to modify the values of the correlation function on the interpolation grid such that the bin-averaged values of the interpolation result resemble the binned 2PCF estimates more closely.
Originally <cit.> fit the shot-noise rescaling to a sample covariance obtained from a smaller set of mocks, with the idea that lower precision was sufficient for obtaining just one parameter as opposed to estimating the whole matrix.
<cit.> proposed that a jackknife covariance from the data itself can be used instead, eliminating the dependence on mocks completely.
Noting that jackknife has issues in the cosmological application, they developed a separate estimator for this covariance taking into account correlations between different estimates.
We follow a modified formalism from <cit.> called unrestricted jackknife.
According to it, the jackknife correlation function estimate ฮพ_A is the cross-correlation function between jackknife region number A and the whole survey.
In other words, an additional weight of particle i (denoted by q_i^A) is equal to one if it belongs to the region A and zero if not.
A pair of particles is additionally weighted by the mean of their weights: one if both belong to the region A, one half if only one does and zero if both are out of it.
Conveniently, the sum of these weights for any selected pair over all jackknife regions is unity.
As a consequence, the mean of jackknife 2PCF estimates (weighted by RR counts) is equal to the full 2PCF estimate.
Then an estimator for jackknife covariance is worked out separately (Eq.ย (<ref>)), with a similar shot-noise rescaling procedure.
In principle, shot-noise rescaling (Eq.ย (<ref>)) can be different for each tracer, and it can be obtained by fitting the prediction for the jackknife covariance of its auto-correlation function to the data-based jackknife estimate.
The resulting ฮฑ^X_ SN value(s) is used together with the full covariance estimator (Eq.ย (<ref>)) for the final result.
Let us reiterate the key approximation: non-Gaussianity can be mimicked by rescaling shot noise while dropping the terms with higher-order correlation functions.
It works because the primary effect of non-Gaussian contributions is an additional correlation at small distances, typically smaller than the bin width of 2PCF used in actual fits and requiring a covariance.
Enhancing the shot noise results in increased correlation on infinitely small scales.
This should remain a good estimate as long as the correlation functions' contribution to covariances on scales of interest is dominated by their squeezed limits.
Whether this is the case is not clear generally, but <cit.> reported good agreement with large sets of mock catalogs achieved with this method, and <cit.> demonstrated the applicability of the approach to BAO analysis.
It is important to specify the means of fitting covariance matrices.
Following <cit.>, we choose to optimize the Kullback-Leibler (KL) divergence between the jackknife precision Ψ̂_J(α_SN) and the data-based jackknife covariance estimate (given by Eq. (<ref>)), D_KL(Ψ̂_J(α_SN), C_J), where
D_KL(Ψ_1, C_2) = 1/2 [ tr(Ψ_1 C_2) - N_bins - ln det(Ψ_1 C_2) ].
In this equation, N_ bins is the dimension of covariance matrices (number of correlation function bins).
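A schematic sketch of this one-parameter fit is shown below. It assumes that the model jackknife covariance has been decomposed into precomputed 4-, 3-, and 2-point terms scaling as 1, α_SN, and α_SN^2 respectively (as discussed above), and it omits the precision-matrix bias correction introduced next; the function names and the search bounds are ours, not the interface of the public code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_divergence(precision, cov):
    """D_KL(Psi_1, C_2) = 0.5 * [tr(Psi_1 C_2) - N_bins - ln det(Psi_1 C_2)]."""
    n_bins = cov.shape[0]
    product = precision @ cov
    _, logdet = np.linalg.slogdet(product)
    return 0.5 * (np.trace(product) - n_bins - logdet)

def fit_shot_noise_rescaling(c4, c3, c2, c_jack, bounds=(0.5, 2.0)):
    """Fit alpha_SN by minimising D_KL between the model jackknife precision
    [c4 + alpha*c3 + alpha^2*c2]^(-1) and the data-based jackknife covariance."""
    def objective(alpha):
        model_cov = c4 + alpha * c3 + alpha**2 * c2
        return kl_divergence(np.linalg.inv(model_cov), c_jack)
    result = minimize_scalar(objective, bounds=bounds, method="bounded")
    return result.x
```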
Inversion of the covariance has a certain bias.
Since it is not expected to obey Wishart statistics like the (mock) sample covariance, the Hartlap factor <cit.> is not relevant.
A special second-order bias correction has been derived in <cit.>:
Ψ̂ = (𝟙 - D̂) Ĉ^-1
D̂ = (N_subsamples - 1)/N_subsamples [ -𝟙 + 1/N_subsamples ∑_{i=1}^{N_subsamples} Ĉ_[i]^-1 Ĉ_i ],
which uses the partial covariance estimates Ĉ_i from N_subsamples distinct sets of configurations resulting from importance sampling in the estimation of sums (Eq. (<ref>) or (<ref>)) and the mean of all the partial estimates but the i'th, Ĉ_[i].
Thus the correction is applicable for both jackknife and full covariance.
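A direct transcription of this correction might look as follows; it assumes the partial covariance estimates are available as an array and that Ĉ_[i] is the mean of all partial estimates except the i-th, per the description above.

```python
import numpy as np

def bias_corrected_precision(partial_covs):
    """Psi_hat = (1 - D_hat) C_hat^(-1) from partial covariance estimates C_i,
    one per disjoint subsample of the importance-sampled configurations."""
    partial_covs = np.asarray(partial_covs)        # shape (n_sub, n_bins, n_bins)
    n_sub, n_bins, _ = partial_covs.shape
    c_full = partial_covs.mean(axis=0)             # full covariance estimate
    acc = np.zeros((n_bins, n_bins))
    for i in range(n_sub):
        # Leave-one-out mean C_[i]: average of all partial estimates except the i-th
        c_loo = (n_sub * c_full - partial_covs[i]) / (n_sub - 1)
        acc += np.linalg.inv(c_loo) @ partial_covs[i]
    d_hat = (n_sub - 1) / n_sub * (-np.eye(n_bins) + acc / n_sub)
    return (np.eye(n_bins) - d_hat) @ np.linalg.inv(c_full)
```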
§.§ Split random-random computation
<cit.> showed that splitting the random catalog into a number of sub-catalogs of the same size as the data catalog when calculating randomโrandom pairs and excluding pairs across different sub-catalogs provides the optimal error at a fixed computational cost.
The splitting can be used in .
It gives little to no speed-up and has little impact on the results because the importance sampling is far from complete.
However, it can be useful for multi-node parallelization.
This approach has been used for the data-based computation in <cit.>.
A robust implementation of split random-random pair calculations in would require considering only quadruples of random points where members of each pair are from the same sub-catalog, but the pairs can be from different catalogs.
However, this has been found to have little to no impact on the results, probably due to the fact that importance sampling covers only a small fraction of all possible configurations.
At the same time, such implementation makes the code less efficient and makes it impossible to split the computation of different catalogs between nodes.
§.§ Reconstructed two-point function covariance
After standard reconstruction, a common approach is to replace N=(D-R) by N=(D-S) (S being the random point with position shifted in the same manner as data) in the Landy-Szalay estimator (Eq.ย (<ref>)), leaving RR in the denominator, so that it is
ξ̂^XY = (D^X D^Y - D^X S^Y - S^X D^Y + S^X S^Y) / (R^X R^Y),
instead of
ξ̂^XY = (D^X D^Y - D^X R^Y - R^X D^Y + R^X R^Y) / (R^X R^Y).
This means shifted randoms are to be used in sums or integrals representing NN.
These eventually form the sums (or integrals) for C terms.
Thus, strictly speaking, the procedure for reconstructed 2PCF should be:
* use shifted randoms for sampling, corresponding to the numerator of (<ref>);
* provide a differently normalized 2PCF as input, namely
ξ̂^XY_in = (D^X D^Y - D^X S^Y - S^X D^Y + S^X S^Y) / (S^X S^Y),
since non-shifted randoms do not appear in the sampling procedure;
* use non-shifted random counts for denominator in Eq.ย (<ref>), or correction function (Eq.ย (<ref>)) in Legendre case, corresponding to the denominator of (<ref>).
Shifted randoms are individual for each mock catalog.
Therefore they can not be defined clearly for mock-averaged computations.
In those cases, we continue to use the non-shifted randoms everywhere for consistency.
§ METHODS OF COMPARISON OF COVARIANCE MATRICES
Since a covariance matrix is a high-dimensional object, it can be hard to explore and interpret.
Moreover, we run the pipeline multiple times independently and aim to study all the covariance matrix products to assess their stability and fluctuations.
Thus compact and numerical comparison measures are instructive.
§.§ Interpretable measures of similarity for covariance matrices
The first characteristic we consider is the Kullback-Leibler (KL) divergence, a measure of distance between distributions used to fit covariances in (Sectionย <ref>, Eq.ย (<ref>)).
It is generally defined as an expectation value of the logarithm of the ratio of the two probability distribution functions according to the first distribution:
D_KL(P_1 || P_2) = ∫ ln[ P_1(x) / P_2(x) ] P_1(x) dx
By this expression, KL divergence can be seen as an average difference in log-likelihood.
For two Gaussian distributions with covariance matrices C_i and precision matrices Ψ_i = C_i^-1 describing N_bins observables (correlation function bins in our setup), it can be found as
D_KL(Ψ_1, C_2) = 1/2 [ tr(Ψ_1 C_2) - N_bins - ln det(Ψ_1 C_2) ].
This is the expression we will use.
<cit.> show that the KL divergence is related to the log-likelihood if the covariance matrix is estimated from a sample with multivariate normal distribution, which is a very good approximation for correlation function bins, and additionally assuming that the precision matrix is the inverse of the true covariance matrix characterizing the multivariate normal distribution of measured quantities.
This is very appropriate for testing the hypothesis that the precision matrix is a precise unbiased estimate.
The next metric assesses how close the first precision matrix is to the inverse of the second covariance matrix, and is at the same time a "directional" root-mean-square relative difference in χ^2 given by the two covariance matrices (explained in more detail in Appendix <ref>):
R_inv(Ψ_1, C_2) = 1/√(N_bins) ‖ C_2^1/2 Ψ_1 C_2^1/2 - 𝟙 ‖_F
= √( tr[ (Ψ_1 C_2 - 𝟙)^2 ] / N_bins ).
This measure can also be seen as the average relative difference in the errorbars.
Moreover, if the covariance matrix is estimated from a sample with multivariate normal distribution and the precision matrix is assumed to be true, R_ inv^2 is proportional to the ฯ^2 computed using covariance of independent covariance matrix elements (Appendixย <ref>).
Thus, in this case, it can serve as an approximation of log-likelihood for optimization.
The last metric is akin to the mean reduced χ^2 of samples corresponding to one covariance matrix with respect to the other precision matrix:
χ^2_red(Ψ_1, C_2) = tr(Ψ_1 C_2) / N_bins.
It can be seen as the mean ratio of χ^2 given by the two covariance/precision matrices.
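For reference, all three metrics can be evaluated with a few lines of linear algebra; the sketch below assumes Ψ_1 and C_2 are given as dense arrays and uses the trace/determinant forms quoted above.

```python
import numpy as np

def compare_covariances(precision_1, cov_2):
    """Return (D_KL, R_inv, chi2_red) for precision Psi_1 against covariance C_2."""
    n_bins = cov_2.shape[0]
    product = precision_1 @ cov_2
    _, logdet = np.linalg.slogdet(product)
    d_kl = 0.5 * (np.trace(product) - n_bins - logdet)
    residual = product - np.eye(n_bins)
    r_inv = np.sqrt(np.trace(residual @ residual) / n_bins)
    chi2_red = np.trace(product) / n_bins
    return d_kl, r_inv, chi2_red
```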
All three metrics are not symmetric, meaning that values for ฮจ_1, C_2 and ฮจ_2, C_1 may be different, so in principle, it might be informative to consider differences both ways.
On the other hand, the sample covariance is less robust than the result and its inversion can be less stable.
Moreover, computing each metric twice makes the results more numerous and less clear.
Finally, the KL divergence (Eq.ย <ref>) is expected to lose its log-likelihood sense if computed between the sample precision and model covariance, since the latter one does not necessarily follow Wishart distribution.
Therefore we decided to limit ourselves to precision matrices and sample covariance matrices.
For a better understanding of the metrics, let us consider the eigenvalues of Ψ_1 C_2 (alternatively, one can use Ψ_1^1/2 C_2 Ψ_1^1/2, which is symmetric) and denote them as λ_a.
We would like Ψ_1 ≈ C_2^-1, thus all λ_a ≈ 1.
The metrics then can be expressed as
D_KL(Ψ_1, C_2) = 1/2 ∑_{a=1}^{N_bins} [ λ_a - 1 - ln λ_a ] ≈ 1/4 ∑_{a=1}^{N_bins} (λ_a - 1)^2,
R_inv(Ψ_1, C_2) = √( 1/N_bins ∑_{a=1}^{N_bins} (λ_a - 1)^2 ),
χ^2_red(Ψ_1, C_2) = 1/N_bins ∑_{a=1}^{N_bins} λ_a.
Thus D_KL and R_inv accumulate any deviation of λ_a from 1, although they cannot indicate the direction of such differences.
Note that the quadratic expression for D_KL is approximate so it is not generally degenerate with R_inv, although as the covariance matrices approach each other these two measures become more redundant:
D_KL(Ψ_1, C_2) ≈ (N_bins/4) R_inv^2(Ψ_1, C_2).
χ^2_red can show which covariance matrix is "larger" on average, while deviations in opposite directions may cancel each other.
Next, we would like to understand what to expect from these metrics.
For this purpose, we consider the case when the precision matrix is predicted perfectly (Ψ_0), ideally matching the true underlying covariance (C_0 = Ψ_0^-1), and focus on the noise properties of the sample covariance matrix C_S obtained via the standard unbiased estimator for the case when the true mean is not known:
C_S,ab = 1/(n_S-1) ∑_{i=1}^{n_S} (ξ_a,i - ξ̄_a)(ξ_b,i - ξ̄_b),
where a, b denote bin numbers, i indexes the sample number, and ξ̄_a is the estimate of the mean:
ξ̄_a ≡ 1/n_S ∑_{i=1}^{n_S} ξ_a,i.
Since the clustering measurements are described well by a multivariate normal distribution, their sample covariance matrix follows the Wishart statistics.
This provides a reference of how the metrics behave when the perfect precision matrix is compared to a covariance matrix estimated from n_S samples with N_ bins bins (or any other Gaussian observables).
Full derivations are presented in Appendixย <ref>, here we will only provide the results for mean/expectation values and standard deviations:
⟨ D_KL(Ψ_0, C_S) ⟩ ≈ N_bins (N_bins+1) / [4 (n_S-1)],
σ[ D_KL(Ψ_0, C_S) ] ≈ 1/2 √( N_bins [(N_bins + 1)(n_S + 2 N_bins + 2) + 2] / (n_S-1)^3 );
⟨ R_inv(Ψ_0, C_S) ⟩ ≈ √( (N_bins+1) / (n_S-1) ),
σ[ R_inv(Ψ_0, C_S) ] ≈ 1/(n_S-1) √( [(N_bins + 1)(n_S + 2 N_bins + 2) + 2] / [N_bins (N_bins+1)] ).
Naively, one could expect D_KL and R_inv to become arbitrarily small as Ψ_1 → Ψ_0.
However, in reality, they can have large expectation values, especially as the number of bins increases.
χ^2_red, however, would behave like the reduced χ^2 with N_bins × (n_S-1) degrees of freedom in this case (see Appendix <ref>):
⟨ χ^2_red(Ψ_0, C_S) ⟩ = 1,
σ[ χ^2_red(Ψ_0, C_S) ] = √( 2 / [N_bins (n_S-1)] ).
It might seem like D_KL and R_inv could be unbiased by multiplying one of the matrices by a factor similar to the Hartlap factor <cit.>, but the lack of bias in the χ^2_red expectation value suggests that this is not true.
It is notable that a deviation of χ^2_red from 1 would contribute to R_inv (see expressions (<ref>) and (<ref>)); their consequence is also that
| χ^2_red(Ψ_1, C_2) - 1 | ≤ R_inv(Ψ_1, C_2).
However, the direction of such deviation will not be clear in R_ inv, and nontrivial expectation value can make it harder to interpret.
This keeps χ^2_red useful in many cases.
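These reference values are straightforward to check numerically. The sketch below draws Gaussian samples from a known covariance, builds the sample covariance, and compares the average metrics against the approximate expectation formulas above; the matrix used and all sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_bins, n_samples, n_trials = 10, 200, 500

# Arbitrary well-conditioned "true" covariance and its inverse
a = rng.normal(size=(n_bins, n_bins))
cov_true = a @ a.T + n_bins * np.eye(n_bins)
prec_true = np.linalg.inv(cov_true)

d_kl, r_inv, chi2_red = [], [], []
for _ in range(n_trials):
    xi = rng.multivariate_normal(np.zeros(n_bins), cov_true, size=n_samples)
    cov_sample = np.cov(xi, rowvar=False)          # unbiased estimator, sample mean subtracted
    m = prec_true @ cov_sample
    _, logdet = np.linalg.slogdet(m)
    d_kl.append(0.5 * (np.trace(m) - n_bins - logdet))
    resid = m - np.eye(n_bins)
    r_inv.append(np.sqrt(np.trace(resid @ resid) / n_bins))
    chi2_red.append(np.trace(m) / n_bins)

print(np.mean(d_kl), n_bins * (n_bins + 1) / (4 * (n_samples - 1)))   # both ~0.14
print(np.mean(r_inv), np.sqrt((n_bins + 1) / (n_samples - 1)))        # both ~0.23
print(np.mean(chi2_red))                                              # ~1.0
```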
§.§ Internal convergence assessment
Internal consistency of covariance matrices in one run and the convergence of the Monte-Carlo integration procedure are also important to assess quantitatively.
We propose to employ the above-mentioned methods to accomplish this and provide valuable diagnostics which do not rely on a reference (e.g. sample) covariance and thus can be used in any run, including the pure data-based one.
However, we need to note that such a test can only quantify limited sources of uncertainty or error, leaving aside the factors like adequacy of the approximations in the formalism, the precision of the input clustering, and noise in the jackknife covariance estimated from the data.
The code provides multiple partial intermediate results corresponding to practically non-overlapping sets of quadruples, triples, and pairs of points.
These resulting covariance matrices can be split into two distinct sets of similar size, averaged within them, and compared using the three metrics.
In this case, however, the arguments for χ^2_red weaken: we can expect R_inv to become arbitrarily low as the number of Monte-Carlo samples increases, which would limit the reduced chi-squared via Eq. (<ref>), and it is not as interesting to understand which of the halves gives a "smaller" matrix.
Then D_ KL also becomes more redundant with R_ inv via Eq.ย (<ref>).
Therefore it is reasonable to only show R_ inv, which can be seen as an estimate of root-mean-square relative precision (considered over all directions in measurement space).
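A minimal version of this diagnostic, assuming the per-subsample covariance estimates are available as an array, could look like the following; the actual splitting in the paper was done in a few different ways and the metric averaged over both orderings.

```python
import numpy as np

def internal_rinv(partial_covs):
    """R_inv between the averages of two halves of the partial covariance
    estimates, as a convergence diagnostic for the Monte-Carlo integration."""
    partial_covs = np.asarray(partial_covs)
    half = partial_covs.shape[0] // 2
    c_a = partial_covs[:half].mean(axis=0)
    c_b = partial_covs[half:].mean(axis=0)
    n_bins = c_a.shape[0]
    residual = np.linalg.inv(c_a) @ c_b - np.eye(n_bins)
    return np.sqrt(np.trace(residual @ residual) / n_bins)
```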
§ APPLICATION TO DESI LRG MOCKS
In this section, we use the described methods on mocks to assess the performance and stability of the approach on the actual dataset.
We describe the setup first, then perform intrinsic validation described in Sectionย <ref>, look at the shot-noise rescaling values resulting from jackknife calibration used for the final covariance estimates, validate the results by comparison with the mock sample covariance in measurement/observable and parameter space and finally focus specifically on errorbars on BAO scale.
§.§ Mock catalogs and reconstruction method
We use the 999 effective Zel'dovich (EZ) mocks <cit.> with cuts corresponding to the DESI LRG sample <cit.> (described in more detail in <cit.>), which will be referred to as .
Sample covariance based on these does not provide a perfect reference, because both the number of mock catalogs and the level of detail in each simulation are limited; however, it is the best reference realistically available, since increasing one without degrading the other would require even more significant computational resources.
Comparing these is also robust to the mismatch between data and mock clustering.
The reconstruction method is also the same as in <cit.>: the iterative procedure <cit.> implemented in the IterativeFFTReconstruction
algorithm of the pyrecon package[<https://github.com/cosmodesi/pyrecon> (Arnaud de Mattia, Martin J. White, Julian E. Bautista, Pedro Rangel Caetano, Sesh Nadathur, Enrique Paillas, Grant Merz, Davide Bianchi)] with the RecIso convention.
Three iterations are used with a Gaussian smoothing kernel of width 15.
An approximate growth rate and the expected bias are assumed.
§.§ Setup
For this study, we have performed separate runs using 2PCF measured from single LRG catalogs.
This has been repeated 10 times for pre- and post-recon.
In the latter case, individual shifted random catalogs have been used for each mock following the procedure we described in Sectionย <ref>.
Pre-reconstruction galaxies and randoms were assigned unity weights, for post-reconstruction FKP weights <cit.> were used, given by
w_FKP = 1 / [1 + n(z) C P_0]
where n(z) is the weighted number density (per volume),
C is the mean completeness for the sample,
and P_0 is a fiducial power-spectrum amplitude.
For LRG, C=0.579 and P_0=10^4 ()^3 <cit.>.
We note that the weighting schemes are not exactly the same as for real data, but since weights are included explicitly in the covariance estimators we expect to work with any fixed choice applied consistently for 2PCF measurements and Monte-Carlo integration.
For the importance sampling input, 10 random catalogs were used in pre-recon computations and 20 in post-recon, like for the RR (SS) pair counts computation for the 2PCF estimates.
These randoms have been concatenated before being provided to the executable.
We assign 60 jackknife regions using a K-means subsampler based on data positions (but not weights), as in the data, compute the jackknife covariance matrix, and use it to calibrate the shot-noise rescaling.
We note that some validation has been performed in <cit.>: covariance matrix based on 2PCF averaged over all LRG with no shot-noise rescaling applied (and using unshifted randoms in the post-reconstruction case) has been compared to the sample covariance matrix in terms of ฯ^2 of BAO fits, best-fit values and standard deviations of BAO isotropic scale parameter ฮฑ, yielding a good agreement.
However, there are significant limitations to this approach:
* effect of noise in the input clustering is significantly smaller in 2PCF averaged over โ 1000 mocks than in the real data, which is close to a single mock catalog;
* possible differences in other parameters of the BAO model or more generic aspects of correlation function have not been assessed.
Single-mock runs address the first issue since each of them is a fair proxy of the data.
The use of covariance matrix comparison metrics from Sectionย <ref> expands on the second one.
We keep the result with mock-average 2PCF, labeled โAverage Gโ (Gaussian), to assess the importance of precision of input clustering.
We also consider a shot-noise-rescaled version of the run with mock-averaged clustering, labeled โAverage NGโ (non-Gaussian).
We note that calibration of an all-mocks run on a jackknife estimate can be ambiguous or require a repeated computation of all the pair counts with jackknives, which was not done before because jackknives are not necessary for the sample covariance.
Thus we choose to fit the full covariance matrix to the mock sample covariance by minimizing the KL divergence between them (analogously to the jackknife procedure described in Sectionย <ref> and Eq.ย (<ref>)).
Since only one parameter is varied in the fit, a perfect agreement is still not guaranteed.
On the other hand, this setup is clearly idealized and would be closer to the closest possible match to the mock covariance the method can provide.
It comprises another useful reference to compare to the data-like performance on single mocks.
We only consider 45 radial bins, spanning 4ย each from 20 to 200ย .
The covariances are produced with single angular bins, which is a simplifying assumption since treating monopole more precisely in Legendre mode with shot-noise rescaling would require 2 runs per dataset, as explained in Appendixย <ref>.
The 2PCF measurements for the mock sample covariance use a more precise monopole estimate provided by pycorr[<https://github.com/cosmodesi/pycorr> (Arnaud de Mattia, Lehman Garrison, Manodeep Sinha, Davide Bianchi, Svyatoslav Trusov, Enrique Paillas, Seshadri Nadathur, Craig Warner, James Lasker)].
We also project them into a BAO model parameter space using the derivatives near the best fit (Fisher forecast).
The model uses only a separation range from 48 to 148\,h^{-1}\,{\rm Mpc}.
ยง.ยง Internal convergence checks
We perform an intrinsic diagnostic procedure (as described in Sectionย <ref>) to ensure that integrals converged well in each run and exclude importance sampling random noise from significant error factors.
We found that pre-recon mock 6 and post-recon mock 5 showed significantly worse consistency than all the rest.
Therefore we have run them twice longer.
After that, all the results have reached a high and quite uniform level of internal consistency, as presented in Tableย <ref>.
We only show R_ inv, which are easier to interpret as the root-mean-square relative deviation between different partial estimates of the covariance matrix (considered over all directions in measurement space).
ฯ^2_ red are limited via Eq.ย (<ref>), and D_ KL values are quite close to estimates from Eq.ย (<ref>).
Due to high consistency in measurement space, we have not performed projection to parameter space here.
The splitting of Monte-Carlo subsamples has been done in a few different ways and the non-symmetric metric has been computed both ways (ฮจ_1C_2 and ฮจ_2C_1), but all the values were very close[In the case of disjoint running (Sec.ย <ref>) there can be a meaningful difference between splittings within or between different random sub-catalogs, due to fluctuations in pair counts between these. But here all the randoms have been concatenated together.] and thus have been averaged to one number for each metric.
These low (sub-percent) internal deviations give us confidence that the Monte-Carlo integration procedure in has converged well and it will not be a significant error source in further comparison.
After ascertaining this, we have not touched the covariance matrix products, to be fair: with a real survey and no mocks, the other validation procedures described in this paper are not available.
ยง.ยง Shot-noise rescaling values
Next, we look into the shot-noise rescaling values because the final covariance estimates (with approximate non-Gaussianity) are based on them.
The shot-noise rescaling values for single mocks are obtained by fitting the separate jackknife covariance prediction to the jackknife covariance estimate for each mock.
For the mock-average clustering, the full covariance was fit to the mock sample covariance instead, as discussed in Sectionย <ref>.
The shot-noise rescaling values are gathered in Tableย <ref>.
We note that all of them are greater than one (which corresponds to purely Gaussian covariance), in accordance with our expectation that the non-Gaussianity expands the errorbars.
Moreover, the mean shot-noise rescaling of the 10 single mocks is โ 4 standard deviations larger than 1 both before and after reconstruction.
The values obtained from jackknife and mock covariance are consistent.
After reconstruction, the shot-noise rescaling decreases for every mock.
The pre-recon mean is larger than the post-recon one by โ 1.8 standard deviations.
The scatter after reconstruction is also smaller than before.
These deviations can be caused by the random fluctuations in the input 2PCF estimates, noise in jackknife covariances, and differences in shifted randoms (for post-recon only).
The key conclusion is that we have obtained the shot-noise rescaling parameter for a data-like setup (single mock runs) with a percent-level precision.
This maps into a similar or smaller relative deviation in the rescaled covariance matrices, since the 2-point term has the strongest scaling, \propto\alpha_{\rm SN}^2, and the 4-point term remains the same (Eq. (<ref>)).
ยง.ยง Measurement-space validation
Now we proceed to comparison with the sample covariance matrices as reference, keeping in mind they are not devoid of noise so not all the comparison measures can be ideal.
We consider the higher-dimensional space of observables first, where the effects of sample variance are quite significant.
It consists of 45 bins of the 2PCF, spanning 20 to 200\,h^{-1}\,{\rm Mpc} linearly with a bin width of 4\,h^{-1}\,{\rm Mpc}.
Sample covariances for all original bins have been estimated using n_S=999 2PCF measurements from all the and the standard unbiased estimator (Eq.ย (<ref>)).
The procedures have been similar for pre- and post-recon.
The comparison measures between the precision matrices (estimated via Eq.ย <ref>) and the sample covariance matrices have been computed and are presented in Tableย <ref> for pre-reconstruction and Tableย <ref> for post-reconstruction.
First of all, there are fluctuations in the comparison measures involving the single-mock results, stemming from the input 2PCF estimates, jackknife covariances, and differences in shifted randoms (for post-recon only) โ the same causes as for scatter in ฮฑ_ SN discussed in Sectionย <ref>.
The individual pre-reconstruction covariances appear to agree with the mock sample covariance better than the post-reconstruction.
The covariance with mock-averaged clustering and no shot-noise rescaling, on the contrary, gives a closer agreement after reconstruction.
Before reconstruction, any individual shot-noise rescaled covariance shows better agreement than the Gaussian mock-averaged clustering run; after reconstruction, it is very often worse.
With mock-averaged clustering, the shot-noise rescaling is clearly beneficial for pre-reconstruction and less so for post-reconstruction.
This may be a hint that the shot-noise rescaling might not be doing as well after reconstruction as before.
Compared to the perfect case, RascalC typically performs worse (higher D_{\rm KL} and R_{\rm inv}, reduced chi-squared further from one), which we expect since the code (and the mocks) involve (different) approximations.
We note that on average the comparison metrics are within a couple of standard deviations of the expectation value for the true underlying covariance matrix.
On the other hand, it is the larger standard deviation in the RascalC results that allows this conclusion, and reducing the noise factors causing it (input 2PCF fluctuations, single jackknife covariance) may allow us to reach a closer agreement in future work.
ยง.ยง Parameter-space validation
In this section, we project the covariance into a lower-dimensional and more physically meaningful space of BAO model parameters.
Lower dimensionality makes the reference values for comparison metrics clearer and the results become easier to interpret.
In addition, the correlation function modes that are not physically possible or do not affect the parameter constraints are removed from consideration, which leaves only real and important โdirectionsโ for consideration.
We choose a commonly used BAO model[<https://github.com/cosmodesi/BAOfit_xs/> (Ashley J. Ross, Juan Mena Fernandez)] <cit.> with a scalable template \xi_0 and three nuisance polynomial terms:
\xi_{\rm mod}(r) = B\,\xi_0(\alpha_{\rm BAO} r) + A_0 + \frac{A_1}{r} + \frac{A_2}{r^2},
comprising N_ pars=5 parameters: B, A_0, A_1, A_2, ฮฑ_ BAO.
Instead of performing full fits, we use Fisher matrix formalism.
This can be seen as less precise than full fits on every mock or MCMC using a 2PCF likelihood.
On the other hand, the parameter distribution is not Gaussian when the model is not a linear function of parameters, which makes linear approximation within Fisher matrix formalism more suitable for the comparison methods we have discussed.
We estimate the parameter covariance matrix as the inverse of the Fisher matrix:
C_S^{\rm par} = \frac{n_S-1}{n_S-N'_{\rm bins}+N_{\rm pars}-1} \left[ M \left(C'^{\rm meas}_S\right)^{-1} M^T \right]^{-1}
where the measurement-space mock sample covariance matrix C'^{\rm meas}_S is cut to the N'_{\rm bins}=25 bins spanning separations from 48 to 148\,h^{-1}\,{\rm Mpc} used in BAO fits, and M is the matrix of derivatives of the binned 2PCF vector \xi with respect to the parameters p:
M_{ca} \equiv \partial\xi_a/\partial p_c.
The derivatives have been taken at the best-fit parameters for the mock-averaged clustering measurements (separate before and after reconstruction).
Note that Eq.ย (<ref>) is scaled by a correction factor according to Eq.ย (B6) in <cit.> to account for biases caused by both matrix inversions.
This provides an unbiased (although not noiseless) estimate of the true underlying covariance in parameter space as validated in Appendixย <ref>.
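The projection in the equations above could be implemented along these lines; the function names are ours, and the correction factor is applied exactly as written above.

```python
import numpy as np

def project_sample_cov_to_params(cov_meas, M, n_samples):
    """Parameter covariance from a measurement-space sample covariance:
    C_par = f * (M C^-1 M^T)^-1, with
    f = (n_S - 1) / (n_S - N'_bins + N_pars - 1) correcting inversion biases."""
    n_bins = cov_meas.shape[0]
    n_par = M.shape[0]                       # M[c, a] = d xi_a / d p_c
    fisher = M @ np.linalg.inv(cov_meas) @ M.T
    factor = (n_samples - 1.0) / (n_samples - n_bins + n_par - 1.0)
    return factor * np.linalg.inv(fisher)

def project_precision_to_params(cov_meas_model, M):
    """Parameter-space precision from a model (e.g. RascalC-like) covariance."""
    return M @ np.linalg.inv(cov_meas_model) @ M.T
```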
A similar but simpler procedure was performed with the RascalC products:
\Psi^{\rm par}_R \approx M \left(C'^{\rm meas}_R\right)^{-1} M^T,
where C'^{\rm meas}_R was also cut to the 25 bins spanning separations from 48 to 148\,h^{-1}\,{\rm Mpc} used in BAO fits.
There is a bias correction matrix D for RascalC (Eq. (<ref>)), but for the results presented here the absolute values of its eigenvalues are \lesssim 10^{-3}, thus we have decided to neglect this correction factor.
The comparison measures have been computed between the projected matrices and are presented in Tableย <ref> for pre-recon and Tableย <ref> for post-recon.
Generally, the lower expectation values of D_{\rm KL} and R_{\rm inv} for perfect precision make these numbers easier to interpret for RascalC.
There is a less apparent difference between pre- and post-recon.
A notable exception is that all \chi^2_{\rm red} for rescaled RascalC (with mock-averaged and single-mock clusterings) pre-recon are significantly less than 1 (meaning RascalC "overestimates" the covariance there).
In other cases, mimicking non-Gaussianity gives a slight improvement for mock-averaged clustering, but higher noise in 2PCF and jackknife covariance in single-mock estimates often drives the agreement with mock sample covariance worse than in the mock-averaged Gaussian estimate.
Overall, single-mock results are within <2\sigma (dominated by the standard deviation of the perfect reference values), except the reduced chi-squared before reconstruction, which deviates by \approx 2.2 standard deviations (combined).
However, the scatter in these numbers is quite significant (e.g. a few percent in root-mean-square relative error R_ inv), and we should try to reduce it in future work.
ยง.ยง Errorbars on BAO scale parameter
Since the scale parameter ฮฑ_ BAO is the important output of the current BAO analysis (Eq.ย <ref>), we have decided to extract its errorbar, marginalized over the other four parameters.
This is quite trivial after the previous subsection โ we only needed to invert the parameter-space precisions
C^{\rm par}_R = \left(\Psi^{\rm par}_R\right)^{-1},
neglecting the inversion bias, since it is expected to be even smaller than before with the smaller size of the matrices.
Then we extract the marginalized errorbars from all the parameter covariances as
\sigma(\alpha_{\rm BAO}) = \sqrt{C^{\rm par}_{\alpha\alpha}}.
For the sample covariance, we expect the variance of C^{\rm par}_{S,\alpha\alpha} to nearly follow Eq. <ref>:
{\rm var}\left[C^{\rm par}_{S,\alpha\alpha}\right] \approx \frac{2}{n_S-1}\left(C^{\rm par}_{S,\alpha\alpha}\right)^2
and therefore the standard deviation of \sigma(\alpha_{\rm BAO}) to be
\sigma\left[\sigma(\alpha_{\rm BAO})\right] \approx \frac{\sigma(\alpha_{\rm BAO})}{\sqrt{2(n_S-1)}},
resulting in relative precision of 2.2%.
This has been confirmed in Appendixย <ref>.
The resulting errorbar (Fisher) forecasts are provided in Tableย <ref> and also presented as a scatter plot in Figureย <ref>.
We can notice that in any case the post-recon precision is expected to be higher than pre-recon.
In both pre-recon and post-recon, ฯ(ฮฑ_ BAO) in the mock-averaged clustering run without shot-noise rescaling are noticeably smaller than predicted from the sample covariance and are brought closer in the rescaled results.
Mock-averaged clustering with fit shot noise and single mock runs give very similar numbers.
The key conclusion is that single-mock runs are in good agreement with the sample covariance on ฯ(ฮฑ_ BAO), with a remarkably close match before reconstruction (just fractions of standard deviation) and a difference of โ 2 standard deviation after reconstruction.
This gives assurance that data-based covariances are on par with mock sample one for isotropic BAO fits.
ยง SUMMARY AND OUTLOOK
This work continues a series of papers <cit.> developing a semi-empirical approach for estimating covariances of 2PCFs, combining analytical methods with the usage of measured clustering and calibration on jackknives.
The former brings smoothness and reliability, and the latter allows for flexibility of the results while being independent of mock galaxy catalogs.
We should note that the method is expected to be applicable at intermediate scales โ as analytical methods tend to fail on the smallest scales, while on the largest scales, the number of configurations increases making the computation longer, and the signal to noise in correlation function measurement decreases.
The latter issue could be alleviated by a smooth transition to a theoretically modeled 2PCF, which we leave for future work.
We have discussed the implications of split random-random counts computation and made a slight modification to the formalism to cover the reconstructed 2PCF estimates.
Then, we reconsidered the methods for covariance matrix comparison, paying great attention to their meaning, interpretation, and noise stemming from mock sample variance.
Finally, we have applied the selected approaches to the validation of RascalC on single mock catalogs (using their individual clustering measurements and shifted random catalogs after reconstruction), each representing a reasonable proxy for data, by comparison with the full mock sample covariance.
We find a close agreement (maximum deviation \approx 2.2\sigma) with the perfect case, although much of this deviation is due to scatter in the results.
The preceding discussion about the interpretations of the metrics, focusing on a smaller number of observables and even fewer parameters allowed us to obtain a clearer quantitative assessment of the precision and accuracy of results than in previous works.
One should keep in mind the mocks are approximate and this can partially account for the imperfection of the match with the reference statistics.
Focusing on the errorbar of the BAO scale, we found a very close, percent-level agreement with the sample covariance from mocks.
It is on par with the accuracy that a set of โ 1000 simulations can provide.
The number of available mocks thus limits the precision of the validation at the current level.
The comparison suggests that noise in the input 2PCF might be a significant limiting factor for the accuracy of our covariance matrices in higher-dimensional spaces.
Smoothing this input or complementing it with a theoretical best-fit model could help to mitigate this issue without introducing many additional assumptions.
This marks an important topic for follow-up studies.
In the full measurement space before reconstruction, using a shot-noise rescaling is particularly clearly beneficial compared to the pure Gaussian estimate even with less noisy (mock sample average) input clustering.
The discrepancies and fluctuations are likely to be impacted by the precision of correlation function estimates from data, which will improve with its size in the future.
Further validations with larger mocks, corresponding to a year and/or full five years of DESI data, will follow.
During the comparison, we have seen indications that the extension to reconstructed data may not be working better than the method applied to pre-reconstructed data, contrary to expectations.
This might be related to the subtleties of small-scale behavior of reconstructed points and will be investigated in further detail in future work.
Another perspective direction is the development of alternatives to shot-noise rescaling within the more generic semi-analytic configuration-space formalism.
Usage of fully empirical higher-point functions is likely to be not viable, due to a significantly higher number of bins and accordingly lower signal-to-noise.
Precise theoretical modeling of non-Gaussian correlation functions is also very challenging.
Instead, we might include a basic prescription for the non-Gaussian covariance contribution inferred from a set of detailed simulations, or use approximate expressions for higher-point functions like \zeta(\vec r_1,\vec r_2) = Q\left[\xi(r_1)\xi(r_2)+\xi(r_1)\xi(|\vec r_1-\vec r_2|) + \xi(r_2)\xi(|\vec r_1-\vec r_2|)\right] motivated by hierarchical models <cit.>, and possibly a similar structure for the 4PCF.
This might provide better accuracy than rescaling the Gaussian terms while keeping the number of parameters low and thus still allowing us to fit them to a reference (e.g. jackknife) covariance.
On the other hand, the aforementioned 3PCF prescription is known to be far from exact with constant Q <cit.>, and the computations may suffer from slower convergence due to additional large values of small-scale 2PCF compared to Gaussian parts.
The key advantage of the approaches considered in this paper is that a covariance can be based on the data itself and does not require matching mock catalogs each time.
This alleviates the concern about the accuracy of such approximate simulations, not completely removing it since we still need some as references for validation of the prescriptions.
Perhaps more importantly, covariance computation becomes much more flexible.
Relevant cases are when the clustering signal changes after more data are gathered, or when alternative assumptions (for instance, the base cosmology) are tested.
For such updates, calibration, generation, and processing of a suite of mocks large enough for good sample covariance consumes great resources.
The computation of one proxy-data covariance in this paper took about 100 core-hours for pre-recon and about 300 core-hours for post-recon[In one case out of ten (both pre- and post-recon), a repeated computation taking twice as long was performed.].
This could be optimized further by noting that the intrinsic consistency in each run was much higher than the resemblance of reference sample covariance โ time of computation helps with the former but the latter is fundamentally limited by the accuracy of approximations and precision of the input correlation function.
Moreover, the number of mocks limits the level of accuracy of validation.
Fast covariance matrix computation without mocks can also allow shifting the balance from quantity to quality of the simulations, freeing resources for more detailed ones.
ยง ACKNOWLEDGEMENTS
We thank Daniel Forero-Sanchez, Violeta Gonzalez-Perez, Nikhil Padmanabhan, Will Percival, Oliver Philcox, and Martin White for useful comments and fruitful discussions.
MR and DJE are supported by the U.S. Department of Energy (DOE) grant DE-SC0007881 and by the Simons Foundation Investigator program.
This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of High-Energy Physics, under Contract No. DEโAC02โ05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. Additional support for DESI was provided by the U.S. National Science Foundation (NSF), Division of Astronomical Sciences under Contract No. AST-0950945 to the NSFโs National Optical-Infrared Astronomy Research Laboratory; the Science and Technology Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico (CONACYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: <https://www.desi.lbl.gov/collaborating-institutions>. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U. S. National Science Foundation, the U. S. Department of Energy, or any of the listed funding agencies.
The authors are honored to be permitted to conduct scientific research on Iolkam Duโag (Kitt Peak), a mountain with particular significance to the Tohono Oโodham Nation.
ยง DATA AVAILABILITY
The code is openly accessible at <https://github.com/oliverphilcox/RascalC>.
All data from the tables and figures are available in machine-readable format at 10.5281/zenodo.7750637 in compliance with the DESI data management plan.
The used in this paper will be made public with the DESI Y1 data release (DR1), and all the covariance matrices used in this work will be released in the same supplementary material <cit.>.
ยง COVARIANCE ESTIMATORS
ยง.ยง Full covariance in radial and angular bins
The expression for the full covariance in radial and angular bins (\hat\xi^{XY}{}_a^c, \hat\xi^{ZW}{}_b^d) is
\hat{C}^{XY,ZW}_{ab}{}^{cd}\left(\alpha_{\rm SN}\right) = {}^4C^{XY,ZW}_{ab}{}^{cd} + \frac{\alpha_{\rm SN}^X}{4}\left[\delta^{XW}\,{}^3C^{X,YZ}_{ab}{}^{cd} + \delta^{XZ}\,{}^3C^{X,YW}_{ab}{}^{cd}\right]
+ \frac{\alpha_{\rm SN}^Y}{4}\left[\delta^{YW}\,{}^3C^{Y,XZ}_{ab}{}^{cd} + \delta^{YZ}\,{}^3C^{Y,XW}_{ab}{}^{cd}\right]
+ \frac{\alpha_{\rm SN}^X \alpha_{\rm SN}^Y}{2}\left(\delta^{XW}\delta^{YZ} + \delta^{XZ}\delta^{YW}\right){}^2C^{XY}_{ab}{}^{cd}
with
{}^4C^{XY,ZW}_{ab}{}^{cd} = \frac{1}{(R^XR^Y)_a^c\,(R^ZR^W)_b^d}\sum_{i\neq j\neq k\neq l} n^X_i n^Y_j n^Z_k n^W_l\, w^X_i w^Y_j w^Z_k w^W_l\, \Delta^a(r_{ij})\,\Delta^c(\mu_{ij})\,\Delta^b(r_{kl})\,\Delta^d(\mu_{kl})
\times\left[\eta^{(c),XYWZ}_{ijkl} + 2\,\xi^{XZ}_{ik}\xi^{YW}_{jl}\right]
{}^3C^{Y,XZ}_{ab}{}^{cd} = \frac{4}{(R^XR^Y)_a^c\,(R^YR^Z)_b^d}\sum_{i\neq j\neq k} n^X_i n^Y_j n^Z_k\, w^X_i (w^Y_j)^2 w^Z_k\, \Delta^a(r_{ij})\,\Delta^c(\mu_{ij})\,\Delta^b(r_{jk})\,\Delta^d(\mu_{jk})\left[\zeta^{XYZ}_{ijk}+\xi^{XZ}_{ik}\right]
{}^2C^{XY}_{ab}{}^{cd} = \frac{2\,\delta^{ab}\delta^{cd}}{(R^XR^Y)_a^c\,(R^XR^Y)_b^d}\sum_{i\neq j} n^X_i n^Y_j\,(w^X_i w^Y_j)^2\, \Delta^a(r_{ij})\,\Delta^c(\mu_{ij})\left[1+\xi^{XY}_{ij}\right]
where ฮด^XY, ฮด^ab and ฮด^cd are Kronecker deltas.
Similarly to <cit.>, we have written 2\xi_{ik}\xi_{jl} instead of \xi_{ik}\xi_{jl}+\xi_{il}\xi_{jk}, allowing us to optimize the computation using the symmetries of the term, but requiring a few more distinct terms to be computed in the multi-tracer setup.
Computing the full sums (beyond the pair one) is not feasible, so they are estimated by the Monte-Carlo method instead, by randomly sampling a subset of 2/3/4-point configurations from the random catalog <cit.>.
The most practical form for single tracer X is simpler:
C^{XX,XX}_{ab}{}^{cd}\left(\alpha_{\rm SN}^X\right) = {}^4C^{XX,XX}_{ab}{}^{cd} + \alpha_{\rm SN}^X\,{}^3C^{X,XX}_{ab}{}^{cd} + \left(\alpha_{\rm SN}^X\right)^2\,{}^2C^{XX}_{ab}{}^{cd}.
ยง.ยง Jackknife covariance in radial and angular bins
The jackknife pair counts are
\left(N^XN^Y_A\right)_a^c = \sum_{i\neq j} n^X_i n^Y_j w^X_i w^Y_j\,\Delta^a(r_{ij})\,\Delta^c(\mu_{ij})\,\delta^X_i\delta^Y_j\, q^A_{ij}
\left(R^XR^Y_A\right)_a^c = \sum_{i\neq j} n^X_i n^Y_j w^X_i w^Y_j\,\Delta^a(r_{ij})\,\Delta^c(\mu_{ij})\, q^A_{ij},
where q^A_ij is the jackknife weighting factor for the pair of particles, and for any pair โ_A q^A_ij = 1 in the unrestricted jackknife formalism.
The binned correlation function estimate ฮพฬ^XY_A_a^c is their ratio.
It is sensible to weight the regions by the RR pair counts, which roughly correspond to the volume fraction of each region:
\left(w^{XY}_A\right)_a^c = \frac{\left(R^XR^Y_A\right)_a^c}{\left(R^XR^Y\right)_a^c},
and then the weighted average of the jackknife 2PCF is identical to full-survey estimate (Eq.ย (<ref>)).
The jackknife covariance estimate is
C_J^{XY,ZW}{}_{ab}{}^{cd} = \frac{1}{1 - \sum_B \left(w^{XY}_B\right)_a^c \left(w^{ZW}_B\right)_b^d}\sum_A \left(w^{XY}_A\right)_a^c \left(w^{ZW}_A\right)_b^d \left[\hat\xi^{XY}_{A}{}_a^c - \hat\xi^{XY}{}_a^c\right]\left[\hat\xi^{ZW}_{A}{}_b^d - \hat\xi^{ZW}{}_b^d\right].
By substituting Eq.ย (<ref>), expanding using Eqs.ย (<ref>)ย &ย (<ref>) and simplifying through Eq.ย (<ref>) one can arrive to
Cฬ_J^XY,ZW_ab^cd = ^4C_J^XY,ZW_ab^cd + ฮฑ^X/4ฮด^XW^3C_J^X,YZ_ab^cd + ฮด^XZ^3C_J^X,YW_ab^cd + ฮฑ^Y/4ฮด^YW^3C_J^Y,XZ_ab^cd + ฮด^YZ^3C_J^Y,XW_ab^cd
+ ฮฑ^X ฮฑ^Y/2ฮด^XWฮด^YZ + ฮด^XZฮด^YW^2C_J^XY_ab^cd
with
^4C_J^XY,ZW_ab^cd = 1/(R^XR^Y)_a^c (R^ZR^W)_b^d 1 - โ_B w^XY_B_a^c w^ZW_B_b^dโ_iโ jโ kโ l n^X_i n^Y_j n^Z_k n^W_l w^X_i w^Y_j w^Z_k w^W_l ฮ^a(r_ij) ฮ^c(ฮผ_ij)
รฮ^b(r_kl) ฮ^d(ฮผ_kl) ฮท^( c),XYWZ_ijkl + ฮพ^XY_ijฮพ^ZW_kl + 2ฮพ^XZ_ikฮพ^YW_jlฯ^XY,ZW_ijkl_ab^cd
^3C_J^Y,XZ_ab^cd = 4/(R^XR^Y)_a^c (R^YR^Z)_b^d 1 - โ_B w^XY_B_a^c w^YZ_B_b^dโ_iโ jโ k n^X_i n^Y_j n^Z_k w^X_i w^Y_j^2 w^Z_k ฮ^a(r_ij) ฮ^c(ฮผ_ij) ฮ^b(r_jk) ฮ^d(ฮผ_jk)
รฮถ^XYZ_ijk+ฮพ^XZ_ikฯ^XY,YZ_ijjk_ab^cd
^2C_J^XY_ab^cd = 2ฮด^abฮด^cd/(R^XR^Y)_a^c (R^XR^Y)_b^d 1 - โ_B w^XY_B_a^c w^XY_B_b^dโ_iโ j n^X_i n^Y_j w^X_i w^Y_j^2 ฮ^a(r_ij) ฮ^c(ฮผ_ij) 1+ฮพ^XY_ijฯ^XY,XY_ijij_ab^cd,
where ฯ_ijkl_ab^cd is an additional weight tensor:
ฯ^XY,ZW_ijkl_ab^cd = โ_A q^A_ij - w^XY_A_a^cq^A_kl - w^ZW_A_b^d.
In practice, shot-noise rescaling for each tracer has been obtained by fitting the prediction for jackknife covariance of its auto-correlation function
C_J^XX,XX_ab^cd = ^4C_J^XX,XX_ab^cd + ฮฑ^X ^3C_J^X,XX_ab^cd + ฮฑ^X^2 ^2C_J^XX_ab^cd
to the data-based jackknife estimate (computed via Eq.ย (<ref>)).
The resulting ฮฑ^X value(s) should be plugged into Eq.ย (<ref>) to obtain the full survey covariance.
ยง.ยง Covariance of Legendre multipoles in radial bins
<cit.> further derive direct covariance estimators for Legendre moments of the anisotropic 2PCF, which is related to \xi(r,\mu) through
\xi^{XY}(r,\mu) = \sum_{\ell=0}^{\infty}\xi^{XY,\ell}(r)\,L_\ell(\mu),
\xi^{XY,\ell}(r) = \frac{2\ell+1}{2}\int_{-1}^{1}d\mu\,\xi^{XY}(r,\mu)\,L_\ell(\mu) = \frac{(-1)^\ell+1}{2}(2\ell+1)\int_{0}^{1}d\mu\,\xi^{XY}(r,\mu)\,L_\ell(\mu),
L_โ(ฮผ) being the Legendre polynomial of order โ, and the second equality in last line assumes symmetry ฮพ^XY(r,ฮผ) = ฮพ^XY(r,-ฮผ) (necessarily true for auto-correlation, X=Y, and violation in cross-correlations is debatable).
While one could estimate the angularly binned 2PCF first and then transform it to Legendre moments, a direct computation of the latter allows to evade inaccuracies caused by the replacement of integral by sum over bins, and removes the need for very fine splitting in ฮผ, which would make the covariance integrals slower to converge.
Averaging Eq. (<ref>) between the boundaries of radial bin a, we obtain
\xi^{XY,\ell}_a = (2\ell+1)\int_0^1 d\mu\,\xi^{XY}_a(\mu)\,L_\ell(\mu).
Here \xi^{XY}_a(\mu), binned radially but not angularly, is given by the reduced Landy-Szalay estimator
\xi^{XY}_a(\mu) = \frac{\left(N^XN^Y\right)_a(\mu)}{\left(R^XR^Y\right)_a(\mu)}
similarly to Eqs.ย (<ref>)ย &ย (<ref>).
The continuous counts PP_a(ฮผ) (where each P can stand for D^X, D^Y, R^X or R^Y) are the limit of the ratio of conventional pair counts in infinitesimally small angular bins c to its width ฮดฮผ,
(PP)_a^c = \int d\mu\,(PP)_a(\mu)\,\Delta^c(\mu) \approx (PP)_a(\mu_c)\,\delta\mu,
ฮผ_c being the bin center.
A functional form for the continuous random pair counts R^XR^Y_aฮผ is needed to go further due to integration in Eq.ย (<ref>).
This is done via a survey correction function ฮฆ^XY(r_a,ฮผ) (following <cit.>), which accounts for survey boundaries and selection, defined through
(R^XR^Y)_a(\mu) \equiv \frac{V^X \bar{n}^X \bar{n}^Y \bar{w}^X \bar{w}^Y v_a}{\Phi^{XY}(r_a,\mu)},
where V^X is the total volume of the survey for tracer X, and \bar{n}^X and \bar{w}^X are the survey-averaged number density and weight for tracer X, respectively.
v_a = \frac{4\pi}{3}\left(r^3_{a,\rm max}-r^3_{a,\rm min}\right) is the volume of radial bin a (r_{a,\rm min} and r_{a,\rm max} being its lower and upper boundaries).
The numerator is the expression for a periodic box with uniform number density, weights and bins in ฮผ (i.e. negative values reversed into [0,1] interval), thus in this simple case the survey correction function ฮฆ^XY(r_a,ฮผ)=1.
For nontrivial geometry, varying density and/or weights, it may become different but not too far by order of magnitude.
However, note that only the angular variable value is arbitrary, while the radius is taken as representative of the bin.
In practice, a piecewise-polynomial functional fit to empirical data was used by <cit.>.
Then the 2PCF Legendre moment can be estimated
and the covariance can be computed by definition, using Eq.ย (<ref>), resulting in
(C^{XY,ZW})_{ab}^{pq} = ({}^4C^{XY,ZW})_{ab}^{pq} + \frac{\alpha^X}{4}\left[\delta^{XW}({}^3C^{X,YZ})_{ab}^{pq}+\delta^{XZ}({}^3C^{X,YW})_{ab}^{pq}\right]
+ \frac{\alpha^Y}{4}\left[\delta^{YW}({}^3C^{Y,XZ})_{ab}^{pq}+\delta^{YZ}({}^3C^{Y,XW})_{ab}^{pq}\right]
+ \frac{\alpha^X\alpha^Y}{2}\left(\delta^{XW}\delta^{YZ}+\delta^{XZ}\delta^{YW}\right)({}^2C^{XY})_{ab}^{pq}.
({}^4C^{XY,ZW})_{ab}^{pq} = \frac{(2p+1)(2q+1)}{V^X V^Z \bar{n}^X \bar{n}^Y \bar{n}^Z \bar{n}^W \bar{w}^X \bar{w}^Y \bar{w}^Z \bar{w}^W v_a v_b}\sum_{i\neq j\neq k\neq l} n^X_i n^Y_j n^Z_k n^W_l\, w^X_i w^Y_j w^Z_k w^W_l\,\Delta^a(r_{ij})\,\Delta^b(r_{kl})
\times\,\Phi^{XY}(r_a,\mu_{ij})\,\Phi^{ZW}(r_b,\mu_{kl})\,L_p(\mu_{ij})\,L_q(\mu_{kl})\left[\xi^{(4),XYZW}_{ijkl}+\xi^{XZ}_{ik}\xi^{YW}_{jl}+\xi^{XW}_{il}\xi^{YZ}_{jk}\right]
({}^3C^{Y,XZ})_{ab}^{pq} = 4\times\frac{(2p+1)(2q+1)}{V^X V^Z \bar{n}^X (\bar{n}^Y)^2 \bar{n}^Z \bar{w}^X (\bar{w}^Y)^2 \bar{w}^Z v_a v_b}\sum_{i\neq j\neq k} n^X_i n^Y_j n^Z_k\, w^X_i (w^Y_j)^2 w^Z_k\,\Delta^a(r_{ij})\,\Delta^b(r_{jk})
\times\,\Phi^{XY}(r_a,\mu_{ij})\,\Phi^{ZY}(r_b,\mu_{jk})\,L_p(\mu_{ij})\,L_q(\mu_{jk})\left[\zeta^{XYZ}_{ijk}+\xi^{XZ}_{ik}\right]
({}^2C^{XY})_{ab}^{pq} = 2\,\delta_{ab}\times\frac{(2p+1)(2q+1)}{\left(V^X \bar{n}^X \bar{n}^Y \bar{w}^X \bar{w}^Y v_a\right)^2}\sum_{i\neq j} n^X_i n^Y_j\,(w^X_i w^Y_j)^2\,\Delta^a(r_{ij})
\times\left[\Phi^{XY}(r_a,\mu_{ij})\right]^2 L_p(\mu_{ij})\,L_q(\mu_{ij})\left[1+\xi^{XY}_{ij}\right].
Jackknife poses certain challenges to direct computation for the Legendre moments of the 2PCF.
So far it has been suggested that the shot-noise rescaling value(s) should be optimized with a jackknife estimate on angularly binned 2PCF <cit.>.
ยง STATISTICS OF COMPARISON METRICS FOR NOISY SAMPLE COVARIANCE MATRIX
Here we provide derivations of the expectation values for comparison metrics between a noisy sample covariance and the true covariance/precision matrix.
This is useful for testing how close results are to the latter.
ยง.ยง KL divergence mean and variance
A more generic setup โ two sample covariance matrices based on draws from a multivariate normal distribution โ has been considered in Appendixย D of <cit.>.
However, the derivation was limited to the expectation value of the KL divergence between them, and we have not been able to find a reference about the metric's scatter around the mean (variance or standard deviation).
In addition, we believe the final result there is slightly incorrect, namely 1 should be subtracted from the number of samples.
This is because for the estimate of sample covariance commonly used with mocks
X_{ab} = \frac{1}{n_S-1}\sum_{i=1}^{n_S} \left(x_{a,i} - \bar{x}_a\right)\left(x_{b,i} - \bar{x}_b\right)
the mean is not known beforehand but estimated from the sample as well: \bar{x}_a \equiv \frac{1}{n_S}\sum_{i=1}^{n_S} x_{a,i}.
This reduces the number of degrees of freedom by one.
Then the covariance of sample covariance matrix elements is
{\rm cov}\left(X_{ab}, X_{cd}\right) = \frac{C_{0,ac} C_{0,bd} + C_{0,ad} C_{0,bc}}{n_S-1}
instead of
{\rm cov}\left(X_{ab}, X_{cd}\right) = \frac{C_{0,ac} C_{0,bd} + C_{0,ad} C_{0,bc}}{n_S}
as in <cit.>.
C_0 is the true underlying covariance matrix of the Gaussian distribution the samples are drawn from.
The sample covariance estimate is unbiased, meaning that the expectation value is the true covariance: \langle X\rangle = C_0.
Then, considering two sample covariance matrices X_i obtained from n_S^(i) samples each, decomposing them as X_i= C_0+ฮด X_i, Taylor expanding and only leaving the leading nontrivial (quadratic) order in ฮด X_i, we obtain
D_{\rm KL}\left(X_1^{-1}, X_2\right) \approx \frac{N_{\rm bins}(N_{\rm bins}+1)}{4}\left[\frac{1}{n_S^{(1)}-1} + \frac{1}{n_S^{(2)}-1}\right]
instead of
D_{\rm KL}\left(X_1^{-1}, X_2\right) \approx \frac{N_{\rm bins}(N_{\rm bins}+1)}{4}\left[\frac{1}{n_S^{(1)}} + \frac{1}{n_S^{(2)}}\right]
as in <cit.>.
Since in this work we only consider one sample covariance matrix, while the results are not expected to follow the Wishart distribution, the more relevant result is for the true precision matrix ฮจ_0:
D_{\rm KL}\left(\Psi_0, X\right) \approx \frac{N_{\rm bins}(N_{\rm bins}+1)}{4(n_S-1)},
which can be obtained from Eq.ย (<ref>) by setting the first number of samples to infinity, reducing the noise in X_1^-1 to zero.
For the further derivations, it is convenient to โnormalizeโ the covariance. Let us take
y_i = ฮจ_0^1/2 x_i,
where ฮจ_0^1/2 means the matrix square root of ฮจ_0 โ a matrix with the same eigenvectors and eigenvalues equal to the square roots of corresponding eigenvalues of the original matrix.
Then
{\rm cov}\left(y_{a,i}, y_{b,j}\right) = \delta^{ij}\delta^{ab}.
Let us also introduce \bar{y}_a \equiv \frac{1}{n_S}\sum_{i=1}^{n_S} y_{a,i}, define \delta y_{a,i} \equiv y_{a,i} - \bar{y}_a (so that \langle\delta y_{a,i}\rangle = 0) and finally compute the "normalized" covariance matrix:
Y_{ab} \equiv \frac{1}{n_S-1}\sum_{i=1}^{n_S}\delta y_{a,i}\,\delta y_{b,i}.
Then also
Y = ฮจ_0^1/2 X ฮจ_0^1/2.
Let us compute
{\rm cov}\left(\delta y_{a,i}, \delta y_{b,j}\right) = \langle\delta y_{a,i}\,\delta y_{b,j}\rangle = \delta^{ij}\delta^{ab} - 2\times\frac{1}{n_S}\delta^{ab} + n_S\times\frac{1}{n_S^2}\delta^{ab} = \left(\delta^{ij} - \frac{1}{n_S}\right)\delta^{ab}.
As a consequence, \langle Y\rangle = \mathbb{1}, and we can expand
Y = \mathbb{1} + \delta Y
while \langle\delta Y\rangle = 0. Also,
{\rm cov}\left(Y_{ab}, Y_{cd}\right) = \langle\delta Y_{ab}\,\delta Y_{cd}\rangle = \frac{\delta^{ac}\delta^{bd} + \delta^{ad}\delta^{bc}}{n_S-1}.
Now let us expand the KL divergence using the โnormalizedโ covariance matrix, starting from
2 D_{\rm KL}\left(\Psi_0, X\right) = {\rm tr}\left(\Psi_0 X\right) - N_{\rm bins} - \ln\det\left(\Psi_0 X\right),
we can write \Psi_0 = \Psi_0^{1/2}\Psi_0^{1/2}, and use the cyclic property of trace and determinant to arrive at
2 D_{\rm KL}\left(\Psi_0, X\right) = {\rm tr}\, Y - N_{\rm bins} - \ln\det Y,
remembering Eq. (<ref>).
Then we expand in \delta Y (Eq. (<ref>)):
2 D_{\rm KL}\left(\Psi_0, X\right) = {\rm tr}\,\delta Y - \ln\det\left(\mathbb{1} + \delta Y\right).
Now using \ln\det A = {\rm tr}\ln A and expanding the second term in a Taylor series up to quadratic order in \delta Y we obtain
2 D_{\rm KL}\left(\Psi_0, X\right) \approx \frac{1}{2}{\rm tr}\left(\delta Y^2\right) = \frac{1}{2}\sum_{a,b=1}^{N_{\rm bins}}\delta Y_{ab}\,\delta Y_{ab}.
Taking the expectation value of Eq. (<ref>) and using Eq. (<ref>), one can re-derive Eq. (<ref>). We will proceed to compute the variance:
{\rm var}\left[{\rm tr}\left(\delta Y^2\right)\right] = \left\langle\left[{\rm tr}\left(\delta Y^2\right)\right]^2\right\rangle - \left\langle{\rm tr}\left(\delta Y^2\right)\right\rangle^2 = \sum_{a,b,c,d=1}^{N_{\rm bins}} \left[\langle\delta Y_{ab}\delta Y_{ab}\delta Y_{cd}\delta Y_{cd}\rangle - \langle\delta Y_{ab}\delta Y_{ab}\rangle\langle\delta Y_{cd}\delta Y_{cd}\rangle\right].
Full expansion gives
\langle\delta Y_{ab}\,\delta Y_{ab}\,\delta Y_{cd}\,\delta Y_{cd}\rangle = \frac{1}{(n_S-1)^4}\sum_{i,j,k,l=1}^{n_S}\left\langle\delta y_{a,i}\delta y_{b,i}\,\delta y_{a,j}\delta y_{b,j}\,\delta y_{c,k}\delta y_{d,k}\,\delta y_{c,l}\delta y_{d,l}\right\rangle.
\delta y are normally distributed and have zero means, so for them we can use Wick's theorem to split this into all possible pairs. (\delta Y also has zero mean, but not a Gaussian distribution, which is why we need to go to a deeper level.) The total number of pairs is 8!/(4!\times 2^4)=105, so it is easy to go over them in a computer program. Additionally, it is useful to check which are similar. It is apparent that the following five index permutations leave the expression unchanged: a \leftrightarrow b, c \leftrightarrow d, i \leftrightarrow j, k \leftrightarrow l and (a,b,i,j) \leftrightarrow (c,d,k,l). Finally, some of the pairs will not contribute to the variance and can be excluded: contraction of a pair inside the same Y contributes to \langle Y \rangle = \mathbb{1} and must be subtracted; and if all the contracted pairs correspond to Y's with the same indices, that contributes to the mean of D_{\rm KL} and has to be subtracted too.
We find there are 56 pair assignments contributing to \approx 16(n_S-1)^4\,{\rm var}\left[D_{\rm KL}(\Psi_0, X)\right], but no more than 8 of them are distinct after using symmetries:
8 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_Sโจฮด y_a,iฮด y_a,jโฉโจฮด y_b,iฮด y_c,kโฉโจฮด y_b,jฮด y_c,lโฉโจฮด y_d,kฮด y_d,lโฉ =
8 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_S( ฮด^ij - 1/n_S) ( ฮด^ik - 1/n_S) ฮด^bc( ฮด^jl - 1/n_S) ฮด^bc( ฮด^kl - 1/n_S)
= 8 N_ bins^3 (n_S - 1)
16 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_Sโจฮด y_a,iฮด y_a,jโฉโจฮด y_b,iฮด y_c,kโฉโจฮด y_b,jฮด y_d,lโฉโจฮด y_d,kฮด y_c,lโฉ =
16 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_S( ฮด^ij - 1/n_S) ( ฮด^ik - 1/n_S) ฮด^bc( ฮด^jl - 1/n_S) ฮด^bd( ฮด^kl - 1/n_S) ฮด^dc = 16 N_ bins^2 (n_S-1)
8 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_Sโจฮด y_a,iฮด y_b,jโฉโจฮด y_b,iฮด y_c,kโฉโจฮด y_a,jฮด y_d,lโฉโจฮด y_d,kฮด y_c,lโฉ =
8 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_S( ฮด^ij - 1/n_S) ฮด^ab( ฮด^ik - 1/n_S) ฮด^bc( ฮด^jl - 1/n_S) ฮด^ad( ฮด^kl - 1/n_S) ฮด^cd = 8 N_ bins (n_S-1)
4 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_Sโจฮด y_a,iฮด y_c,kโฉโจฮด y_b,iฮด y_d,kโฉโจฮด y_a,jฮด y_c,lโฉโจฮด y_b,jฮด y_d,lโฉ =
4 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_S( ฮด^ik - 1/n_S) ฮด^ac( ฮด^ik - 1/n_S) ฮด^bd( ฮด^jl - 1/n_S) ฮด^ac( ฮด^jl - 1/n_S) ฮด^bd = 4 N_ bins^2 (n_S-1)^2
4 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_Sโจฮด y_a,iฮด y_c,kโฉโจฮด y_b,iฮด y_d,kโฉโจฮด y_a,jฮด y_d,lโฉโจฮด y_b,jฮด y_c,lโฉ =
4 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_S( ฮด^ik - 1/n_S) ฮด^ac( ฮด^ik - 1/n_S) ฮด^bd( ฮด^jl - 1/n_S) ฮด^ad( ฮด^jl - 1/n_S) ฮด^bc = 4 N_ bins (n_S-1)^2
4 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_Sโจฮด y_a,iฮด y_c,kโฉโจฮด y_b,iฮด y_c,lโฉโจฮด y_a,jฮด y_d,kโฉโจฮด y_b,jฮด y_d,lโฉ =
4 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_S( ฮด^ik - 1/n_S) ฮด^ac( ฮด^il - 1/n_S) ฮด^bc( ฮด^jk - 1/n_S) ฮด^ad( ฮด^jl - 1/n_S) ฮด^bd = 4 N_ bins (n_S-1)
8 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_Sโจฮด y_a,iฮด y_c,kโฉโจฮด y_b,iฮด y_c,lโฉโจฮด y_a,jฮด y_d,lโฉโจฮด y_b,jฮด y_d,kโฉ =
8 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_S( ฮด^ik - 1/n_S) ฮด^ac( ฮด^il - 1/n_S) ฮด^bc( ฮด^jl - 1/n_S) ฮด^ad( ฮด^jk - 1/n_S) ฮด^bd = 8 N_ bins (n_S-1)
4 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_Sโจฮด y_a,iฮด y_c,kโฉโจฮด y_b,iฮด y_d,lโฉโจฮด y_a,jฮด y_c,lโฉโจฮด y_b,jฮด y_d,kโฉ =
4 โ_a,b,c,d=1^N_ binsโ_i,j,k,l=1^n_S( ฮด^ik - 1/n_S) ฮด^ac( ฮด^il - 1/n_S) ฮด^bd( ฮด^jl - 1/n_S) ฮด^ac( ฮด^jk - 1/n_S) ฮด^bd = 4 N_ bins^2 (n_S-1)
Gathering all together gives
{\rm var}\left[{\rm tr}\left(\delta Y^2\right)\right] \approx \frac{N_{\rm bins}}{(n_S-1)^3}\left[8 N_{\rm bins}^2 + 16 N_{\rm bins} + 8 + 4 N_{\rm bins}(n_S-1) + 4(n_S-1) + 4 + 8 + 4 N_{\rm bins}\right] =
\frac{4 N_{\rm bins}}{(n_S-1)^3}\left[2 N_{\rm bins}^2 + 4 N_{\rm bins} + 4 + (N_{\rm bins} + 1) n_S\right] =
\frac{4 N_{\rm bins}}{(n_S-1)^3}\left[(N_{\rm bins} + 1)(n_S + 2 N_{\rm bins} + 2) + 2\right].
Then, according to Eq.ย (<ref>), the variance of the KL divergence is approximately 16 times smaller:
{\rm var}\left[D_{\rm KL}\left(\Psi_0, X\right)\right] \approx \frac{N_{\rm bins}\left[(N_{\rm bins} + 1)(n_S + 2 N_{\rm bins} + 2) + 2\right]}{4(n_S-1)^3}.
This is an approximation because in Eq.ย (<ref>) the Taylor expansion was truncated at the quadratic order in ฮด Y.
Other derivation steps are exact.
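For reference, the expectation value and scatter implied by the two formulas above can be evaluated numerically; for the N_bins = 45, n_S = 999 case used in the main text this gives roughly 0.52 and 0.024 (a sketch with our own function names).

```python
import numpy as np

def dkl_mean(n_bins, n_s):
    # <D_KL(Psi_0, X)> ~= N(N+1) / (4 (n_S - 1))
    return n_bins * (n_bins + 1) / (4.0 * (n_s - 1))

def dkl_std(n_bins, n_s):
    # var[D_KL] ~= N [(N+1)(n_S + 2N + 2) + 2] / (4 (n_S - 1)^3)
    var = n_bins * ((n_bins + 1) * (n_s + 2 * n_bins + 2) + 2) / (4.0 * (n_s - 1) ** 3)
    return np.sqrt(var)

print(dkl_mean(45, 999), dkl_std(45, 999))  # roughly 0.52 and 0.024
```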
ยง.ยง Inverse test
We considered how different ฯ^2 would result from one matrix compared to the other.
If u is an unit vector in N_ bins-dimensional space (u^T u = 1), then w = C_2^1/2 u gives an unit ฯ^2 according to C_2: w^T C_2^-1 w=1.
Here C_2^1/2 means the matrix square root of C_2 โ a matrix with the same eigenvectors and eigenvalues equal to the square roots of corresponding eigenvalues of the original matrix.
Then we consider the ฯ^2 with respect to C_1 (ฮจ_1): w^T ฮจ_1 w and subtract the expected value of 1:
w^T \Psi_1 w - 1 = u^T\left[C_2^{1/2}\Psi_1 C_2^{1/2} - \mathbb{1}\right]u.
Taking the RMS over all directions of u, one arrives at the RMS eigenvalue of this matrix, which can be expressed through the Frobenius norm:
R_{\rm inv}\left(\Psi_1, C_2\right) = \frac{1}{\sqrt{N_{\rm bins}}}\left\|C_2^{1/2}\Psi_1 C_2^{1/2} - \mathbb{1}\right\|_F.
The Frobenius norm can be recast as a trace and simplified further using its cyclic property:
\left\|C_2^{1/2}\Psi_1 C_2^{1/2} - \mathbb{1}\right\|_F^2 = {\rm tr}\left[\left(C_2^{1/2}\Psi_1 C_2^{1/2} - \mathbb{1}\right)^T\left(C_2^{1/2}\Psi_1 C_2^{1/2} - \mathbb{1}\right)\right] = {\rm tr}\left[\left(C_2^{1/2}\Psi_1 C_2^{1/2} - \mathbb{1}\right)^2\right]
= {\rm tr}\left[C_2^{1/2}\Psi_1 C_2 \Psi_1 C_2^{1/2} - 2\, C_2^{1/2}\Psi_1 C_2^{1/2} + \mathbb{1}\right] = {\rm tr}\left[\Psi_1 C_2 \Psi_1 C_2 - 2\,\Psi_1 C_2 + \mathbb{1}\right] = {\rm tr}\left[\left(\Psi_1 C_2 - \mathbb{1}\right)^2\right].
This allows us to compute the quantity as
R_{\rm inv}\left(\Psi_1, C_2\right) = \sqrt{\frac{{\rm tr}\left[\left(\Psi_1 C_2 - \mathbb{1}\right)^2\right]}{N_{\rm bins}}},
which is more computationally robust โ it is better to avoid factorizing (and inverting) covariance matrices (especially the sample one) when possible.
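A direct implementation of this form is straightforward; the sketch below follows the equation above literally (function name ours), using the Frobenius norm of \Psi_1 C_2 - \mathbb{1} without factorizing or inverting the sample covariance.

```python
import numpy as np

def r_inv(precision_1, cov_2):
    """RMS relative chi^2 mismatch: R_inv = sqrt( ||Psi_1 C_2 - I||_F^2 / N_bins )."""
    n_bins = cov_2.shape[0]
    resid = precision_1 @ cov_2 - np.eye(n_bins)
    return np.linalg.norm(resid, ord="fro") / np.sqrt(n_bins)
```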
It is notable that this metric is related to the discriminant matrix
P = โ(ฮจ_R)^T C_S โ(ฮจ_R) - ๐,
where โ(ฮจ_R) is the lower Cholesky decomposition, used in <cit.>.
Through a similar procedure, one can find that its Frobenius norm is the same as above:
๐_F^2 = ฮจ_1 C_2 - ๐^2.
However, the interpretation of elements of the discriminant matrix is less clear.
Now let us consider the square of this metric (to remove the root):
R_{\rm inv}^2\left(\Psi_0, X\right) = \frac{1}{N_{\rm bins}}\,{\rm tr}\left[\Psi_0 X \Psi_0 X - 2\,\Psi_0 X + \mathbb{1}\right].
Remembering Eq.ย (<ref>), we arrive to
R_{\rm inv}^2\left(\Psi_0, X\right) = \frac{1}{N_{\rm bins}}\,{\rm tr}\left[Y^2 - 2\,Y + \mathbb{1}\right].
Furthermore, expanding in ฮด Y (according to Eq.ย (<ref>)), we get
R_{\rm inv}^2\left(\Psi_0, X\right) = \frac{1}{N_{\rm bins}}\,{\rm tr}\left(\delta Y^2\right),
which is similar to Eq.ย (<ref>) up to a constant factor of 1/N_ bins and lack of approximations.
We can obtain the expectation value by plugging in Eq.ย (<ref>):
\left\langle R_{\rm inv}^2\left(\Psi_0, X\right)\right\rangle = \frac{N_{\rm bins}+1}{n_S-1}.
For variance we can use Eq.ย (<ref>):
{\rm var}\left[R_{\rm inv}^2\left(\Psi_0, X\right)\right] = \frac{4\left[(N_{\rm bins} + 1)(n_S + 2 N_{\rm bins} + 2) + 2\right]}{N_{\rm bins}(n_S-1)^3}.
Assuming {\rm var}\left[R_{\rm inv}^2\left(\Psi_0, X\right)\right]\ll\left\langle R_{\rm inv}^2\left(\Psi_0, X\right)\right\rangle^2, we can take the square root to estimate the mean and variance of the not-squared metric as
\left\langle R_{\rm inv}\left(\Psi_0, X\right)\right\rangle \approx \sqrt{\left\langle R_{\rm inv}^2\left(\Psi_0, X\right)\right\rangle} = \sqrt{\frac{N_{\rm bins}+1}{n_S-1}},
{\rm var}\left[R_{\rm inv}\left(\Psi_0, X\right)\right] \approx \frac{{\rm var}\left[R_{\rm inv}^2\left(\Psi_0, X\right)\right]}{4\left\langle R_{\rm inv}^2\left(\Psi_0, X\right)\right\rangle} = \frac{(N_{\rm bins} + 1)(n_S + 2 N_{\rm bins} + 2) + 2}{N_{\rm bins}(N_{\rm bins}+1)(n_S-1)^2}.
Also, it is useful to note that R_ inv is related to the ฯ^2 approximation of minus log-likelihood, considering the covariance of all covariance matrix elements.
This is rather easy to see with ฮด Y expression (Eq.ย (<ref>)).
Since ฮด Y is real and symmetric, we obtain
N_{\rm bins}\times R_{\rm inv}^2\left(\Psi_0, X\right) = \left\|\delta Y\right\|^2_F = \sum_{a,b=1}^{N_{\rm bins}}\delta Y_{ab}^2.
From Eq.ย (<ref>) we conclude that distinct elements of Y matrix have zero covariance (excluding the pairs symmetric with respect to the diagonal), its diagonal elements have a variance of 2/(n_S-1) and off-diagonal elements have a variance of 1/(n_S-1).
Then this covariance is trivial to invert and we have to sum squares of deviations of independent elements divided by their variance to get the \chi^2:
\chi^2 = (n_S-1)\left[\frac{1}{2}\sum_{a=1}^{N_{\rm bins}}\delta Y_{aa}^2 + \sum_{a=1}^{N_{\rm bins}}\sum_{b=a+1}^{N_{\rm bins}}\delta Y_{ab}^2\right] = \frac{n_S-1}{2}\sum_{a,b=1}^{N_{\rm bins}}\delta Y_{ab}^2 = \frac{(n_S-1)\, N_{\rm bins}}{2}\times R_{\rm inv}^2\left(\Psi_0, X\right).
Since elements of Y are independent linear combinations of elements of sample covariance matrix X (via Eq.ย (<ref>), since ฮจ_0 is not degenerate), the same holds for X, but a direct computation without the โrotationโ into Y would be significantly longer as the covariance of X_ab elements (Eq.ย (<ref>)) has a more generic and complex structure.
ยง.ยง Mean chi-squared
We consider the sum of \chi^2 associated with the deviation of data vectors in individual samples from the estimate of the average:
\sum_{i=1}^{n_S}\sum_{a,b=1}^{N_{\rm bins}} \left(x_{a,i} - \bar{x}_a\right)\Psi_{0,ab}\left(x_{b,i} - \bar{x}_b\right) = (n_S-1)\sum_{a,b=1}^{N_{\rm bins}}\Psi_{0,ab}\, X_{ab} = (n_S-1)\,{\rm tr}\left(\Psi_0 X\right).
Remembering Eq. (<ref>), we can rewrite the LHS as
\sum_{i=1}^{n_S}\sum_{a,b=1}^{N_{\rm bins}} \left(y_{a,i} - \bar{y}_a\right)\delta_{ab}\left(y_{b,i} - \bar{y}_b\right) = \sum_{a=1}^{N_{\rm bins}}\sum_{i=1}^{n_S}\left(y_{a,i} - \bar{y}_a\right)^2 \sim \sum_{a=1}^{N_{\rm bins}}\chi^2(n_S-1) \sim \chi^2\left[N_{\rm bins}\times(n_S-1)\right].
Therefore the corresponding reduced \chi^2 is
\chi^2_{\rm red} = \frac{1}{N_{\rm bins}}\,{\rm tr}\left(\Psi_0 X\right).
ยง.ยง Validation of means and standard deviations
To check the theoretical results from the previous sections, we have performed a quick Monte-Carlo validation.
10,000 batches of 999 samples having 45 bins each have been generated.
For simplicity, we have taken the true covariance and precision to be unity matrices C_0=ฮจ_0=๐.
This should not affect the results, save for numerical instabilities in matrix operations.
Full 45-bin sample covariance and precision matrices have been estimated in each batch.
Then, 25 bins were selected and projected into 5 quantities, for simplicity using 5 random orthonormal vectors as parameter derivatives.
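A condensed version of this Monte-Carlo check might look as follows; we use far fewer batches than the 10,000 quoted above to keep the runtime short, and only the D_{\rm KL} metric is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_s, n_batches = 45, 999, 200   # fewer batches than the 10,000 in the text

dkl = []
for _ in range(n_batches):
    x = rng.standard_normal((n_s, n_bins))        # true covariance = identity
    sample_cov = np.cov(x, rowvar=False, ddof=1)  # unbiased sample estimator
    m = sample_cov                                 # Psi_0 = I, so Psi_0 @ X = X
    sign, logdet = np.linalg.slogdet(m)
    dkl.append(0.5 * (np.trace(m) - n_bins - logdet))

print(np.mean(dkl), np.std(dkl))  # compare with ~0.52 and ~0.024 from the formulas
```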
Comparison between theoretical and sampled means and standard deviations are presented in Tableย <ref>.
Differences are most pronounced in D_ KL, but the disagreement is only in the second digit of standard deviation and fractions of standard deviation on the mean.
Therefore we report a close agreement, more than enough for the main part of the paper, where the scatter of results is significantly larger than these standard deviations.
We note that the results for D_ KL and R_ inv (Eq.ย (<ref>), (<ref>), (<ref>) and (<ref>)) are approximate and we expect meaningful deviations from them, especially as N_ bins increases, while the derivations for R_ inv^2 and ฯ^2_ red (Eq.ย (<ref>), (<ref>) and (<ref>)) are exact.
We have repeated this test with a realistic covariance matrix ( Average NG for pre-reconstruction) and derivatives of the observables with respect to the parameters (accordingly, for the BAO model before reconstruction) to confirm whether the comparison measures are indeed not affected.
We have obtained the same numbers as in the simpler test (Table <ref>), and close agreement of \sigma\left[\sigma(\alpha_{\rm BAO})\right]/\sigma(\alpha_{\rm BAO}) with \left[2(n_S-1)\right]^{-1/2} according to Eq. (<ref>).
ยง
^1Center for Astrophysics | Harvard & Smithsonian, 60 Garden St, Cambridge, MA 02138, USA
^2Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA
^3Department of Physics & Astronomy, University College London, Gower Street, London, WC1E 6BT, UK
^4Institute for Computational Cosmology, Department of Physics, Durham University, South Road, Durham DH1 3LE, UK
^5Department of Physics and Astronomy, The University of Utah, 115 South 1400 East, Salt Lake City, UT 84112, USA
^6Instituto de Fรญsica, Universidad Nacional Autรณnoma de Mรฉxico, Cd. de Mรฉxico C.P. 04510, Mรฉxico
^7Center for Cosmology and AstroParticle Physics, The Ohio State University, 191 West Woodruff Avenue, Columbus, OH 43210, USA
^8Institut de Fรญsica dโAltes Energies (IFAE), The Barcelona Institute of Science and Technology, Campus UAB, E-08193 Bellaterra Barcelona, Spain
^9Departamento de Fรญsica & Observatorio Astronรณmico, Universidad de los Andes, Cra. 1 No. 18A-10, Edificio Ip, CP 111711, Bogotรก, Colombia
^10Department of Astrophysical Sciences, Princeton University, Princeton NJ 08544, USA
^11Department of Physics, Southern Methodist University, 3215 Daniel Avenue, Dallas, TX 75275, USA
^12Instituciรณ Catalana de Recerca i Estudis Avanรงats, Passeig de Lluรญs Companys, 23, 08010 Barcelona, Spain
^13Department of Physics and Astronomy, Sejong University, Seoul, 143-747, Korea
^14Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, D-85748 Garching, Germany
^15Institute of Cosmology & Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth, PO1 3FX, UK
^16National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Rd., Chaoyang District, Beijing, 100012, PR China
^17Space Sciences Laboratory, University of California, Berkeley, 7 Gauss Way, Berkeley, CA 94720, USA
^18CIEMAT, Avenida Complutense 40, E-28040 Madrid, Spain
^19Korea Astronomy and Space Science Institute, 776, Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea
^20Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA
^21Department of Physics & Astronomy, Ohio University, Athens, OH 45701, USA
^22NSF's NOIRLab, 950 N. Cherry Ave., Tucson, AZ 85719, USA
^23Department of Astronomy, Tsinghua University, 30 Shuangqing Road, Haidian District, Beijing, 100190, China
^24Ecole Polytechnique Fรฉdรฉrale de Lausanne, CH-1015 Lausanne, Switzerland
|
http://arxiv.org/abs/2306.11714v1
|
20230620174230
|
Meta-Analysis of Transfer Learning for Segmentation of Brain Lesions
|
[
"Sovesh Mohapatra",
"Advait Gosai",
"Anant Shinde",
"Aleksei Rutkovskii",
"Sirisha Nouduri",
"Gottfried Schlaug"
] |
eess.IV
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] |
Meta-Analysis of Transfer Learning for Segmentation of Brain Lesions
Sovesh Mohapatra^1,2, Advait Gosai^2, Anant Shinde^3,5, Aleksei Rutkovskii^4, Sirisha Nouduri^5, Gottfried Schlaug^1,3,5 ([email protected])
^1Institute for Applied Life Sciences, University of Massachusetts, Amherst, 01003, MA
^2Manning College of Information and Computer Sciences, University of Massachusetts, Amherst, 01003, MA
^3Department of Biomedical Engineering, University of Massachusetts, Amherst, 01003, MA
^4Department of Electrical and Computer Engineering, University of Massachusetts, Amherst, 01003, MA
^5Department of Neurology, Baystate Medical Center, and UMass Chan Medical School - Baystate Campus, Springfield, 01107, MA
A major challenge in stroke research and stroke recovery predictions is the determination of a stroke lesion's extent and its impact on relevant brain systems. Manual segmentation of stroke lesions from 3D magnetic resonance (MR) imaging volumes, the current gold standard, is not only very time-consuming, but its accuracy also depends highly on the operator's experience. As a result, there is a need for a fully automated segmentation method that can efficiently and objectively measure lesion extent and the impact of each lesion, to predict impairment and recovery potential, which might be beneficial in clinical, translational, and research settings. We have implemented and tested a fully automatic method for stroke lesion segmentation which was developed using eight different 2D-model architectures trained via transfer learning (TL) and mixed data approaches. Additionally, the final prediction was made using a novel ensemble method involving stacking and an agreement window. Our novel method was evaluated on a novel in-house dataset containing 22 T1w brain MR images, which were challenging from various perspectives, mostly because they included T1w MR images from the subacute stroke phase (which typically shows less well-defined T1 lesions) and the chronic stroke phase (which typically shows well-defined T1 lesions). Cross-validation results indicate that our new method can automatically segment lesions quickly and with high accuracy compared to ground truth. In addition to segmentation, we provide lesion volume and weighted lesion load of relevant brain systems based on the lesions' overlap with a canonical structural motor system that stretches from the cortical motor region to the lowest end of the brain stem. Such a combination of an automatically determined lesion with its impact on a relevant brain system provides a fast and objective, user-experience-independent functional understanding of a lesion's extent and impact.
July 31, 2023
ยง INTRODUCTION
Strokes are a leading cause of long-term disability and mortality, impacting millions of individuals annually around the world <cit.>. In the United States alone, around 795,000 people experience either a new or a recurrent stroke each year, with approximately 610,000 being first-time occurrences and 185,000 being recurrent attacks <cit.>. The identification and segmentation of a stroke lesion play an important role in accurate diagnosis, determination of etiology, prognosis, and treatment options <cit.>. With extensive and long-term training, clinical and translational professionals can define the extent of a lesion through visual inspection; however, accurate manual segmentation and determining the impact of a lesion on relevant and eloquent regions of the brain is time-consuming, expensive, and can be error-prone due to the expertise necessary for the visual analysis of images and to keep the inter-observer variability to a minimum <cit.>. Thus, developing user-independent methods for rapid determination of any lesion, in particular lesions in both the subacute and chronic phase of a stroke, becomes an important endeavor in clinical and research settings, allows the stratification of patients into clinical trials, and enhances our understanding of a lesion's impact on a patient's functional impairment and their stroke outcome potential <cit.>.
Transfer learning (TL) uses pre-trained models to extract significant information and features from one domain and apply them to a related but distinct separate domain. In the context of medical imaging, TL has emerged as an effective approach to deal with issues including the lack of labeled data and the need for robust feature extraction <cit.>. Researchers can initiate their work with models that have already been pre-trained on extensive, general-purpose datasets, possessing a feature set capable of identifying and capturing significant attributes. The models can then be fine-tuned for the particular medical imaging needs at hand or the particular imaging dataset requiring segmentation <cit.>. Intermediate task training (ImTT), a subset of TL, involves training a model on a series of related tasks before fine-tuning on the target task <cit.>. Recent studies have demonstrated the efficacy of TL and ImTT in various medical imaging tasks, including lesion segmentation <cit.>, tumor classification <cit.>, and tissue compartments identification <cit.>.
To explore the potential of TL in 2D based deep learning models, we conducted a comprehensive meta-analysis focusing on the index lesion segmentation in stroke patients that had imaging studies obtained in the subacute and chronic phase after an index stroke event had occurred. The emphasis on the index lesion in our paper was done with the understanding that there are other abnormal regions in a brain with a stroke that might be due to chronic small vessel disease related to the presence of long-standing vascular risk factors (e.g., smoking, high blood pressure, hyperlipidemia, diabetes, etc) or due to secondary degeneration of white matter tracts induced by the index lesion and more visible in the subacute and chronic stroke phase. Our study compared the performance of models utilizing ImTT with those using mixed data. Furthermore, we explored the effectiveness of ensembling fine-tuned models trained on mixed data by incorporating a binary overlap (stacking) and a novel window approach. Our study also emphasized the fact that through these techniques, 2D models can achieve comparable accuracy to hand-drawn lesion masks while requiring fewer computational resources.
ยง METHODOLOGY
ยง.ยง Selection of MRI data
We examined 602 anonymized MR brain images and their corresponding hand-drawn lesion maps, which resulted from a consensus of multiple experienced raters, curated from the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset <cit.> (547 high-resolution T1w MR brain images) and our anonymized and de-identified in-house dataset (55 high-resolution T1w MR brain images, either 1x1x1 acquisition voxel size or resampled to 1x1x1 voxel size). This encompassed a wide range of lesions from acute, subacute, and chronic stroke patients (as defined by <cit.>) in various brain regions, as defined in Section <ref>.
ยง.ยง Preprocessing of MRI data
Several preprocessing measures were conducted to prepare the MRI data for further analysis. These steps were necessary to standardize the data and enhance the performance of our deep learning models. Firstly, all MRI images were resampled to a consistent size of 256x256x128 voxels to ensure uniformity across the dataset. The skull, meninges, and ventricles were then removed/excluded from the images. Subsequently, the images and their corresponding masks were sliced in the axial plane to generate appropriate input to our 2D models.
To enhance the robustness of our model, various image augmentation techniques were applied to generate multiple variants of the original images. This approach increases the dataset size and introduces variability, which ultimately improves the model's ability to generalize and perform well on unseen data. The augmentation process was carried out using a custom Python function leveraging the ImageDataGenerator class from the Keras library <cit.>. Finally, all images and masks were normalized with the mask values being binarized.
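A sketch of such an augmentation step is shown below; the specific transform ranges and the shared-seed pairing of image and mask generators are our assumptions about the custom function, not its actual implementation.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def augment_slice_pairs(images, masks, n_augment=4, seed=7):
    """Generate geometric augmentations of 2D slices and their lesion masks.

    images, masks : arrays of shape (n_slices, 256, 256, 1); masks are binary.
    The same transform parameters (via a shared seed) are applied to both
    so that image/mask pairs stay aligned.
    """
    args = dict(rotation_range=10, width_shift_range=0.05,
                height_shift_range=0.05, horizontal_flip=True,
                fill_mode="nearest")
    img_gen = ImageDataGenerator(**args)
    msk_gen = ImageDataGenerator(**args)
    img_flow = img_gen.flow(images, batch_size=len(images), seed=seed, shuffle=False)
    msk_flow = msk_gen.flow(masks, batch_size=len(masks), seed=seed, shuffle=False)

    aug_imgs, aug_msks = [], []
    for _ in range(n_augment):
        aug_imgs.append(next(img_flow))
        aug_msks.append((next(msk_flow) > 0.5).astype(np.float32))  # re-binarize masks
    return np.concatenate(aug_imgs), np.concatenate(aug_msks)
```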
ยง.ยง Splitting the whole brain into multiple regions
Based on our comprehensive understanding of anatomy, major vascular systems, and clinical knowledge regarding a stroke lesionโs impact of particular brain systems (the motor system in our dataset), we divided the whole brain into four different โsuper-regionsโ, which were gross-anatomically defined. Dividing the brain into these major super-regions allowed us to identify which model architectures learned and captured different unique complexities associated with each region.
Super Region 1 (R1) consists of the inferior frontal region and sub-cortical deep gray matter regions of the brain (e.g., basal ganglia, thalamus, insular cortex)
Super Region 2 (R2) consists of the cerebellum and three parts of the brainstem (medulla, pons, and midbrain regions).
Super Region 3 (R3) consists of the superior part of the frontal lobe and the entire occipital lobe and parietal lobe.
Super Region 4 (R4) consists of the temporal lobe, limbic lobe (e.g., cingulate gyrus), and fronto-mesial part of the brain.
All the sub-regions in each of the super-regions were selected from the Talairach atlas <cit.>, as shown in Figure <ref>. The sub-regions were added and multiplied with consecutive intensity values using the FSL software. As shown in Section <ref>, the models' performance was improved by this approach when ensemble methods were applied.
ยง.ยง Model training
We conducted an extensive study by training and evaluating eight distinct model architectures to enhance the analysis. These architectures encompass a wide variety of approaches, including U-Net, U-Net++, Residual U-Net (ResUNet), Residual Network (ResNet), VGG U-Net (VGGUNet), V-Net, Fully Convolutional DenseNet (FC DenseNet), and Attention U-Net (Att UNet). All models were trained on a high-performance computer equipped with 128 gigabytes of RAM, an NVIDIA GeForce RTX 3080 GPU with 12 gigabytes of memory, and an AMD Threadripper Pro 5955WX Processor, running on the Ubuntu 20.04 operating system. Detailed information about the hyperparameters and model architectures can be found in the supporting information (SI) section of our manuscript.
Our meta-analysis consisted of primarily comparing two different model training and fine-tuning approaches: training and validation on mixed data and training using TL with ImTT. Our aim was to compare the efficacy of both approaches for the specific task of subacute/chronic stroke lesion segmentation.
ยง.ยง.ยง Mixed data
We trained, validated and tested the models on slices of all 547 MR images along with their corresponding hand-drawn lesion masks from the ATLAS v2.0 dataset and 33 of the image mask pairs from our in-house dataset. In this case, the training set was made up of 80% of all 2D slices of all MR image datasets, while the validation set (used for hyperparameter tuning) and testing set each contained 10%. Finally, the models were evaluated on the independent test set consisting of the remaining 22 MR brain images from the in-house dataset.
ยง.ยง.ยง Intermediate Task Training
We initially trained the models on slices of 447 MR images along with their corresponding hand-drawn lesion masks curated from the ATLAS v2.0 dataset. Next, we divided the remaining 100 MR images from the ATLAS into four groups based on the lesion locations, as depicted in Section <ref>. We then fine-tuned the models with the brains and corresponding hand-drawn lesion masks focusing on specific super-regions, further fine-tuning them on our target task which is segmenting our in-house dataset by using 33 of its T1w MR images. These models too were evaluated on the independent test set as described previously. The suffix _FT indicates models trained using ImTT.
ยง.ยง Ensembles of trained models
In addition to using two different training approaches, we examined different ensembles of these eight unique model architectures to evaluate whether a collective strategy would produce superior results compared to relying on a single model after applying the binary mask to the predicted lesion masks. We implemented two ensemble techniques as detailed below.
ยง.ยง.ยง Stacking
The stacking technique employed in this study utilized a binary overlap method. The final mask for each image was determined by considering the overlapping voxels resulting from stacking individual model predictions. To construct the stacks, we chose the four top-performing models per super-region, and analyzed all of their possible combinations to derive the best ensemble. The stacking method has been illustrated in Figure <ref>.
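Interpreting "overlapping voxels" as the intersection of the selected models' predictions, the stacking step could be sketched as follows (function name ours).

```python
import numpy as np

def stack_predictions(pred_masks):
    """Binary-overlap stacking: keep only voxels predicted as lesion by
    every model in the chosen combination.

    pred_masks : list of binary 3D arrays (one per model), all the same shape.
    """
    stacked = np.stack(pred_masks, axis=0)
    return np.all(stacked > 0, axis=0).astype(np.uint8)
```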
ยง.ยง.ยง Agreement Window
The agreement window ensemble method is a novel approach developed for this study that combines a window kernel with stacking. The 3D window kernel was designed to retain the union of predicted voxels if their stacking overlap was greater than a certain threshold, and to discard all voxels in the window if the minimum overlap was not satisfied. This window was convolved over the chosen stack of individual model predictions to generate the final outcome. While binary stacking generally resulted in a smaller, less descriptive mask, this method allowed us to eliminate predicted noise from individual models as well as retain a more accurate mask shape around the targeted index lesion area. Different values of the window size and overlap threshold were tested to identify the optimal parameters. The agreement window method is illustrated in Figure <ref>.
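A hedged sketch of this idea is given below: the stacked predictions are scanned with a 3D window, and within each window the union of predicted voxels is kept only if the average agreement among models reaches the threshold. The tiling with non-overlapping windows and the default window size and threshold are simplifying assumptions for illustration, not the exact implementation.

```python
# Sketch of the agreement-window ensemble on a stack of binary predictions.
import numpy as np

def agreement_window(preds, window=(3, 3, 3), min_overlap=0.5):
    stacked = np.stack(preds, axis=0)            # (n_models, X, Y, Z)
    votes = stacked.sum(axis=0)                  # per-voxel number of model votes
    union = (votes > 0).astype(np.uint8)         # union of all model predictions
    out = np.zeros_like(union)
    wx, wy, wz = window
    X, Y, Z = union.shape
    for x in range(0, X, wx):
        for y in range(0, Y, wy):
            for z in range(0, Z, wz):
                u = union[x:x+wx, y:y+wy, z:z+wz]
                if u.sum() == 0:
                    continue                      # nothing predicted in this window
                v = votes[x:x+wx, y:y+wy, z:z+wz]
                # Average fraction of models agreeing on the predicted voxels.
                agreement = v[u > 0].mean() / len(preds)
                if agreement >= min_overlap:
                    out[x:x+wx, y:y+wy, z:z+wz] = u   # keep the union in this window
                # else: discard all voxels in this window (out stays zero)
    return out
```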
ยง.ยง Evaluation metrics
We assessed our results using four distinct metrics, the Dice coefficient, Jaccard index (also known as IoU), precision, and recall, to gain insight into the performance of the models and ensemble methods. Detailed definitions of the chosen evaluation metrics and their relevance can be found in the SI.
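For completeness, reference implementations of the four metrics for a pair of binary masks are shown below (prediction `p` and ground truth `g`); the small epsilon guards against division by zero for empty masks.

```python
import numpy as np

def segmentation_metrics(p, g, eps=1e-8):
    p, g = p.astype(bool), g.astype(bool)
    tp = np.logical_and(p, g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)              # Jaccard index
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return {"dice": dice, "iou": iou, "precision": precision, "recall": recall}
```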
ยง.ยง Evaluating lesion impact on motor regions
We employed a systematic approach to assess the impact of brain lesions in stroke patients on relevant or eloquent canonical brain structures, such as the corticospinal tract (CST), connecting the motor cortex with alpha motor neurons in the spinal cord. The canonical CST should be understood as a surrogate structural marker of the extent, intersubject variability, and anatomical variability of a relevant motor system <cit.>. Based on the CST lesion load value for either side of the brain <cit.>, we classified the impact into three categories of increasing lesion load: Small, Medium, and Large.
* Large: Indicates a high lesion load, corresponding to a severe neurological impairment.
* Medium: Indicates a medium lesion load, corresponding to a moderate neurological impairment.
* Small: Indicates a small lesion load, corresponding to a mild neurological impairment.
This evaluation parameter not only helps in understanding the anatomical impacts of lesions on a relevant brain system but also examines the reliability of the model predictions as detailed in Section <ref>.
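A simplified sketch of such a lesion-load computation and categorization is shown below. The canonical CST mask, the definition of lesion load as the fraction of CST voxels overlapped by the lesion, and the category cut-offs are illustrative assumptions, not the values used in the study.

```python
# Hedged sketch: CST lesion load as the overlapped fraction of a canonical CST mask,
# followed by a Small/Medium/Large categorization with hypothetical thresholds.
import numpy as np
import nibabel as nib

def cst_lesion_load(lesion_file, cst_file):
    lesion = nib.load(lesion_file).get_fdata() > 0
    cst = nib.load(cst_file).get_fdata() > 0          # canonical CST mask (placeholder file)
    return np.logical_and(lesion, cst).sum() / (cst.sum() + 1e-8)

def categorize(load, small_cut=0.05, medium_cut=0.25):  # hypothetical cut-offs
    if load < small_cut:
        return "Small"
    if load < medium_cut:
        return "Medium"
    return "Large"
```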
ยง RESULTS
ยง.ยง Performance assessment of 2D-based DL models
Based on our evaluation of the models on an independent set of 22 T1w MR sequences from our in-house dataset and the chosen evaluation metrics, we observed that the ImTT training approach is more effective than the mixed-data approach, particularly when considering the entire brain. Additionally, the evaluation of single-model predictions revealed that the Att-UNet_FT model outperforms all other models in terms of the DSC, IoU, and ER LL metrics, while the ResNet_FT model has the lowest error rate in predicting the lesion volume, as shown in Table <ref>.
ยง.ยง Performance assessment of model prediction in super-region-wise and ensemble methods
In comparison to the whole-brain analysis, we identified four anatomically distinct brain super-regions and evaluated the performance of individual models and ensemble methods in calculating two critical pieces of lesion information: lesion volume and lesion load.
Following the labeling of stroke locations in our training data with their corresponding brain super-regions, we selected the top four performing models for each super-region and applied the two ensemble methods described in Section <ref>. In our analysis, we combined super-regions 1 and 4, as we observed that all lesions present in super-region 1 also overlapped with super-region 4.
Table <ref> presents the super-region-wise individual model and ensemble performances. The agreement window ensemble method consistently produces lower error rates than the stacking method for all super-regions, and is outperformed only by the Att UNet_FT model for strokes present in super-region 3. Specifically, when evaluated on lesion load, the agreement window predictions show a significantly lower average error rate across all super-regions than the individual models and binary stacking.
Table <ref> presents a comprehensive comparison of the top-performing model(s) for all of the different whole-brain and super-region-wise ensemble methods described in this paper. The super-region-specific predictions of the top performing models from each super-region (as shown in Table <ref>) were aggregated to generate a final super-region-wise lesion prediction for a target brain.
Examining the results, we notice that super-region-wise ensembles surpass the performance of whole-brain predictions by individual models and even whole-brain ensembles. Furthermore, the effectiveness of the agreement window method is seen by its superior performance in both the whole-brain and super-region-wise approaches, with the super-region-wise agreement window method generating the best lesion volume and lesion load predictions.
ยง.ยง Performance assessment of final predictions using lesion load
Figure <ref> illustrates the comparison between the three lesion categories of hand-drawn lesion masks and predicted lesion masks, based on their respective CST lesion load values. Out of 22 predictions, 18 fall into the same category as the corresponding hand-drawn masks, and most instances have a notably low error rate. The overall error rates for the models used are presented in Table <ref>.
Figure <ref> shows three outliers, i.e., inaccurate predictions, in the final outcomes. In all three cases, the model overestimated the CST lesion load values, incorrectly moving one prediction from the small to the medium range and two from the medium to the large range. This trend signifies an overestimation of the lesion by the model, which is preferable to an underestimation, as the overestimation ensures that true lesions are not left out.
ยง.ยง Visual comparison
Figure <ref> presents a comparative analysis of prediction results from the two model training approaches discussed in Section <ref>. Panels A, B, and C show three separate predictions generated by three different model architectures on brain MR images in the independent validation set. It is evident that the mixed data approach leads to some false positives in the models' predictions, which are effectively eliminated when employing the ImTT training approach.
Figure <ref> presents a comparative analysis of the prediction results from various approaches applied to the independent validation set. For a more effective comparison, the figure displays predictions corresponding to lesions in two separate super-regions, along with their hand-drawn ground truth lesion masks. In this instance, the lesions are present in super-regions 2 and 4. In accordance with Table <ref>, the ensembled models are Att UNet and VNet for super-region 2, and FC DenseNet_FT and UNet++ for super-region 4. The agreement window has a dimension of 2×2×2 voxels and a minimum overlap threshold of 75% for the super-region 4 ensemble, compared to a dimension of 3×3×3 voxels with a minimum overlap threshold of 50% for the super-region 2 ensemble.
It can be observed that the agreement window ensemble method tends to marginally overpredict the lesion area. However, it effectively predicts the targeted lesion area and exhibits almost no false positives outside the lesion's location, compared to the other approaches.
ยง DISCUSSION AND CONCLUSION
ยง.ยง Interpretability of results
Our study not only offers valuable insights into the comparative performance of ensemble methods when used in conjunction with TL or mixed-data training approaches, but also provides a basis for comparison with other recent works using the ATLAS dataset <cit.>. The primary goal of our research was to demonstrate that 2D-based models, when combined with TL and ensemble techniques, can quickly, and independently of user experience, achieve accuracy levels comparable to hand-drawn (ground truth) lesion masks.
In recent works by <cit.>, in which the ATLAS dataset is utilized for model training, the achieved Dice Similarity Coefficients (DSC) on the validation sets are 0.723, 0.592, and 0.650, respectively. In comparison, our best-performing approach using the ATLAS dataset yields a DSC of 0.736 on the independent dataset, demonstrating the effectiveness of our method in segmenting lesions in T1w MR sequences. Furthermore, our study provides insights into lesion volume and the lesion load of relevant brain structures, contributing to a more comprehensive understanding of each lesion's impact.
As we observe in Tables <ref> and <ref>, the improvement seen with super-region-wise prediction can be attributed to the adaptability, reduced complexity, and noise reduction capabilities of specific model architectures, which lead to more effective segmentation in particular super-regions for some models. Furthermore, the empirically derived parameters ensure that our ensemble methods are tailored to the main aim of our current approach, namely predicting the index stroke lesion.
However, our individual models and the stacking ensemble have also been able to detect more widespread small vessel disease lesions, as seen in the highlighted circles in Figures <ref>A and <ref>C. A tailored adjustment of the parameters of the combined stacking and agreement window algorithm would allow us to detect small vessel disease (SVD) lesions, in particular SVD lesions leading to T1-hypodensities <cit.>, as well as the degeneration of long-range tracts as a secondary effect of the index lesion; this degeneration can also decrease the T1 signal, but the decrease follows the anatomical course of the descending corticospinal tract (CST) <cit.>.
ยง.ยง Advantages and limitations in ensemble methods
Three important innovations were made in our study. Along with applying TL, we implemented two ensemble methods: the stacking method and the stacking method combined with the window algorithm. A primary advantage of employing the stacking method lies in its ability to effectively reduce a significant number of false positives that frequently occur when relying solely on a single model's predictions. This improvement in accuracy can be attributed to the integration of predictions from multiple models, which consequently increases the likelihood of identifying true index lesion areas. Furthermore, the integration of the window algorithm with stacking provides an additional advantage by not only considering the overlapping predictions but also detecting edges of the lesions that might have been excluded otherwise.
However, ensemble methods also come with some limitations. The stacking method tends to underpredict lesions in some cases due to the required overlap, which may lead to missed detections and a potentially incomplete assessment of the targeted lesion area. Conversely, the combined stacking and window algorithm tends to overpredict the lesions by approximately 10% in other cases. While this overprediction can result in an inflated estimation of lesion presence, we accept this trade-off in light of the overall enhancement in model performance. Furthermore, the overprediction might be due to an increased sensitivity of the combined stacking and agreement window algorithm in detecting secondary lesions (e.g., degeneration of the corticospinal tract distal to an index lesion, which might also lead to a decrease in signal in T1w images) as well as small vessel disease lesions in areas of the brain that are unrelated to the index stroke.
ยง.ยง Clinical implications and potential applications
Our findings have considerable implications for clinical practice and translational research, particularly in improving the accuracy and speed of diagnosis, prognosis, and treatment planning for acute, subacute, and chronic stroke patients. By adopting these methods, we can improve the detection and characterization of lesions, leading to more informed decision-making, better prediction of patient outcomes, and fast and accurate stratification of stroke patients based on lesion load data. One critical aspect of the management and treatment of stroke patients is understanding the impact of a lesion and correlating it with patients' functional impairments and with their possible gains from standard rehabilitation therapy as well as from experimental therapies. Our approach allows for the rapid and precise calculation of lesion volume and of the lesion load on relevant brain systems, e.g., the CST as a surrogate structure of the motor system. This enables clinicians to evaluate the severity of the stroke, predict the likelihood of recovery, determine post hoc why recovery failed, develop targeted rehabilitation strategies to reduce the effects of stroke-related disabilities, and stratify patients for enrollment in clinical trials.
ยง.ยง Future work
Future research will concentrate on automating lesion segmentation and rapidly measuring the impact of a lesion on relevant brain systems using machine learning approaches. Furthermore, exploring personalized treatment strategies based on individual lesion characteristics, patient demographics (e.g., biological age versus calendar age), small vessel disease lesion load, surrogate markers of brain health, and genetic factors could result in more targeted and efficient therapeutic interventions. Longitudinal studies as well as multimodal MR sequences could be pursued for fine-grained analysis of lesion progression over time.
ยง.ยง Conclusion
Our study underscores the potential and relevance of applying TL to fine-tune large pre-trained 2D-based models on specific, targeted data. This approach facilitates precise segmentation of lesions in both subacute and chronic stroke patients.
The models we built for this study take into consideration unique aspects of the task at hand, focusing on lesion load and lesion volume.
A comparative analysis of different 2D model architectures was also conducted, providing valuable insights into their respective strengths and weaknesses. Our findings elucidate how variations in model architecture can influence the learning patterns of the models in different regions of the brain. This implies that different models, owing to their architectural differences, can learn differently from the specific features of various brain regions.
In essence, our research not only substantiates the effectiveness of 2D-based models when fine-tuned with Transfer Learning and used in conjunction with ensemble methods, but also emphasizes the importance of choosing the appropriate model architecture for particular brain regions and task-specific criteria. Our work, therefore, offers meaningful contributions to the task of lesion segmentation in stroke patients and may pave the way for further improvements in stroke diagnosis and prognosis.
ยง.ยง Supplement information
All the tables and definitions mentioned in the manuscript are available in the supplementary information at https://shorturl.at/nqIJK.
ยง.ยง Acknowledgement
GS and AS were partly supported by a grant from NIMH (BRAIN Initiative) (7R01MH111874-05); the data analysis and computing expenses were supported by an in-kind financial contribution from Brainify, LLC.
ยง.ยง Author contributions
Conceptualization: Sovesh Mohapatra, Advait Gosai, Gottfried Schlaug
Data curation: Anant Shinde, Sirisha Nouduri, Gottfried Schlaug
Formal analysis: Sovesh Mohapatra, Advait Gosai, Aleksei Rutkovskii
Funding acquisition: Anant Shinde, Gottfried Schlaug
Investigation: Sovesh Mohapatra, Advait Gosai, Aleksei Rutkovskii
Methodology: Sovesh Mohapatra, Advait Gosai, Gottfried Schlaug
Project administration: Sovesh Mohapatra, Gottfried Schlaug
Resources: Gottfried Schlaug
Supervision: Gottfried Schlaug
Clinical Validation: Sirisha Nouduri, Anant Shinde, Gottfried Schlaug
Writing - original draft: Sovesh Mohapatra, Gottfried Schlaug
Writing - review and editing: Sovesh Mohapatra, Advait Gosai, Gottfried Schlaug
|
http://arxiv.org/abs/2306.09828v1
|
20230616131302
|
Version 2.0 -- cashocs: A Computational, Adjoint-Based Shape Optimization and Optimal Control Software
|
[
"Sebastian Blauth"
] |
math.OC
|
[
"math.OC"
] |
Sebastian Blauth (corresponding author)
[email protected]
Fraunhofer ITWM, Kaiserslautern, Germany
In this paper, we present version 2.0 of cashocs. Our software automates the solution of PDE constrained optimization problems for design optimization and optimal control. Since its inception, many new features and useful tools have been added to cashocs, making it even more flexible and efficient. The most significant additions are a framework for space mapping, the ability to solve topology optimization problems with a level-set approach, the support for parallelism via MPI, and the ability to handle additional (state) constraints. In this software update, we describe the key additions to cashocs, which is now even better-suited for solving complex PDE constrained optimization problems.
PDE Constrained Optimization Shape Optimization Topology Optimization Space Mapping
[2020] 65K05 49M05 49M41 35Q93 49Q10
ยง REFERS TO
Sebastian Blauth, cashocs: A Computational, Adjoint-Based Shape Optimization and Optimal Control Software, SoftwareX, 13 (2021), p. 100646, <https://doi.org/10.1016/j.softx.2020.100646>
ยง METADATA
ยง DESCRIPTION OF THE SOFTWARE-UPDATE
This article serves as an update to our software cashocs <cit.> and discusses some of the key additions, changes, and improvements to the software. We describe both new functionalities which have been added to cashocs to extend its capabilities and structural improvements which make the software more efficient and easier to apply.
ยง NEW FUNCTIONALITIES
Since its inception, cashocs has received many new functionalities, which we present in the following. One of the most practical additions to cashocs is the space mapping methods, which enable the solution of highly complex problems in the following way: the space mapping technique utilizes a model hierarchy consisting of a fine (detailed, complex) and a coarse (approximate, cheap) model. It provides an efficient way to optimize the fine model by successively optimizing and correcting a sequence of coarse model approximations. In particular, there is no need for a direct optimization of the fine model. This makes the technique particularly interesting for industrial applications, where commercial solvers without optimization capabilities are often used for simulation. For more details on the space mapping technique, we refer the reader to, e.g., <cit.>.
Whereas space mapping techniques for optimal control problems have already been investigated in the literature (see, e.g., <cit.>), space mapping methods for shape optimization have only been introduced very recently in <cit.>. Moreover, to the best of our knowledge, cashocs is the only software providing a framework for implementing and solving general space mapping problems in the context of PDE constrained optimization for both shape optimization and optimal control problems.
As a reference, we briefly consider a problem from <cit.>, which is solved with the space mapping methods for shape optimization and is depicted in Figure <ref>. The aim of the optimization problem is to achieve a uniform flow distribution over three outlet pipes. For the fine model, the Navier-Stokes equations with a Reynolds number of 1000 are used and solved with Ansys Fluent <cit.>, whereas the coarse model is given by the linear Stokes system and is solved on a coarser grid with FEniCS <cit.>. The space mapping method converges rapidly, in about five iterations, and the optimized geometry yields a nearly perfect uniform flow distribution. For more details, we refer the reader to <cit.>.
Another major new feature for cashocs is its support for solving topology optimization problems with a level-set method. Our software provides several optimization algorithms for such problems, including state-of-the-art algorithms for topology optimization <cit.> as well as novel quasi-Newton methods for topology optimization proposed in <cit.>. Due to the inherent difficulty of deriving topological derivatives, the latter cannot yet be computed with automatic differentiation methods but have to be supplied by the user. However, the automatic derivation of adjoint systems is already implemented in cashocs. This feature extends the applicability of cashocs to the field of topology optimization with topological sensitivity information.
Further, cashocs has received methods for the automatic treatment of additional constraints for optimization problems, such as state or control constraints. In particular, we have implemented a quadratic penalty and an augmented Lagrangian method (see, e.g., <cit.>) for dealing with constraints other than PDEs. These methods are applicable to a wide variety of constraints, including state constraints, which opens up new problem classes for cashocs to solve. An overview of all types of optimization problems that can be solved with cashocs is given in Figure <ref>.
The new version of cashocs also comes with the ability to automatically scale individual terms of the cost functional based on their magnitude for the initial iteration. This makes it easier for users to weigh different parts of the cost functional according to their needs. Moreover, this also facilitates the solution of multi-criteria optimization problems with scalarization methods, see, e.g., <cit.>.
With the new version of cashocs, users now have the possibility to define custom scalar products for the optimization, which is particularly useful for shape optimization, where the choice of the scalar product for computing the shape gradient is especially important. Moreover, the updated version of cashocs also supports computing the shape gradient with the p-Laplace equations, based on the approach presented in <cit.>. This approach may preserve the mesh quality and can yield better optimized geometries for certain problems.
Finally, we have also added more sophisticated line search methods to cashocs, which make use of polynomial models to efficiently compute a suitable stepsize. During the line search, a quadratic or cubic model of the cost functional along the search direction is formed and minimized to obtain suitable stepsizes in fewer iterations compared to a classical Armijo backtracking search (see, e.g., <cit.>).
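To make the idea concrete, the following is a generic sketch of an Armijo backtracking line search that uses a quadratic interpolation model to propose the next trial stepsize, in the spirit of standard optimization references. It is an illustration of the technique, not the actual cashocs implementation, and all names are placeholders.

```python
# Generic quadratic-interpolation backtracking line search (illustrative sketch).
def quadratic_backtracking(phi, phi0, dphi0, alpha0=1.0, c1=1e-4, max_iter=20):
    """phi(a) evaluates the cost along the search direction; phi0 = phi(0), dphi0 = phi'(0) < 0."""
    alpha = alpha0
    for _ in range(max_iter):
        val = phi(alpha)
        if val <= phi0 + c1 * alpha * dphi0:          # Armijo condition satisfied
            return alpha
        # Minimizer of the quadratic model interpolating phi(0), phi'(0), and phi(alpha).
        denom = 2.0 * (val - phi0 - dphi0 * alpha)
        alpha_new = -dphi0 * alpha**2 / denom
        # Safeguard the step so it neither collapses nor stagnates.
        alpha = min(max(alpha_new, 0.1 * alpha), 0.5 * alpha)
    return alpha
```

Compared to plain Armijo halving, the interpolation step typically reaches an acceptable stepsize in fewer cost functional evaluations, which matters when each evaluation requires a PDE solve.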
ยง STRUCTURAL IMPROVEMENTS
In addition to the new features described above, there have also been some major structural improvements to cashocs which we discuss in the following.
The most important structural improvement is the support for parallelism with MPI. In particular, most of the PDE constrained optimization problems that can be solved in serial can now also be solved in parallel without the need for any code modifications. Therefore, cashocs is now able to be run on high-performance-computing systems and, thus, can solve huge optimization problems which cannot be treated in serial. The MPI implementation builds on and leverages the MPI support of FEniCS, so that there are only minimal code changes necessary on the user's side.
Another structural improvement to cashocs is a new remeshing workflow which overcomes the limitations of the previous one. The benefit of the new workflow is that remeshing can now also be applied to the space mapping and constraint handling problems, which was not possible before. As remeshing is of particular importance for solving shape optimization problems, this change greatly increases the applicability of cashocs. However, users must now follow a slightly different, but still straightforward, syntax to solve shape optimization problems with remeshing.
ยง CONCLUSION AND OUTLOOK
With the update to version 2.0, our software cashocs has gained significant new features, such as a space mapping framework, support for parallelism via MPI, and the ability to treat topology optimization problems. These additions make cashocs even more flexible and relevant for solving PDE constrained optimization problems for practical and industrial applications.
ยง REFERENCES
|
http://arxiv.org/abs/2306.04387v2
|
20230607123537
|
M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning
|
[
"Lei Li",
"Yuwei Yin",
"Shicheng Li",
"Liang Chen",
"Peiyi Wang",
"Shuhuai Ren",
"Mukai Li",
"Yazheng Yang",
"Jingjing Xu",
"Xu Sun",
"Lingpeng Kong",
"Qi Liu"
] |
cs.CV
|
[
"cs.CV",
"cs.CL"
] |
July 31, 2023
=================
Instruction tuning has significantly advanced large language models (LLMs) such as ChatGPT, enabling them to align with human instructions across diverse tasks. However, progress in open vision-language models (VLMs) has been limited due to the scarcity of high-quality instruction datasets. To tackle this challenge and promote research in the vision-language field, we introduce the Multi-Modal, Multilingual Instruction Tuning (M^3IT) dataset, designed to optimize VLM alignment with human instructions.
Our M^3IT dataset comprises 40 carefully curated datasets, including 2.4 million instances and 400 manually written task instructions, reformatted into a vision-to-text structure. Key tasks are translated into 80 languages with an advanced translation system, ensuring broader accessibility. M^3IT surpasses previous datasets regarding task coverage, instruction number and instance scale.
Moreover, we develop Ying-VLM, a VLM model trained on our M^3IT dataset, showcasing its potential to answer complex questions requiring world knowledge, generalize to unseen video tasks, and comprehend unseen instructions in Chinese. We have open-sourced the dataset to encourage further research.[Our dataset is available at <https://huggingface.co/datasets/MMInstruction/M3IT>]
ยง INTRODUCTION
There has been a continuously increasing trend to develop intelligent assistants that can follow human instructionsย <cit.>.
In the natural language processing (NLP) field, instruction tuningย <cit.> is a success paradigm that leverages large-scale well-formatted instances to align large language models (LLMs) to human instructions.
By finetuning on instances with specific task descriptions, LLMs learn to follow the instruction to perform various tasks, and demonstrate strong generalization ability on unseen tasks
ย <cit.>.
Expanding beyond NLP, a general-purpose intelligent agent must encompass various modalities, such as vision, prompting recent efforts to investigate instruction tuning in vision-language domainsย <cit.>.
To develop powerful vision-language models (VLMs), it is essential to have a well-constructed dataset that encompasses diverse vision-language tasks and aligns with human instructions.
However, the instructional data supporting existing VLMs is either not publicly available (e.g., GPT-4) or offers limited task and language coverage (e.g., only tasks in English are considered). This scarcity of comprehensive datasets has impeded the progress of open vision-language models, highlighting the importance of multi-modal instruction tuning and the need for high-quality datasets.
In this paper, we aim to advance instruction tuning research in the multi-modal domain by introducing an open dataset M^3IT, a Multi-Modal Multilingual Instruction Tuning dataset, as an essential step towards building a versatile general-purpose assistant.
We build this dataset by converting existing datasets into a unified vision-to-text schema with four stages: (1) manual instruction writing, (2) dataset pre-processing, (3) careful quality check and (4) dataset translation for key tasks.
Our dataset encompasses a wide range of tasks, including classic image-text tasks such as image classification, visual question answering, and image captioning.
Video-related tasks, such as video question-answering, are also incorporated to ensure comprehensive coverage across multiple modalities.
We further integrate Chinese vision-language datasets with corresponding Chinese instructions.
The resulting dataset compiles 40 diverse tasks and 400 instructions.
Finally, key vision-language tasks are translated into 80 languages with a strong translation system, to support multilingual studies.
To evaluate the effectiveness of the proposed dataset, we develop a vision-language model, Ying-VLM, by integrating a strong vision encoder, BLIP-2ย <cit.> with a large language model, Ziya-13Bย <cit.>, derived from LLaMAย <cit.>.
Building on the successful approach of incorporating visual tokens as textual prompts in LLMsย <cit.>, we employ a two-stage training process: (1) the initial stage aligns vision features with text embeddings through image captioning on LAION400Mย <cit.>, and (2) the second stage enhances the model by conducting instruction tuning on selected tasks of our dataset.
Experimental results reveal that Ying-VLM surpasses strong baseline models in knowledgeable VQA tasks and exhibits improved generalization performance to unseen video and cross-lingual tasks.
Further analysis indicates that the improved performance corresponds to increased tasks for instruction tuning, while the diversity of instructions also affects outcomes.
This paper presents two key contributions:
(1) We introduce the open-source, large-scale Multi-modal, multilingual Instruction Tuning (M^3IT) dataset, designed to enable the development of general-purpose multi-modal agents.
(2) We develop Ying-VLM, a visual assistant that excels in knowledgeable VQA tasks, demonstrates strong generalization to unseen video QA and Chinese multi-modal tasks, and offers valuable insights for future research.
ยง RELATED WORK
Our work draws inspiration from recent language instruction tuning benchmarksย <cit.>, which have been proven effective for improving language models to obtain cross-task generalization abilityย <cit.>.
In this paper, we focus on exploring the instruction tuning paradigm from LLMs to multi-modal agents. Unlike text-only tasks, vision-language tasks generally have more diverse formats, which poses new challenges toward vision-language instruction tuning benchmarks.
To develop a general-purpose vision-language model, it is crucial to create high-quality multi-modal instruction tuning datasets encompassing diverse tasks, languages, and instructions.
Several studies have investigated multi-modal instruction tuning for VLMs.
LLaVAย <cit.> and MiniGPT-4ย <cit.> generate visual content-related dialog by incorporating image caption data into GPT-4/ChatGPT models.
MultiInstructย <cit.> reformats a series of visual classification tasks into an instruction-tuning format, while InstructBLIPย <cit.> adapts 28 existing image-to-text tasks.
However, these datasets do not provide an ideal multi-modal instruction tuning dataset due to their limited (1) coverage of various task types in multi-modal fields, (2) diversity and quality of instances, and (3) inclusion of multiple languages for wide linguistic diversity. In this paper, we construct an improved multi-modal instruction tuning dataset by expanding task coverage to 40 datasets, supplementing instances with 10 manually written task instructions, and including tasks in different languages.
Tableย <ref> compares the characteristics of existing multi-modal instruction tuning datasets and M^3IT.
ยง M^3IT: A MULTI-MODAL MULTILINGUAL INSTRUCTION TUNING DATASET
In this section, we introduce our proposed M^3IT dataset by first elaborating on the dataset coverage (<ref>), followed by the details of the annotation process (<ref>).
Finally, we present the dataset format and provide the statistics of the crafted dataset instructions (<ref>).
ยง.ยง Task Coverage
Our dataset compiles diverse tasks of classical vision-language tasks, including captioning, visual question answeringย (VQA), visual conditioned generation, reasoning and classification.
Captioning This task aims to produce descriptions of the given images according to different needs. We include
MS COCOย <cit.> (the Karpathy split) for generic image descriptions.
TextCapsย <cit.> requires models to capture the text presented in the image and generate captions accordingly.
Image-Paragraph-Captioningย <cit.> focuses on generating detailed descriptions for images.
Reasoning This task evaluates specific reasoning capabilities. We incorporate
CLEVRย <cit.> and
NLVRย <cit.> for spatial reasoning,
Visual Commonsense Reasoning (VCR)ย <cit.> for commonsense reasoning,
Visual MRCย <cit.> for reading comprehensive over images,
and Winogroundย <cit.> for fine-grained semantics reasoning over text descriptions and image contents.
Visual Question Answering (VQA)
This is the most widely studied multi-modal task, which requires the model to answer a given question based on the image correctly. Tasks include
VQA v2ย <cit.>,
Shapes VQAย <cit.>,
DocVQAย <cit.>,
OCR-VQAย <cit.>,
ST-VQAย <cit.>,
Text-VQAย <cit.>,
and GQAย <cit.>.
Knowledgeable Visual Question Answering
Unlike traditional VQA tasks, which focus on questions about the image content, knowledgeable visual question answering (KVQA) requires the model to draw upon outside knowledge to answer questions. We incorporate two outside-knowledge VQA datasets:
OK-VQAย <cit.> and A-OK-VQAย <cit.>,
ScienceQAย <cit.> which
contains multi-modal science questions,
and ViQuAEย <cit.> focusing on knowledge facts of named entities in images.
Classification
This task involves classifying an image based on a given set of candidate labels.
ImageNetย <cit.>,
Grounded Object Identification (COCO-GOI)ย <cit.>,
COCO-Textย <cit.>,
Image Text Matching (COCO-ITM)ย <cit.>,
e-SNLI-VEย <cit.>,
Multi-modal Fact Checking (Mocheg)ย <cit.>,
and IQAย <cit.> are included.
Due to language model input length constraints, we reduce the number of options in some datasets with extensive candidate labels, such as ImageNet.
Generation
Visual conditioned generation requires models to understand the visual content and produce a composition that meets the task demand. We have
Visual Storytelling (VIST)ย <cit.>,
Visual Dialog (VisDial)ย <cit.>,
and multi-modal machine translation
Multi30kย <cit.> in this category.
Chinese and multilingual Vision-Language Tasks
To examine the effect of instruction tuning on different languages, we incorporate several Chinese vision-language tasks including FM-IQAย <cit.> for VQA, COCO-CNย <cit.> and Flickr8k-CN ย <cit.> for captioning, Chinese Food Netย <cit.> for classification, and MMChatย <cit.> for generation.
Video-Language Tasks
Beyond the static images, we are interested in whether instruction tuning can also be applied to video-text tasks. We include the classic MSR-VTT datasetsย <cit.> for video captioning, MSRVTT-QAย <cit.>, ActivityNet-QAย <cit.>, iVQAย <cit.> and MSVD-QAย <cit.> for video question answering, Something-Somethingย <cit.> for video action classification.
As shown in Figure <ref>, our dataset provides wide coverage of existing vision-language and video-language benchmarks, enabling different skill sets for the language models, from simple image captioning to complicated reasoning based on the image and even beyond the visual content.
ยง.ยง Annotation Process
To build high-quality multi-modal instruction datasets, we rewrite various datasets into a vision-to-text format. The annotation process includes four steps: (1) writing instructions for each task, (2) structuring images and texts into a unified schema, (3) checking the overall dataset quality, and (4) building multilingual sets.
Eight authors of this work are employed as human annotators, each of whom is a graduate student familiar with relevant literature.
Stage I: Instruction Writing
To build high-quality instructions, we first ask annotators to carefully read the dataset paper and check the original dataset with some instances to get a clear understanding of the task. After that, they are required to write 10 diverse task instructions manually, covering the key characteristics of the task.
Table <ref> shows the statistics of the written instructions for each task. In total, we annotate 400 instructions for all tasks. The average length per instruction is 24.4. To evaluate the diversity of annotated instructions, we employ the average edit distance to measure the similarity between two strings. The average edit distance within the same task is 76.6, indicating a good range of instruction diversity.
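For reference, this diversity statistic can be reproduced as the mean pairwise Levenshtein distance over a task's instructions, e.g., with the sketch below; the exact distance implementation used for the reported numbers is not specified in the paper.

```python
# Mean pairwise edit (Levenshtein) distance among a task's instructions.
from itertools import combinations

def edit_distance(a, b):
    # Classic single-row dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def avg_pairwise_edit_distance(instructions):
    pairs = list(combinations(instructions, 2))
    return sum(edit_distance(a, b) for a, b in pairs) / len(pairs)
```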
Stage II: Data Format Unification
After the instruction has been written according to the task characteristics, we further process the images and corresponding text for a unified instance schema.
For most datasets, we keep the original images and text, where images are converted into corresponding base64 encoded strings for easy data loading.
We perform two modifications on potential examples: (1)
Adding Bounding Box to Images.
For tasks designed for specific regions in the image, a straightforward solution is to provide the bounding box information in natural language to inform the language models of the regions of interest.
However, the image preprocessing techniques adopted by different vision encoders may resize the original image, and the original bounding box annotation thus needs further adjustments.
Inspired by the recent observation that common vision encoders such as CLIP <cit.> are sensitive to visual prompts <cit.>, we directly draw the bounding box as a red rectangle on the image, serving as a hint for VLMs to focus on the target region (see the sketch at the end of this subsection).
(2)
Short Answer Paraphrasing.
As recent studies have shown that the original short and brief answers in common VQA datasets could negatively influence the model's generation performance <cit.>,
we propose to utilize the ChatGPT <cit.> model to paraphrase the original answers, providing the original question and answer together with potential extra contextual information. Contextual information includes the caption of the original image and OCR tokens for scene-related questions.
The prompt used for answer paraphrasing can be found in the Appendix. Figure <ref> illustrates the data modifications we performed on our dataset.
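The two image-side modifications can be sketched with PIL as follows; the file name, the example bounding box, and the JPEG re-encoding choice are illustrative assumptions rather than the exact pre-processing code used for the released dataset.

```python
# Sketch: tag a region of interest as a red rectangle and serialize the image to base64.
import base64, io
from PIL import Image, ImageDraw

def tag_bounding_box(image_path, box, color="red", width=3):
    """box = (x_min, y_min, x_max, y_max) in pixel coordinates of the original image."""
    img = Image.open(image_path).convert("RGB")
    ImageDraw.Draw(img).rectangle(box, outline=color, width=width)
    return img

def to_base64(img):
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    return base64.b64encode(buf.getvalue()).decode("utf-8")

img = tag_bounding_box("example.jpg", (34, 50, 180, 220))   # hypothetical file and box
encoded = to_base64(img)   # string stored in the image field of an instance
```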
Stage III: Quality Check
In this stage, we assign a different annotator to each task to review 10 examples from each split. During this stage, we identify minor format inconsistencies between tasks and address them by standardizing the task formats. We also observe that a few answers (less than 3% of examined instances) were not effectively paraphrased by ChatGPT
due to insufficient image information. We employ simple heuristics to filter these paraphrased answers and use a basic template to convert the original answer into a sentence. We find that this small portion of unsuccessful paraphrased answers has negligible impact. Finally, the task dataset is deemed complete once the annotator can successfully load it and re-examine the accuracy of the instructions, inputs, and outputs for each instance examined.
Stage IV: Key Datasets Translation
To boost the language diversity and support evaluation across different languages, we select a subset of datasets (OK-VQA, ImageNet, Winoground, VQAv2, VIST, MSRVTT and MSRVTT-QA) that covers different tasks and translate their evaluation data into 100 languages following FLORES-101 <cit.>.
We translate 500 samples for each split of each task in our first version.
More multilingual samples will be supported in the future.
We adopt the distillation version NLLB-1.3Bย <cit.> for translation, one of the state-of-the-art open multilingual translation models.
As there are no native speakers for different languages, we adopt an automatic filtering mechanism to ensure the translation quality, where languages with translation BLEU scores from English larger than 20 based on FLORES-101 results are kept.
After this step, only 80 languages are kept (see Appendix for detailed language names).
ยง.ยง Dataset Format
The instance in our dataset consists of five fields:
(1) Images: we represent the images with the potentially added bounding box by a base64 string.
(2) Instruction: we randomly select an instruction from the task instruction pool for each instance.
(3) Inputs: we allocate this field for providing task-specific inputs to the model, e.g., the question in the VQA tasks.
For tasks such as captioning, there is no extra input so the corresponding field is left as an empty string.
(4) Outputs: the required output to the specific tasks, such as the description of the image for captioning tasks and the answer to the image-related question.
(5) Meta Data: we provide this field to preserve important information such as image id for referencing the original dataset.
Figure <ref> illustrates an instance in the unified format.
With the clear distinction of these fields, the user of our benchmark can flexibly construct the training instances needed and evaluate the models conveniently.
Table <ref> gives the statistics aggregated by task, and we refer readers to the Appendix for detailed statistics and the license of each dataset.
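To make the schema concrete, a hypothetical instance could look as follows; the field names and values here are illustrative, and the released dataset should be consulted for the exact format.

```python
# Hypothetical instance in the unified five-field schema (values abbreviated).
instance = {
    "image": "<base64-encoded image string>",
    "instruction": "Answer the question based on the image.",
    "inputs": "What sport is the man playing?",          # empty string for captioning tasks
    "outputs": "The man is playing tennis on an outdoor court.",
    "meta_data": {"image_id": "000000262148", "dataset": "VQA v2"},
}
```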
ยง EXPERIMENTS
In this section, we build a VLM to validate the effectiveness of the proposed M^3IT dataset for multi-modal agents. We first introduce the experimental setups (<ref>), then report and discuss the results (<ref>). Lastly, we analyze the influence of task number and instruction diversity, and provide qualitative results (<ref>).
ยง.ยง Experimental Settings
Implementation Details Inspired by the recent success of BLIP <cit.>, we adopt the vision encoder and the Q-former architecture in the BLIP2-OPT-2.7B <cit.> model to extract relevant visual features from images. For the large language model, we utilize Ziya-13B <cit.>, derived from LLaMA <cit.>, with bilingual (English and Chinese) ability.
We employ two-stage training.
Stage I Visual-Text Alignment: To align the visual and textual feature spaces, we utilize the instructions of the COCO captioning task and perform an initial alignment training on LAION-400M <cit.>.
We train the Q-former and the language projection, resulting in a total of 130M parameters, which are optimized with AdamW <cit.>.
The batch size is set to 256 to maximize GPU utilization, and the model is trained for 300k steps. The learning rate increases linearly to a peak value of 5e-5 in the first 2000 steps and then follows a cosine decay schedule. The weight decay is set to 0.05.
Stage II Multi-modal Instruction Tuning:
We further perform multi-modal instruction tuning on our benchmark to activate the potential of LLMs.
After the alignment training, we train the model for 3 epochs with a lower learning rate of 1e-5 and a warmup stage of 1000 steps.
Inspired by LoRA tuning <cit.>, the weights for mapping query and value vectors in the attention layers of the LLM are learnable in this stage to better adapt to the instruction tuning dataset.
Other training parameters are consistent with Stage I. All experiments are conducted with 8 NVIDIA 80GB A100 GPUs. Stage I took about 10 days, and Stage II can be finished within a day.
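For illustration, the Stage I optimization schedule described above (AdamW, linear warmup to a peak learning rate of 5e-5 over 2000 steps, cosine decay over 300k steps, weight decay 0.05) could be set up in PyTorch roughly as follows; this is a sketch under those stated settings, not the authors' released training code.

```python
import math
import torch

def build_optimizer_and_scheduler(params, peak_lr=5e-5, warmup=2000, total_steps=300_000):
    opt = torch.optim.AdamW(params, lr=peak_lr, weight_decay=0.05)

    def lr_lambda(step):
        if step < warmup:                                    # linear warmup
            return step / max(1, warmup)
        progress = (step - warmup) / max(1, total_steps - warmup)
        return 0.5 * (1.0 + math.cos(math.pi * progress))    # cosine decay to zero

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```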
Evaluation Setup
To examine the generalization of instruction tuning, some tasks are held-out for evaluation (see Figureย <ref> for held-in/out tasks).
We are interested in the following research questions:
(RQ1) Can multi-modal instruction tuning elicit world knowledge from LLMs?
(RQ2) Can English-only instruction tuning generalize to other languages such as Chinese?
and (RQ3) Can image-only multi-modal instruction tuning generalize to video-language tasks?
For RQ1, we evaluate our models on three KVQA tasks in our datasets, i.e., OK-VQAย <cit.>, A-OKVQAย <cit.> and ViQuAE.
For RQ2 and RQ3, we perform zero-shot transfer evaluation on Chinese vision-language and video-language datasets, respectively.
We use greedy decoding in inference if not otherwise specified.
Metrics
We adopt ROUGE-L <cit.> as an automatic metric to assess the consistency between predictions and ground-truth answers, focusing on evaluating the model's conversational abilities.
As the automatic metric may not fully capture the nuances of conversational quality, we further introduce GPT-4 as a proxy for human evaluators (<ref>).
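As a reference for how this metric can be computed, the sketch below uses Google's `rouge-score` package; the authors' exact evaluation script may differ, e.g., in tokenization or stemming choices.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def avg_rouge_l(predictions, references):
    """Average ROUGE-L F-measure (in percent) over prediction/reference pairs."""
    scores = [scorer.score(ref, pred)["rougeL"].fmeasure
              for pred, ref in zip(predictions, references)]
    return 100.0 * sum(scores) / len(scores)
```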
Baselines
We compare our model to recently proposed powerful multi-modal agents, including (1) BLIP-2-Flan-T5-XXL <cit.>, where an instruction-tuned Flan-T5 <cit.> is connected with a powerful vision encoder to perform a series of multi-modal tasks; (2) MiniGPT-4, which aligns a CLIP visual encoder with a frozen Vicuna <cit.> using an artificially collected dialog dataset; and (3) InstructBLIP, a recently proposed instruction-tuning-enhanced multi-modal agent based on Vicuna-13B, trained with converted multi-modal datasets and the LLaVA <cit.> dataset generated by GPT-4.
ยง.ยง Main Results
RQ1: Knowledgeable Visual Question Answer Evaluation
The results on the KVQA benchmarks are shown in Tableย <ref>.
In comparison to the strongest baseline, our model achieves an improvement of 3.2 and 2.7 ROUGE-L points for OK-VQA and A-OKVQA, respectively.
Additionally, Ying-VLM delivers the best performance on the held-out ViQuAE dataset. These findings indicate that instruction tuning on M^3IT effectively harnesses knowledge from LLMs and elevates response quality.
RQ2: Zero-Shot Transfer to Chinese Vision-Language Tasks
We assess models on three unseen Chinese vision-language tasks to investigate the cross-language generalization capabilities of instruction tuning.
BLIP-2 is not considered, as Flan-T5 does not support Chinese.[For all models, we introduce a prompt to promote Chinese outputs. See Appendix D for details.]
As illustrated in Tableย <ref>, our model outperforms MiniGPT4 and InstructBLIP on all evaluated tasks, demonstrating notable improvements. These findings indicate that instruction tuning with English datasets can effectively generalize to different languages, showcasing the promising potential that can be further explored.
RQ3: Zero-Shot Transfer to Video-Language Tasks To evaluate performance on video-language tasks, we uniformly sample 8 frames from each video. A comparison with MiniGPT4 is excluded, as it does not support video inputs. Following the approach of InstructBLIP <cit.>, we concatenate the visual embeddings extracted by the Q-former from each frame as a prefix embedding to the language model. As demonstrated in Table <ref>, our model excels in these challenging settings, significantly surpassing the BLIP-series baselines. It is worth noting that the training dataset does not include any video inputs, implying that our instruction tuning effectively helps the model generalize to video inputs with a temporal dimension.
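The frame selection itself is straightforward; a minimal sketch of uniformly sampling 8 frame indices is shown below (video decoding and any resizing steps are omitted).

```python
import numpy as np

def sample_frame_indices(num_frames_in_video, num_samples=8):
    """Evenly spaced frame indices covering the whole clip."""
    return np.linspace(0, num_frames_in_video - 1, num_samples).round().astype(int)

# e.g., a 120-frame clip -> indices [0, 17, 34, 51, 68, 85, 102, 119]
```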
GPT-4 Evaluation Results
To further validate the quality of the generated response, we propose to utilize the powerful GPT-4 model as a proxy of human evaluatorsย <cit.>.
Specifically, following Vicunaย <cit.>, we use GPT-4 to rate the performance of different models against our Ying-VLM.
Considering the API cost of GPT-4, 300 examples are randomly sampled from OK-VQA, A-OKVQA and ViQuAE datasets as a subset for evaluation.
For each sample, we construct a prompt consisting of the original question, its corresponding reference answer, the response generated by our Ying-VLM, and a baseline system output.
GPT-4 is queried with the prompt to rate both responses on a scale of ten based on the given question and its reference answer.
The ratings are primarily based on the accuracy, relevance, and naturalness of the response to meet the requirements when humans are interacting with multi-modal agents (see Appendix for the detailed evaluation template).
We employ the strategy proposed by <cit.> to mitigate potential evaluation biases regarding the response order.[<https://github.com/i-Eval/FairEval>]
Figure <ref> shows that our Ying-VLM outperforms all baseline models on most samples.
Notably, Ying-VLM beats the strongest baseline, MiniGPT4, on 167 of the 300 tested samples.
Consistent with the previous ROUGE-L evaluation, this result indicates that the model fine-tuned on our instruction dataset can produce more accurate and engaging responses on the challenging KVQA tasks.
ยง.ยง Analysis
We investigate the effect of task number and instruction diversity on the performance of learned models, providing insights for future studies to utilize our benchmark better.
Effect of Task Number
We investigate the influence of task numbers by randomly shuffling our tasks and then selecting a subset to train the model during the instruction tuning stage.
Due to computational resource limitations, we cap each task at 5k examples and train all models for 5k steps with a batch size of 64.
We select 0, 4, 8, 16, and all 27 tasks for training, and report the individual ROUGE-L scores and the average score.
As illustrated in Figure <ref>, increasing the number of tasks greatly improves generalization performance.
Moreover, the performance gain does not diminish as the task number increases. This is promising, as it indicates that we can continually improve performance by introducing more tasks into training.
It would be interesting to investigate the influence of different task clusters, which we leave for future studies.
Effect of Instruction Diversity
To investigate the influence of instruction diversity, we limit the number of instructions used in each dataset to 1, 2, 4, and 8, resulting in varying levels of diversity for each task. The other training parameters are consistent with those used in previous experiments on task number investigation. Figureย <ref> shows that the performance varies with the level of diversity. Specifically, our results suggest that using four instructions per task is sufficient for achieving decent performance.
We leave a more in-depth analysis of the instruction diversity for future work.
Qualitative Results
We conduct a case study to provide a more straightforward understanding of instruction-tuned models. The cases are chosen from the held-out ViQuAE and ChineseFoodNet datasets.
As shown in Figureย <ref>, our model generates accurate responses to all questions. In contrast, MiniGPT4 produces an incorrect answer for the stadium question on the left and fails to follow instructions in the subsequent cases, providing generic image descriptions instead. Additionally, compared to InstructBLIP, which provides concise but less engaging answers for the two questions requiring external knowledge, our model responds more naturally and engagingly, underlining the value of our dataset.
Our model also successfully generalizes to Chinese inputs, accurately classifying the food image based on the instruction. These cases emphasize the importance of instruction tuning and demonstrate that our dataset can effectively enhance the capabilities of VLMs.
ยง CONCLUSION
In this paper, we present M^3IT, a multi-modal multilingual instruction tuning dataset for aiding the development of multi-modal large language models.
The dataset comprises 2.4 million carefully curated instances and 400 manually written task instructions across 40 tasks.
We build Ying-VLM to validate the effectiveness of our dataset.
Quantitative and qualitative results demonstrate that the models trained with our datasets successfully follow human instructions, provide more engaging responses, and achieve strong generalization performance on unseen video and Chinese tasks.
Further analysis shows that the increased task number can continually boost performance, and instruction diversity can influence results.
We hope our proposed benchmark, trained models, and experimental findings can facilitate future studies toward building powerful multi-modal intelligent agents.
ยง DATASET STATISTICS
Table <ref> lists the detailed statistics of our benchmark.
We collect the dataset licenses from Papers with Code.[<https://paperswithcode.com/>]
For datasets under Unknown and Custom licenses, we suggest that users check the project page or contact the dataset owner before usage.
ยง TEMPLATE FOR ANSWER PARAPHRASE
We provide in Table <ref> the paraphrase template used to query ChatGPT to rewrite the original short answers, where {Q} and {A} are filled with the question and the answer to be paraphrased, respectively.
We incorporate an example to better inform the model of the paraphrasing task.
For the VQAv2 task, we add an extra {Caption} field to the template, filled with the corresponding caption from the COCO dataset, to provide extra contextual information that aids paraphrasing.
ยง DATASET TRANSLATION
We translate all the task instructions and the evaluation sets of ImageNet, Winoground, VQAv2, OK-VQA, VIST, MSRVTT and MSRVTT-QA into 80 languages, as shown in Table <ref>.
Due to computational resource constraints, we translate the whole test set of Winoground (800 examples) and set a maximum of 500 instances for each split of the other tasks.
ยง PROMPT FOR ZERO-SHOT CHINESE VISION-LANGUAGE TASKS
In our experiments, all Vision-Language models are fine-tuned exclusively using English data. In our preliminary study, we observe that these models tend to generate English responses, even when the input and instructions are written in Chinese.
We therefore introduce a simple Chinese dialogue context for all models during the zero-shot Chinese vision-language task evaluation, as illustrated in Table <ref>.
Interestingly, this minor adjustment encourages the models to produce reasonable Chinese output. We leave the analysis of the multilingual capabilities of instruction-tuned VLMs for future research.
ยง TEMPLATE FOR GPT-4 EVALUATION
We adopt the template in Table <ref> to query GPT-4, using FairEval [<https://github.com/i-Eval/FairEval>] to obtain more stable evaluation results.
Specifically, each tested instance is a quadruple consisting of the question, the reference answer, and the two responses from our Ying-VLM and the baseline model.
For each instance, we query GPT-4 to judge which response is of better quality regarding accuracy, relevance, and naturalness.
We populate the quadruple into the evaluation template to form two query prompts, one for each ordering of the two responses.
We set the temperature of GPT-4 to 1 and sample three completions for each query prompt.
Therefore, each response will receive 6 scores, and we use the average score as the final score for each response.
The response with the higher final score is considered the better response.
The GPT-4 evaluation incurred a cost of $20.45 for InstructBlip and $20.90 for MiniGPT-4.
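As an illustration of this aggregation scheme, the sketch below shows how the six scores per response could be collected and compared; `query_gpt4` is a placeholder for the actual API call and is not part of the released evaluation code.

```python
# Sketch: rate a response pair in both orders, three completions per order (6 scores each),
# and pick the response with the higher average score.
def judge_pair(question, reference, resp_a, resp_b, query_gpt4, n_samples=3):
    scores_a, scores_b = [], []
    for first, second, a_first in [(resp_a, resp_b, True), (resp_b, resp_a, False)]:
        for _ in range(n_samples):
            s_first, s_second = query_gpt4(question, reference, first, second)
            scores_a.append(s_first if a_first else s_second)
            scores_b.append(s_second if a_first else s_first)
    avg_a = sum(scores_a) / len(scores_a)
    avg_b = sum(scores_b) / len(scores_b)
    return "A" if avg_a > avg_b else ("B" if avg_b > avg_a else "tie")
```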
|
http://arxiv.org/abs/2306.06658v1
|
20230611121007
|
Direct Link Interference Suppression for Bistatic Backscatter Communication in Distributed MIMO
|
[
"Ahmet Kaplan",
"Joao Vieira",
"Erik G. Larsson"
] |
eess.SP
|
[
"eess.SP"
] |
Direct Link Interference Suppression for Bistatic Backscatter Communication in Distributed MIMO
Ahmet Kaplan and Erik G. Larsson were supported by the REINDEER project of the European Union's Horizon 2020 research and innovation program under grant agreement No. 101013425, and in part by ELLIIT and the Knut and Alice Wallenberg (KAW) Foundation. Preliminary results of this article were presented at the 2022 IEEE Globecom <cit.>.
Ahmet Kaplan and Erik G. Larsson are with the Department of
Electrical Engineering (ISY), Linkรถping University, 58183 Linkรถping, Sweden
(e-mail: [email protected]; [email protected]).
Joao Vieira is with Ericsson Research, 22362 Lund, Sweden (e-mail: [email protected]).
©2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This paper will appear in the IEEE Transactions on Wireless Communications, 2023.
Digital Object Identifier (DOI): 10.1109/TWC.2023.3285250
Ahmet Kaplan, Graduate Student Member, IEEE, Joao Vieira, Erik G. Larsson, Fellow, IEEE
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Backscatter communication (BC) is a promising technique for future Internet-of-Things (IoT) owing to its low complexity, low cost, and potential for energy-efficient operation in sensor networks.
There are several network infrastructure setups that can be used for BC with IoT nodes.
One of them is the bistatic setup where typically there is a need for high dynamic range and high-resolution analog-to-digital converters at the reader.
In this paper, we investigate a bistatic BC setup with multiple antennas.
We propose a novel transmission scheme, which includes a protocol for channel estimation at the carrier emitter (CE) as well as a transmit beamformer construction that suppresses the direct link interference between the two ends of a bistatic link (namely CE and reader), and increases the detection performance of the backscatter device (BD) symbol.
Further, we derive a generalized log-likelihood ratio test (GLRT) to detect the symbol/presence of the BD.
We also provide an iterative algorithm to estimate the unknown parameters in the GLRT.
Finally, simulation results show that the required dynamic range of the system is significantly decreased, and the detection performance of the BD symbol is increased, by the proposed algorithm compared to a system not using beamforming at the CE.
Bistatic backscatter communication, dynamic range, interference suppression, Internet of Things (IoT), multiple-input multiple-output (MIMO)
§ INTRODUCTION
There were almost 15 billion Internet-of-Things (IoT) connections in 2021, and this number is expected to reach 30 billion by 2027 <cit.>. Backscatter communication (BC) is a promising technique for massive connectivity in 6G <cit.> owing to its low cost, low complexity, and potential to enable energy-efficient solutions for battery-less IoT devices and sensors.
In a BC setup, we have the following types of equipment: a carrier emitter (CE), a reader, and a backscatter device (BD). [A variety of CE and reader devices have been considered in the literature, including but not limited to base stations, Wi-Fi access points, distributed antenna panels, user devices such as smartphones and smartwatches, and software-defined radios <cit.>.]
The main BC configurations are
* Monostatic: In a monostatic BC (MoBC) setup, the CE and reader are co-located and share parts of the same infrastructure.
For example, the CE and reader may share the same antenna elements.
The CE sends a radio frequency (RF) signal to the BD, and the BD modulates the incoming RF signal and backscatters it to the reader <cit.>.
A monostatic system suffers from round-trip path loss, and
requires full-duplex technology if the same antennas are simultaneously used for transmission and reception.
* Bistatic: In a bistatic BC (BiBC) setup, the CE and reader are spatially separated from each other, and therefore do not share RF circuitry, which is beneficial for many reasons <cit.>. For example, the CE and reader can be located to decrease the round-trip path loss.
* Ambient: In an ambient BC (AmBC) system, the CE and reader are separated in a similar way as in a bistatic system.
However, the ambient system does not have a dedicated CE but instead relies on ambient RF sources such as Bluetooth, Wi-Fi, or TV signals <cit.>.
§.§ Motivation
The focus of this paper is a BiBC setup with a multiple-antenna CE and multiple-antenna reader corresponding to a distributed MIMO setup <cit.> with one antenna panel functioning as CE and one as reader.
In BiBC, due to the double path-loss effect on the two-way backscatter link, the received backscattered signal is typically weak compared to the direct link interference (DLI) from a CE.
This requires a high dynamic range of the circuitry in the reader: this dynamic range must be proportional to the signal strength ratio between the weak backscattered signal and the received signal from the direct link <cit.>.
As a result, a high-resolution analog-to-digital converter (ADC) is required to detect the weak backscattered signal under heavy DLI; this is an important consideration as with multiple-antenna technology, ADCs are major power consumers <cit.>.
Moreover, the backscattered signal is pushed to the last bits of ADC due to the DLI which causes a low signal-to-interference-plus-noise ratio (SINR) <cit.>.
In this paper, we propose a new transmission scheme that reduces the DLI, along with a detection algorithm for use at the reader; see Sectionย <ref> for specifics.
§.§ Review of Previous and Related Work
In this subsection, we review the literature on BC with multiple-antenna technology and the literature on interference suppression for BC. Table <ref> provides an overview of this literature, and places our contribution in context.
§.§.§ Monostatic and Bistatic BC with Multiple-Antenna Technology
Here, we provide a literature summary on multiple-antenna technology for MoBC and BiBC, respectively.
Monostatic: In <cit.> and <cit.>, the authors show that the communication range of monostatic BC systems increases when using multiple antennas at the reader. The authors of <cit.> maximize the minimum throughput for
multiple battery-less single-antenna users in a MIMO monostatic system
under a perfect channel state information (PCSI) assumption.
In <cit.> and <cit.>, the authors prove that we can increase the diversity gain by increasing the number of antennas at the BD in a monostatic system under PCSI; consequently the bit-error-rate (BER) performance is improved.
It is also shown in <cit.> that the BER performance increases with an increasing number of BD antennas.
Bistatic:
In <cit.>, a channel estimation algorithm is proposed to estimate all the backscattering links in a multi-antenna CE and reader setup, and then a transceiver is designed using the estimates.
In <cit.>, a BD is added to a conventional multiple-input multiple-output (MIMO) system to convey additional information from the BD to a multi-antenna receiver applying joint decoding.
§.§.§ Interference Suppression in BC
In this subsection, we survey the literature on interference suppression in monostatic, ambient, and bistatic BC systems, respectively.
Monostatic:
In <cit.>, the authors propose a front-end architecture for the reader to cancel the self-interference (SI) in a full-duplex monostatic system.
In <cit.>, the authors cancel the SI using a directional coupler and an adjustable reflective modulator.
The authors of <cit.> propose a method to cancel the SI using the regenerated transmitted signal at the reader.
However, interference cancellation methods for monostatic BC are usually complex and have high power consumption. They cannot directly be implemented in a BiBC system: the structure of the problem is different, and has new elements, such as a carrier frequency offset between the CE and reader <cit.>.
Ambient:
Receive beamforming techniques to cancel the DLI are proposed for AmBC with a single-antenna transmitter setup in <cit.> and <cit.>. However, in <cit.>, the authors only provide the solution for a parametric channel model, and in <cit.>, DLI is canceled after the ADCs in the reader, which requires the use of high-resolution ADCs.
In <cit.>, a multi-antenna receiver operating without digital computation is proposed to decode the backscattered signal by separating the DLI from the received signal. In <cit.> and <cit.>, the authors avoid the DLI by shifting the carrier frequency of the incident signal in a BD in AmBC. However, the use of different frequency bands can reduce the spectral efficiency, and increase the power consumption and complexity at the BD. In <cit.> and <cit.>, the authors propose methods to avoid DLI in OFDM AmBC systems using null subcarriers <cit.> and cyclic prefixes <cit.>. However, in <cit.>, the interference is canceled after the ADC which does not solve the dynamic range problem.
Bistatic:
The authors of <cit.> investigate the coverage region for IoT communication, and the effect of the DLI on the dynamic range in the BiBC and AmBC systems. They show that the high dynamic range limits the system performance.
In <cit.> and <cit.>, the carrier frequency of the reflected signal is changed at the BD to solve the DLI problem in a single-input single-output (SISO) BiBC system. In <cit.>, the authors apply Miller coding at the BD and exploit the periodicity of the carrier signal to mitigate the DLI in a SISO BiBC system. However, the proposed method cancels the interference after the ADC which does not address the high-resolution ADC/high dynamic range problem.
§.§ Contributions and Organization
In this paper, we address the dynamic range problem and interference suppression for BiBC with a multi-antenna CE and reader.
To the best of our knowledge, no previous work deals with this setup.
Our specific contributions are:
* To address the high-resolution ADC/high dynamic range problem in a BiBC system, we propose a transmission scheme that suppresses the DLI by steering the transmission from the CE using beamforming.
* We analyze the effect of the proposed transmission scheme on the dynamic range under channel estimation errors.
* We derive an algorithm based on a generalized log-likelihood ratio test (GLRT) to detect the BD symbol/presence in BiBC. We propose an iterative algorithm to estimate the unknown parameters in the GLRT.
* Using the derived GLRT detector, we analyze the performance of BD symbol detection at the reader in a BiBC setup with multiple antennas.
Parts of the results in this paper were presented in <cit.>. The main difference between this paper and <cit.> is that herein all channels of a bistatic link are treated as unknown, whereas <cit.> assumes that the reader has perfect channel state information. As a result, novel algorithms for channel estimation are proposed in this paper which are necessary to detect the BD symbol/presence based on the GLRT. Note that the detection of the BD symbol and the presence of BD are the same problem. This is because we detect a pre-determined sequence of symbols from the BD in order to determine its presence as explained in detail in Section <ref>.
The remaining part of this paper is organized as follows. In Section <ref>, we introduce the system model and our proposed transmission scheme.
In Section <ref>, we give the details of our interference suppression algorithm.
The GLRT detector and the estimation of unknown parameters are discussed in Section <ref>. We present our simulation results in Section <ref>. Finally, Section <ref> concludes the paper.
Notation: In this paper, (·)^T, (·)^*, and (·)^H denote transpose, conjugate, and Hermitian transpose, respectively. Re{·} and Tr{·} denote the real part of a signal and the trace of a matrix, respectively.
We use ||·|| for the Frobenius norm.
The cardinality of a set is denoted by |·|.
E{·} stands for the expected value.
C(·) denotes the column space of a matrix.
Italic, boldface capital, and boldface lowercase letters are used for scalars, matrices, and column vectors, respectively.
§ PROPOSED TRANSMISSION SCHEME
In this section, we present a model of our bistatic communication system.
We also describe our proposed transmission scheme, consisting of two phases: the channel estimation phase and the BD symbol detection phase.
§.§ System Model
Fig. <ref> gives an overview of the system.
Panel A (PanA) with M antennas is the carrier emitter, Panel B (PanB) with N antennas is the reader, and backscatter device C has a single antenna. The BD can change its antenna reflection coefficient by varying the impedance of the load connected to the antenna in order to modulate the backscattered signal.
Additionally, PanA and PanB can be a part of a larger distributed MIMO setup with several panels <cit.>, and perform regular uplink/downlink communication, positioning, and sensing in addition to BC. Note, however, that if the distributed MIMO system operates in time-division duplexing (TDD) mode (the default assumption, for example, in <cit.>), then when communicating with the BD, one of the panels must switch to reception mode while the other panel is transmitting. In this respect, the interaction with the BD breaks the TDD flow.
Our aim is to decrease the required dynamic range of the reader, and detect the symbol/presence of the BD.
In Phase 1 (P1), we estimate the channel between PanA and PanB.
In Phase 2 (P2), we construct a beamformer using a projection matrix that is designed based on the estimated channel. The proposed beamformer decreases the dynamic range and increases the detection performance by suppressing the interference due to the direct link PanA→PanB. [Note that our proposed interference suppression algorithm for BiBC uses beamforming in the CE, relying on explicit channel state information between the CE and the reader. This is feasible for the assumed BiBC setup, but it would not be possible for AmBC as there is no dedicated CE in AmBC.]
In Fig. <ref>, g_AC^T, g_CA, g_CB, g_BC^T, G_AB, and G_BA stand for the channels from PanA to BD, BD to PanA, BD to PanB, PanB to BD, PanA to PanB, and PanB to PanA, respectively.
Note that the channels are the effective baseband channels between the units, and not the wireless propagation channels. This is because, in its most general form, a channel accounts for 1) wireless propagation effects, 2) (non-reciprocal) transceiver effects, and 3) calibration effects.
In this paper, we have assumed that PanA and PanB are jointly reciprocity-calibrated such that
G_AB = G_BA^T.
Here, the dimensions of g_AC, g_CA, g_CB, g_BC, and G_AB are M×1, M×1, N×1, N×1, and N×M, respectively. It is assumed that all channels are time-invariant during
P1 and P2.
§.§ Transmission Scheme
In Fig. <ref>, the proposed transmission scheme of our bistatic communication setup is illustrated. There are two phases, as explained below.
§.§.§ Phase 1: Channel Estimation at PanA
The first phase comprises J_p slots (τ_p J_p symbols).
In each slot, PanB sends N orthogonal pilot signals, one per antenna, each of which has τ_p symbols, in order to facilitate estimation of G_BA at PanA.
The orthogonal pilot signals sent in a slot can be written in matrix form as Φ ∈ ℂ^{N×τ_p} and satisfy
ΦΦ^H = α_p I_N,
where α_p = p_t τ_p/N, τ_p ≥ N, and p_t stands for the transmit power. The total transmitted energy during P1 is
E_p ≜ J_p ||Φ||^2 = J_p p_t τ_p.
In Fig. <ref>, γ_j ∈ {0,1} denotes the reflection coefficient.
In P1, we select γ_j = 0 for j ∈ 𝒮_p = {1,2,…,J_p}, i.e., the BD is silent in each slot. When the BD is silent, its reflection coefficient is a part of G_BA like other scattering objects in the environment. (It is also possible to design a BD that absorbs the incoming signal for energy harvesting during γ_j = 0 <cit.>; that, however, would not make a difference to our proposed algorithms.) When γ_j = 1, the relative difference in the channel as compared to when γ_j = 0 from PanB to PanA is g_CA g_BC^T. Alternating between
two different values of γ_j corresponds to on-off keying modulation with two states, which is commonly used also in much other literature, for example <cit.>.
§.§.§ Phase 2: Detection at PanB
The second phase consists of J_d slots (τ_d J_d symbols). In each slot, PanA sends a probing signal to detect the symbol of the BD at PanB. The probing signal sent in a slot can be represented in matrix form as Ψ ∈ ℂ^{M×τ_d} and satisfies
ΨΨ^H = α_d I_M,
where α_d = p_t τ_d/M and τ_d ≥ M.
The received signal of dimension N×τ_d at PanB, in slot j, can be written as:
Y_j = G_AB P_s Ψ + γ_j g_CB g_AC^T P_s Ψ + W_j,
where j ∈ 𝒮_d = {J_p+1, J_p+2, …, J_p+J_d} and J_p + J_d = J.
Similar to in P1, when the BD is silent (γ_j = 0), its contribution to the propagation environment is considered to be included in G_AB. Therefore, g_CB g_AC^T represents the difference in the channel from PanA to PanB when γ_j = 1 as compared to when γ_j = 0. Due to this fact and because of reciprocity of propagation, G_AB = G_BA^T.
We assume that all channels are time-invariant during the J slot durations, i.e., the coherence time of all the channels exceeds J_p τ_p + J_d τ_d symbols.
P_s ∈ ℂ^{M×M} is a scaled projection matrix introduced in order to minimize the DLI between PanA and PanB; the design principles for it will be explained in Section <ref>. W_j ∈ ℂ^{N×τ_d} comprises additive Gaussian noise; all elements of W_j are independent and identically distributed (i.i.d.) 𝒞𝒩(0, 1).
Depending on a pre-determined pattern associated with the BD, in some slots of P2, γ_j = 0, and in the remaining slots γ_j = 1.
𝒮_d is the set {J_p+1,…,J}, and 𝒮_d^0 and 𝒮_d^1, which are subsets of 𝒮_d, contain the indices for which γ_j = 0 and γ_j = 1 in P2, respectively. The cardinalities of 𝒮_d, 𝒮_d^0, and 𝒮_d^1 are
|𝒮_d| = J_d, |𝒮_d^0| = J_d^0 and |𝒮_d^1| = J_d^1, where J_d = J_d^0 + J_d^1.
The next sections explain our proposed choice of the projector P_s to minimize DLI,
and the detection algorithm to be used at the reader.
§ PROPOSED INTERFERENCE SUPPRESSION ALGORITHM
In this section, we first define the dynamic range in our system. Next, we present the channel estimation algorithm in P1 for the direct link between PanB and PanA. We then propose a novel algorithm based on the estimated direct link channel to mitigate the DLI at PanB in P2.
The proposed algorithm decreases the required dynamic range of the system, and increases the SINR and the detection performance. In practice, it also enables the use of low-resolution ADCs due to the decreased dynamic range.
§.§ Dynamic Range
The dynamic range of an n-bit resolution ADC is 6.02n dB, and the quantization error of the ADC decreases exponentially with increasing n <cit.>.
A high-resolution ADC, which has low quantization error, is required to detect the weak backscatter signal under strong DLI.
We also define the dynamic range of the received signal during P2 as <cit.>
ζ = E{ ( ||G_AB P_s Ψ||^2 + ||g_CB g_AC^T P_s Ψ||^2 ) / ||g_CB g_AC^T P_s Ψ||^2 },
where the matrix product G_AB P_s Ψ represents the DLI. In Eq. (<ref>), ζ, the dynamic range of the received signal, is a good indicator of the required dynamic range of the reader circuitry.
In this equation, the expectation is taken with respect to random channel estimation errors (which affect P_s); the channels here are considered fixed. Note that the received signal from the BD is added to the numerator to satisfy ζ ≥ 1 (0 dB).
When there is no projection, i.e., P_s = I_M, the dynamic range can be large: ζ ≫ 1.
When ζ ≫ 1, we need high-resolution ADCs which are not energy and cost efficient. This is because, in this operating regime, the backscattered signal is pushed to the last bits of the ADC, and the low-resolution ADCs cannot distinguish the weak backscatter signal under strong DLI due to the high quantization error.
The aim of the scaled projection matrix, P_s ∈ ℂ^{M×M}, is to project the transmitted signal onto the nullspace of the dominant directions of G_AB (or more exactly, an estimate of it); consequently, the received DLI decreases, which reduces the dynamic range requirements on the reader circuitry.
The design of the projection matrix is detailed in Section <ref>.
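For a given channel realization and a given projector, the ratio in Eq. (<ref>) can be evaluated numerically as in the sketch below (Python/NumPy; the function and variable names are ours, and the expectation over channel estimation errors would correspond to averaging this quantity over the noisy estimates used to build P_s):

import numpy as np

def dynamic_range_db(G_AB, g_CB, g_AC, P_s, Psi):
    # Direct-link term G_AB P_s Psi and backscatter term g_CB g_AC^T P_s Psi
    direct = G_AB @ P_s @ Psi
    backsc = np.outer(g_CB, g_AC) @ P_s @ Psi
    num = np.linalg.norm(direct, 'fro')**2 + np.linalg.norm(backsc, 'fro')**2
    den = np.linalg.norm(backsc, 'fro')**2
    return 10 * np.log10(num / den)   # zeta in dB (>= 0 dB by construction)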
§.§ Channel Estimation at PanA
In this subsection, we present the algorithm to estimate the channel from PanB to PanA, G_BA.
In the channel estimation phase, PanB sends the same pilot signal ฮฆ in each slot, that is, J_p times.
At PanA, the received pilot signal in slot j is given by
Y_j^p=G_B Aฮฆ + ฮณ_j g_CAg_BC^T ฮฆ + W_j^p,
where j=1,2,โฆ,J_p and W_j^p โโ^M รฯ_p comprises additive noise and all elements of W_j^p are i.i.d. ๐๐ฉ(0,1). We select the reflection coefficients
ฮณ_j=0 for j=1,2,โฆ,J_p, i.e., the BD is silent. As a result, Eq. (<ref>) simplifies to
Y_j^p=G_B Aฮฆ + W_j^p.
The channel G_B A is estimated by least-squares (LS) as follows:
ฤ_B A=1/J_pโ_j=1^J_p๐_j^p ฮฆ^H (ฮฆฮฆ^H)^-1.
Due to the reciprocity, the channel G_AB is simply G_AB=G_BA^T;
the same holds for their estimates: ฤ_A B=ฤ_B A^T.
The singular value decomposition (SVD) of ฤ_A B can be written as
ฤ_A B=UฮV^H,
where Uโโ^N ร K_0 and Vโโ^M ร K_0 are semi-unitary matrices, and K_0โคmin{M,N} is the rank of ฤ_AB. ฮ is a K_0ร K_0 diagonal matrix with positive diagonal elements ordered in decreasing order.
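As an illustration, the LS estimator and the subsequent SVD can be sketched as follows (Python/NumPy; Y_p_list collects the J_p received pilot blocks, and the helper names are ours):

import numpy as np

def estimate_G_BA(Y_p_list, Phi):
    # LS estimate: average over slots, then right-multiply by Phi^H (Phi Phi^H)^{-1}
    J_p = len(Y_p_list)
    gram_inv = np.linalg.inv(Phi @ Phi.conj().T)        # = I_N / alpha_p for orthogonal pilots
    return sum(Y @ Phi.conj().T @ gram_inv for Y in Y_p_list) / J_p

def dominant_right_singular_vectors(G_AB_hat, K):
    # SVD of the channel estimate; return the K dominant right singular vectors (V_K)
    U, s, Vh = np.linalg.svd(G_AB_hat, full_matrices=False)
    return Vh.conj().T[:, :K]                           # M x K

By reciprocity, the estimate Ĝ_AB used for the projector design is simply the transpose of the returned Ĝ_BA.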
§.§ Interference Suppression Algorithm
This subsection explains our proposed design of the projector, P_s, whose application at PanA will reduce the DLI and consequently improve the detection performance at PanB.[Since the location of the BD is unknown, we focus on the suppression of DLI, and we do not attempt to beamform power towards the BD in order to increase the backscattered power.]
In P2, PanA transmits a probing signal to enable PanB to detect the symbol of the BD.
The probing signal, Ψ, satisfies ΨΨ^H = α_d I_M.
Before transmitting Ψ, PanA first designs a projection matrix based on the channel estimates obtained in P1.
After that, PanA projects the probing signal onto the nullspace of Ĝ_AB in order to minimize the PanA→PanB DLI, decrease the dynamic range, and increase the detection probability of the BD symbol at PanB.
After the projection, PanA transmits the following signal
P_s Ψ = Λ P Ψ,
where
P is an orthogonal projection of dimension M×M and rank M−K, with K to be appropriately selected, and
Λ = √(M/(M−K))
is the non-zero eigenvalue of P_s = Λ P.
In Eq. (<ref>), Λ is used to keep the total radiated energy the same as without the projector, that is,
E_d ≜ J_d ||Ψ||^2 = J_d ||P_s Ψ||^2.
We select P to project onto the orthogonal complement of the space spanned by the columns of V_K:
P = I − V_K V_K^H,
where V_K contains the first K columns of V in Eq. (<ref>).
The choice of the value of K depends on several parameters, such as the number of antennas on the panels, the panel shapes (e.g., uniform linear arrays and rectangular linear arrays), the signal-to-noise ratio (SNR), and the channel model. For instance, when there are a strong line-of-sight (LoS) link and specular multipath components (SMC), setting K to 1 makes it possible to cancel the LoS link and decrease the dynamic range. As K increases, the dynamic range continues to decrease by the cancellation of SMCs, but the coverage area also decreases due to the increasing nulls in the antenna radiation pattern. Clearly, we must have K ≤ M. Selecting an appropriate value of K is critical, and it could be chosen based on a predetermined dynamic range requirement and/or the number of dominant singular values of G_AB (or more exactly, an estimate of it).
Note that the location of the BD is unknown. To avoid reducing the backscatter link power, g_CB g_AC^T should not lie in the
subspace spanned by the dominant right singular vectors of G_AB.
For example, when rank(G_AB) = 1 in line-of-sight conditions, the proposed algorithm requires that the CE, BD, and reader should not be located on a line, i.e., g_CB g_AC^T ≠ ρ G_AB
for all ρ ∈ ℂ.
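A minimal sketch of the resulting projector construction (Python/NumPy, with our own names), assuming V_K has already been extracted from the SVD of the channel estimate, is:

import numpy as np

def scaled_projector(V_K, M):
    K = V_K.shape[1]
    P = np.eye(M) - V_K @ V_K.conj().T     # projects onto the orthogonal complement of span(V_K)
    scale = np.sqrt(M / (M - K))           # keeps the total radiated energy unchanged
    return scale * P                       # P_s, applied at PanA as P_s @ Psi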
In summary, the received signals during both phases are given by
Y_j^p = G_BA Φ + W_j^p,  j ∈ {1,…,J_p}
Y_j = G_AB P_s Ψ + γ_j g_CB g_AC^T P_s Ψ + W_j,  j ∈ {J_p+1,…,J}
§ DETECTION OF THE BACKSCATTER DEVICE SYMBOL/PRESENCE
In this section, we formalize the problem of detection during P2 as a hypothesis test, and develop a GLRT approach towards computing this test.
The hypothesis test models the two scenarios, absence and presence of the BD, as ℋ_0 and ℋ_1, respectively.
Specifically, the test is:
ℋ_0: Y_j = G_AB P_s Ψ + W_j,  j ∈ 𝒮_d
     Y_j^p = G_BA Φ + W_j^p,  j ∈ 𝒮_p
ℋ_1: Y_j = G_AB P_s Ψ + γ_j g_CB g_AC^T P_s Ψ + W_j,  j ∈ 𝒮_d
     Y_j^p = G_BA Φ + W_j^p,  j ∈ 𝒮_p.
Under ℋ_1, the BD varies its antenna reflection coefficient γ_j according to a known (and pre-determined) pattern. Note that to detect the presence of the BD, we detect a pre-determined sequence of symbols from it. Therefore, technically, the problems of symbol detection and presence detection are equivalent. Furthermore, this hypothesis test can be interpreted as detecting the BD data, where the null hypothesis ℋ_0 corresponds to bit "0", and the alternative hypothesis ℋ_1 corresponds to bit "1".
We consider the standard distributed MIMO setup, with a backhaul link between the access points <cit.>, <cit.>. PanA and PanB can be any two antenna panels/access points in such a system.
For this setup, we can assume that panels can share
information, and also receive data over a backhaul network.
Therefore, we assume that Ψ, Φ, P_s, and Y_j^p are known at PanB. The only information that needs to be sent through the backhaul network on a slot-by-slot timescale is P_s and Y_j^p. We also assume that G_AB, g_CB, and g_AC are unknown by the receiver.
The GLRT, using the received signals in P1 and P2, to detect the symbol/presence of the BD is given in Eq. (<ref>).
In Eq. (<ref>), p(Y_j^p | G_BA), p(Y_j | ℋ_0, G_AB), and p(Y_j | ℋ_1, G_AB, H_BL, γ_j) denote the likelihood functions of the observations in P1, and under ℋ_0 and ℋ_1 in P2, respectively; they are given as follows:
p(Y_j^p | G_BA) = (1/π^{Mτ_p}) exp[−||Y_j^p − G_BA Φ||^2],
p(Y_j | ℋ_0, G_AB) = (1/π^{Nτ_d}) exp[−||Y_j − G_AB P_s Ψ||^2],
p(Y_j | ℋ_1, G_AB, H_BL, γ_j) = (1/π^{Nτ_d}) exp[−||Y_j − G_AB P_s Ψ − γ_j g_CB g_AC^T P_s Ψ||^2].
The detection threshold is η, and
H_BL = g_CB g_AC^T P_s
is an N×M rank-1 matrix, where g_CB g_AC^T stands for the backscatter link cascade channel.
The next subsections develop the estimate of G_AB that maximizes the denominator of the left-hand side of Eq. (<ref>), assuming ℋ_0 is true, and the estimates of G_AB and H_BL that maximize the numerator in Eq. (<ref>), assuming ℋ_1 is true.
§.§ Estimation of Unknown Parameters Under ℋ_0
Under ℋ_0, G_AB is the only unknown parameter. We estimate the G_AB that maximizes the denominator in Eq. (<ref>), using the received signals in P1 and P2, as follows:
Ĝ_AB = max_{G_AB} ∏_{j∈𝒮_p} p(Y_j^p | G_BA) ∏_{j∈𝒮_d} p(Y_j | ℋ_0, G_AB)
= min_{G_AB} ( ∑_{j∈𝒮_p} ||Y_j^p − G_BA Φ||^2 + ∑_{j∈𝒮_d} ||Y_j − G_AB P_s Ψ||^2 )
(a)= min_{G_AB} ( ∑_{j∈𝒮_p} ||Y_{1,j} − √(α_p) G_AB||^2 + ∑_{j∈𝒮_d} ||Y_{2,j} − √(α_d) G_AB P_s||^2 )
(b)= min_{G_AB} ( ||Y_P1 − √(α_p) G_AB Υ_{J_p}||^2 + ||Y_P2 − √(α_d) G_AB P_s Υ_{J_d}||^2 )
= min_{G_AB} || [Y_P1  Y_P2] − G_AB [√(α_p) Υ_{J_p}  √(α_d) P_s Υ_{J_d}] ||^2
= [Y_P1  Y_P2] [√(α_p) Υ_{J_p}^T; √(α_d) (P_s Υ_{J_d})^T] ( [√(α_p) Υ_{J_p}  √(α_d) P_s Υ_{J_d}] [√(α_p) Υ_{J_p}^T; √(α_d) (P_s Υ_{J_d})^T] )^{-1}
= ( ∑_{j∈𝒮_p} √(α_p) Y_{1,j} + ∑_{j∈𝒮_d} √(α_d) Y_{2,j} P_s ) (α_p J_p I_M + α_d J_d P_s)^{-1}
= ( ∑_{j∈𝒮_p} Φ^* (Y_j^p)^T + ∑_{j∈𝒮_d} Y_j Ψ^H P_s ) (α_p J_p I_M + α_d J_d P_s)^{-1},
where equality (a) is shown in Appendix <ref>.
In (<ref>), Y_{1,j} = Φ^*(Y_j^p)^T / √(α_p) and Y_{2,j} = Y_j Ψ^H / √(α_d).
In (b), the block matrices Y_P1 ∈ ℂ^{N×MJ_p} and Y_P2 ∈ ℂ^{N×MJ_d} are created by ordering Y_{1,j} and Y_{2,j} in increasing order based on the index j, respectively. The block matrices Υ_{J_p} ∈ ℂ^{M×MJ_p} and Υ_{J_d} ∈ ℂ^{M×MJ_d} are Υ_{J_p} = [I_M ⋯ I_M] and Υ_{J_d} = [I_M ⋯ I_M]. In Eq. (<ref>), (α_p J_p I_M + α_d J_d P_s) is invertible and positive definite.
§.§ Estimation of Unknown Parameters Under ℋ_1
Under ℋ_1, G_AB and g_CB g_AC^T are the unknown parameters, but the estimate of g_CB g_AC^T is not unique when P_s is not a full-rank matrix. Therefore, instead, we find the estimate of H_BL, which is unique.
We assume that the reflection coefficients are known under ℋ_1.
We propose a cyclic optimization algorithm to minimize the sum of the squared norms in Eqs. (<ref>) and (<ref>), and consequently to estimate G_AB and H_BL. The algorithm consists of the following steps:
* Step 1: First find an initial value by minimizing the objective function with respect to (w.r.t.) G_AB using the observations Y_j and Y_{j'}^p, where j ∈ 𝒮_d^0 and j' ∈ 𝒮_p.
* Step 2: Next, minimize the objective function w.r.t. H_BL by using the estimated value of G_AB and the observations Y_j, where j ∈ 𝒮_d^1.
As mentioned earlier, 𝒮_d^0 and 𝒮_d^1 denote the sets of reflection coefficient indices for which γ_j = 0 and γ_j = 1 in P2, respectively.
* Step 3: Estimate G_AB by using all observations and the estimated value of H_BL.
* Step 4: Iterate Steps 2–3 until convergence.
The details of these steps are given below.
§.§.§ Step 1 - Initial Estimation of G_AB
The estimate of G_AB using the observations Y_j and Y_{j'}^p, where j ∈ 𝒮_d^0 and j' ∈ 𝒮_p, is calculated (similar to Eq. (<ref>) but with 𝒮_d^0 in lieu of 𝒮_d) as
Ĝ_AB^ℋ_1 = min_{G_AB} ( ∑_{j∈𝒮_p} ||Y_j^p − G_BA Φ||^2 + ∑_{j∈𝒮_d^0} ||Y_j − G_AB P_s Ψ − γ_j H_BL Ψ||^2 )
(a)= min_{G_AB} ( ∑_{j∈𝒮_p} ||Y_j^p − G_BA Φ||^2 + ∑_{j∈𝒮_d^0} ||Y_j − G_AB P_s Ψ||^2 )
= ( ∑_{j∈𝒮_p} Φ^* (Y_j^p)^T + ∑_{j∈𝒮_d^0} Y_j Ψ^H P_s ) (α_p J_p I_M + α_d J_d^0 P_s)^{-1}.
In (a), we have γ_j = 0 for j ∈ 𝒮_d^0.
§.§.§ Step 2 - Estimation of H_BL
In this step, we minimize our objective function w.r.t. H_BL by using Ĝ_AB^ℋ_1 and the observations Y_j, where j ∈ 𝒮_d^1.
To estimate H_BL, we apply the following steps. We first express the scaled projection matrix as
P_s = Λ P = Λ Q Q^H
in terms of its eigenvalue decomposition, where Λ = √(M/(M−K)) and Q is an M × (M−K) matrix that
satisfies Q^H Q = I.
Note that rank(P_s) = M−K.
The minimization problem has two constraints: (1) H_BL = Λ g_CB g_AC^T Q Q^H is a rank-1 matrix, and (2) H_BL^H lies in the orthogonal complement of V_K, i.e., H_BL^H ∈ C(P).
To deal with this problem, we first estimate H_BL' = g_CB g_AC^T Q, which is an unconstrained rank-1 matrix, as follows:
Ĥ_BL' = min_{H_BL'} ∑_{j∈𝒮_d^1} ||Y_j − Ĝ_AB^ℋ_1 P_s Ψ − γ_j Λ g_CB g_AC^T P Ψ||^2
(a)= min_{H_BL'} ||Y_DL − Λ g_CB g_AC^T Q Q^H D||^2
= min_{H_BL'} ( ||Y_DL||^2 + ||Λ g_CB g_AC^T Q Q^H D||^2 − 2Re{Tr{Λ D^H Q Q^H g_AC^* g_CB^H Y_DL}} )
= min_{H_BL'} ( Tr{Λ^2 g_CB g_AC^T Q Q^H D D^H Q Q^H g_AC^* g_CB^H} − 2Re{Tr{Λ Y_DL D^H Q Q^H g_AC^* g_CB^H}} )
= min_{H_BL'} ( J_d^1 α_d Λ^2 ||g_CB g_AC^T Q||^2 − 2Re{Tr{Λ Y_DL D^H Q Q^H g_AC^* g_CB^H}} )
= min_{H_BL'} { ||g_CB g_AC^T Q||^2 − 2Re{Tr{Y_DL D^H Q Q^H g_AC^* g_CB^H}} / (J_d^1 α_d Λ) }
= min_{H_BL'} || g_CB g_AC^T Q − (1/(J_d^1 α_d Λ)) Y_DL D^H Q ||^2.
In (a), we have γ_j = 1 for j ∈ 𝒮_d^1, and the block matrix Y_DL of dimension N × J_d^1 τ_d is created by ordering {Y_j − Ĝ_AB^ℋ_1 P_s Ψ} in increasing order based on the index j.
D is an M × J_d^1 τ_d-dimensional block matrix of Ψs: D = [Ψ … Ψ], where D D^H = J_d^1 α_d I_M.
We look for the best rank-one fit, in the Frobenius-norm sense, in Eq. (<ref>).
The solution is given by the first term of the SVD of (1/(J_d^1 α_d Λ)) Y_DL D^H Q <cit.>:
Ĥ_BL' = u_1 δ_1 v_1^H,
where u_1, v_1, and δ_1 are the dominant left singular vector, the dominant right singular vector, and the dominant singular value, respectively.
Using Eq. (<ref>), the estimate of H_BL that maximizes the numerator in the GLRT is given by
Ĥ_BL = Λ Ĥ_BL' Q^H = Λ u_1 δ_1 v_1^H Q^H,
where Ĥ_BL^H lies in C(P) because the eigenvectors of the projection matrix, i.e., the columns of Q, lie in C(P).
§.§.§ Step 3 - Estimation of G_AB
In Eq. (<ref>), the initial estimate of G_AB is calculated without using the observations Y_j, where j ∈ 𝒮_d^1. After the estimation of H_BL in Step 2, we can use all observations to estimate G_AB (similar to Eq. (<ref>)) as follows:
Ĝ_AB^ℋ_1 = ( ∑_{j∈𝒮_p} Φ^* (Y_j^p)^T + ∑_{j∈𝒮_d} (Y_j − γ_j Ĥ_BL Ψ) Ψ^H P_s ) (α_p J_p I_M + α_d J_d P_s)^{-1}.
§.§.§ Step 4 - Iteration
Finally, we iteratively estimate H_BL and Ĝ_AB^ℋ_1 by using Eqs. (<ref>) and (<ref>) until the Frobenius norm of the difference between two successive estimates of Ĝ_AB^ℋ_1 is smaller than a threshold.
A summary of our proposed algorithm is given in Algorithm 1. [Note that, if we assume that the pilot signal is sent from PanA to PanB, instead of from PanB to PanA in P1, the computational complexity of Algorithm 1 will remain the same.]
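A schematic (non-optimized) implementation of Steps 1–4 could look as follows (Python/NumPy sketch; estimate_G_AB is a hypothetical helper implementing the closed-form LS estimates of Eqs. (<ref>) and (<ref>), gamma holds the known reflection-coefficient pattern of P2, and the variable names are ours):

import numpy as np

def cyclic_estimation(Y_d, gamma, Y_p_list, Phi, P_s, Psi, Lam, Q,
                      alpha_p, alpha_d, tol=1e-6, max_iter=50):
    # Step 1: initial G_AB estimate from P1 and the gamma_j = 0 slots of P2
    idx0 = [j for j, g in enumerate(gamma) if g == 0]
    idx1 = [j for j, g in enumerate(gamma) if g == 1]
    G_hat = estimate_G_AB(Y_p_list, Phi, [Y_d[j] for j in idx0], P_s, Psi, alpha_p, alpha_d)
    H_BL = np.zeros((Y_d[0].shape[0], P_s.shape[0]), dtype=complex)
    for _ in range(max_iter):
        # Step 2: rank-one fit of H_BL from the gamma_j = 1 slots
        Y_DL = np.hstack([Y_d[j] - G_hat @ P_s @ Psi for j in idx1])
        D = np.hstack([Psi] * len(idx1))
        T = (Y_DL @ D.conj().T @ Q) / (len(idx1) * alpha_d * Lam)
        u, s, vh = np.linalg.svd(T, full_matrices=False)
        H_BL_new = Lam * s[0] * np.outer(u[:, 0], vh[0, :]) @ Q.conj().T
        # Step 3: refine G_AB using all observations with the BD contribution removed
        G_new = estimate_G_AB(Y_p_list, Phi,
                              [Y_d[j] - gamma[j] * H_BL_new @ Psi for j in range(len(Y_d))],
                              P_s, Psi, alpha_p, alpha_d)
        # Step 4: stop when successive G_AB estimates change little (Frobenius norm)
        converged = np.linalg.norm(G_new - G_hat, 'fro') < tol
        G_hat, H_BL = G_new, H_BL_new
        if converged:
            break
    return G_hat, H_BL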
§.§ Modified Estimator for the Non-Jointly Calibrated Case
Throughout the above derivation, we have assumed that PanA and PanB are jointly reciprocity-calibrated such that
G_AB= G_BA^T.
In practice, such calibration can be achieved by distribution of a phase reference over a cable that connects the two panels, or by bi-directional intra- and inter- panel over-the-air measurements <cit.>.
However, if such calibration has not been performed, and the panels are only individually calibrated for reciprocity, there will be an unknown residual phase offset, say φ, between the panels such that
G_AB = e^{jφ} G_BA^T. In this scenario, the projection matrix designed in Section <ref> will cancel the DLI similar to when joint reciprocity calibration is assumed (since the phase offset is only a complex scalar); however, we cannot directly use the signal received in P1 together with the signals received in P2 for the detection of the BD symbol. A simple remedy in this case is to base the detector on only the signals received during P2. The corresponding algorithm is worked out in Appendix <ref>, and is principally different from Algorithm 1 developed above, as G_AB is unidentifiable (even in the noise-free case) when only data from P2 are available; see the appendix for
details.[In addition, it is also possible to use the modified estimator to decrease the backhaul overhead in the case of limited backhaul link capacity because there is no need to send the received signal in P1 over the backhaul link.]
§.§ Approximate GLRT Detector
Since Ĝ_AB^ℋ_1 and Ĥ_BL are not maximum-likelihood estimates, inserting the estimates of G_AB and H_BL obtained above, we obtain an approximation of the GLRT detector in Eq. (<ref>).
Specifically, the resulting (approximate) GLRT detector is given in Eq. (<ref>).
GLR = [ ∏_{j∈𝒮_p} p(Y_j^p | Ĝ_AB^ℋ_1) ∏_{j∈𝒮_d} p(Y_j | ℋ_1, Ĝ_AB^ℋ_1, Ĥ_BL, γ_j) ] / [ ∏_{j∈𝒮_p} p(Y_j^p | Ĝ_AB^ℋ_0) ∏_{j∈𝒮_d} p(Y_j | ℋ_0, Ĝ_AB^ℋ_0) ] ≷_{ℋ_0}^{ℋ_1} η.
Defining A_j, B, C_1, and C_2 as follows,
A_j = Ĝ_AB^ℋ_1 P_s Ψ + γ_j Ĥ_BL Ψ,
B = Ĝ_AB^ℋ_0 P_s Ψ,
C_1 = (Ĝ_AB^ℋ_1)^T Φ,
C_2 = (Ĝ_AB^ℋ_0)^T Φ,
we can write log(GLR) as
log(GLR) = −∑_{j∈𝒮_d} ||Y_j − Ĝ_AB^ℋ_1 P_s Ψ − γ_j Ĥ_BL Ψ||^2
+ ∑_{j∈𝒮_d} ||Y_j − Ĝ_AB^ℋ_0 P_s Ψ||^2
− ∑_{j∈𝒮_p} ||Y_j^p − (Ĝ_AB^ℋ_1)^T Φ||^2
+ ∑_{j∈𝒮_p} ||Y_j^p − (Ĝ_AB^ℋ_0)^T Φ||^2
= −∑_{j∈𝒮_d} Tr{(Y_j − A_j)(Y_j − A_j)^H}
+ ∑_{j∈𝒮_d} Tr{(Y_j − B)(Y_j − B)^H}
− ∑_{j∈𝒮_p} Tr{(Y_j^p − C_1)(Y_j^p − C_1)^H}
+ ∑_{j∈𝒮_p} Tr{(Y_j^p − C_2)(Y_j^p − C_2)^H}
= ∑_{j∈𝒮_d} ( 2Re{Tr{Y_j (A_j − B)^H}} − ||A_j||^2 + ||B||^2 )
+ ∑_{j∈𝒮_p} ( 2Re{Tr{Y_j^p (C_1 − C_2)^H}} − ||C_1||^2 + ||C_2||^2 ).
Finally, we can write the hypothesis test as follows:
∑_{j∈𝒮_d} ( 2Re{Tr{Y_j (A_j − B)^H}} − ||A_j||^2 + ||B||^2 )
+ ∑_{j∈𝒮_p} ( 2Re{Tr{Y_j^p (C_1 − C_2)^H}} − ||C_1||^2 + ||C_2||^2 )
≷_{ℋ_0}^{ℋ_1} log(η).
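The resulting decision rule is straightforward to evaluate numerically; the sketch below (Python/NumPy, with our own variable names) assumes the estimates under both hypotheses have already been computed as described above:

import numpy as np

def log_glr(Y_d, gamma, Y_p_list, Phi, P_s, Psi, G_h1, H_BL, G_h0):
    stat = 0.0
    B = G_h0 @ P_s @ Psi
    C1 = G_h1.T @ Phi
    C2 = G_h0.T @ Phi
    for Y, g in zip(Y_d, gamma):                 # P2 observations
        A = G_h1 @ P_s @ Psi + g * H_BL @ Psi
        stat += 2 * np.real(np.trace(Y @ (A - B).conj().T))
        stat += -np.linalg.norm(A, 'fro')**2 + np.linalg.norm(B, 'fro')**2
    for Yp in Y_p_list:                          # P1 observations
        stat += 2 * np.real(np.trace(Yp @ (C1 - C2).conj().T))
        stat += -np.linalg.norm(C1, 'fro')**2 + np.linalg.norm(C2, 'fro')**2
    return stat                                  # decide H_1 if stat > log(eta)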
§ NUMERICAL RESULTS
In this section, we first provide the simulation parameters and then discuss numerical results. We assume that there is a specular reflector that causes a SMC. The channels are modeled as [In [1], in the numerical results, we used a far-field channel model which was not accurate given the wavelength and the distances in the setup. However, in this journal paper, we have addressed this issue by using the near-field channel model, as shown in Eq. (30). Note that, the far-field channel model and all the numerical results in [1] would remain the same and valid for a different sufficiently small value of λ satisfying the far-field assumption.]
[G_AB]_{n,m} = [G_BA^T]_{n,m} = √(β_{m,n}) e^{−j 2π/λ d_{m,n}} + g_SMC √(β'_{m,n}) e^{−j 2π/λ d'_{m,n}},
[g_AC]_m = [g_CA]_m = √(β_m) e^{−j 2π/λ d_m} + g_SMC √(β'_m) e^{−j 2π/λ d'_m},
[g_CB]_n = [g_BC]_n = √(β_n) e^{−j 2π/λ d_n} + g_SMC √(β'_n) e^{−j 2π/λ d'_n},
where m ∈ {1,2,…,M}, n ∈ {1,2,…,N}, g_SMC is the amplitude gain of the SMC, λ is the wavelength of the emitted signal,
and [G_AB]_{n,m}, [g_AC]_m, and [g_CB]_n are the (n,m)^th element of G_AB,
m^th element of g_AC, and n^th element of g_CB, respectively. The path-gain coefficients are defined as
β_{m,n} = 1/d_{m,n}^2, β'_{m,n} = 1/(d'_{m,n})^2,
β_m = 1/d_m^2, β'_m = 1/(d'_m)^2,
β_n = 1/d_n^2, β'_n = 1/(d'_n)^2,
where d_{m,n}, d_m, and d_n stand for the LoS path lengths between the m^th antenna in PanA - the n^th antenna in PanB, the m^th antenna in PanA - the BD, and the n^th antenna in PanB - the BD, respectively. The non-LoS path lengths d'_{m,n}, d'_m, and d'_n are defined similarly.
We choose a uniform linear array at both PanA and PanB.
Unless otherwise stated, we use the following parameters: J_d=2, τ_d=16, J_p=1, τ_p=16, M=16, N=16, λ=0.1 m, and d_ant=0.5, where d_ant denotes the inter-antenna distance normalized by the carrier wavelength. We select the reflection coefficients at the BD as γ_j = 0 for j ∈ 𝒮_p in P1. In P2, γ_j ∈ {0,1} for j ∈ 𝒮_d, and we have the same number of γ_j=0 and γ_j=1, i.e., |𝒮_d^0| = |𝒮_d^1| = J_d/2. The SNR during the channel estimation phase is defined as SNR_p = β̄_BA p_t J_p τ_p, where p_t is the transmit power and β̄_BA = ||G_AB||^2/(MN). The SNR during the detection of the BD symbol is SNR_d = β̄_CB β̄_AC p_t J_d τ_d γ̄, where γ̄ = 0.5 is the average value of the reflection coefficients in P2, and β̄_AC = ||g_AC||^2/M and β̄_CB = ||g_CB||^2/N.
The centers of PanA and PanB are located at positions with coordinates (0,0) and (6,0) in meters, respectively. The specular reflector is located along the x-axis at y=-4 m. Table <ref> lists all simulation parameters.
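The channel realizations used in the simulations can be generated along the lines of the following sketch (Python/NumPy; coordinates are in meters, the specular reflector is modeled by mirroring the transmit position across the line y = -4 m, and the helper names are ours):

import numpy as np

def los_plus_smc(p_tx, p_rx, lam, g_smc, y_reflector=-4.0):
    # One channel entry: LoS term plus one specular multipath component (image source)
    d = np.linalg.norm(p_rx - p_tx)
    p_tx_mirror = np.array([p_tx[0], 2 * y_reflector - p_tx[1]])
    d_smc = np.linalg.norm(p_rx - p_tx_mirror)
    los = (1.0 / d) * np.exp(-1j * 2 * np.pi / lam * d)          # sqrt(beta) = 1/d
    smc = g_smc * (1.0 / d_smc) * np.exp(-1j * 2 * np.pi / lam * d_smc)
    return los + smc

def channel_matrix(pos_tx, pos_rx, lam, g_smc):
    # pos_tx: M x 2 antenna coordinates, pos_rx: N x 2 antenna coordinates -> N x M matrix
    return np.array([[los_plus_smc(pt, pr, lam, g_smc) for pt in pos_tx] for pr in pos_rx])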
In Figs. <ref> and <ref>, we investigate the total radiated energy from PanA during a slot duration in P2, which is given by
E_t(θ) ≜ ||g(θ)^T P_s Ψ||^2 = α_d (M/(M−K)) ||g(θ)^T P||^2,
where g(θ) ∈ ℂ^{M×1} is the steering vector defined as follows:
g(θ) = [ 1; exp(j 2π d_ant sin(θ)); ⋮; exp(j (M−1) 2π d_ant sin(θ)) ],
and θ is the angle of departure of the transmitted signal.
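Numerically, the pattern E_t(θ) can be obtained directly from the scaled projector (Python/NumPy sketch under the steering-vector model above, with normalized spacing d_ant = 0.5; names are ours):

import numpy as np

def radiated_energy(theta, P_s, alpha_d, M, d_ant=0.5):
    m = np.arange(M)
    g = np.exp(1j * 2 * np.pi * d_ant * m * np.sin(theta))   # steering vector g(theta)
    return alpha_d * np.linalg.norm(g @ P_s)**2               # E_t(theta) = ||g^T P_s Psi||^2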
We select SNR_d = 2 dB, and assume that β̄_AC = 1 and β̄_CB = 1. The projection matrix is designed based on PCSI, i.e., Ĝ_AB = G_AB, by PanA for all scenarios except the no-projection case, i.e., P_s = I_M. Note that the number of antennas in PanA and PanB does not affect the antenna radiation pattern for the no-projection case.
In Fig. <ref>, we compare the antenna radiation patterns for the three different cases: (1) M=8, N=16, (2) M=16, N=16, and (3) M=16, N=8. PanB is located at 0°, and in the first and third cases, the singular values of G_AB are (1.81, 0.59, 0.47, …), and we select K=2. In the second case, the singular values of G_AB are (2.33, 1.25, 0.79, 0.30, …), and we select K=3.
As shown in the figure, the direct link, which is the LoS link plus SMC, is canceled due to the use of P_s except for the no-projection case. In addition, with increasing M, the beamforming accuracy and consequently the coverage area after projection increase in this particular setup.
In Fig. <ref>, we compare the antenna radiation patterns for varying values of K for M=N=16.
As seen in the figure, we can cancel more components of the direct link channel with increasing K. For example, the dominant directions of the LoS link are canceled for K=1 and K=2, and the dominant direction of the SMC is also canceled for K=3. This is because the first two dominant right singular vectors of G_AB are mostly associated with the LoS link, while the third dominant right singular vector is mostly associated with the SMC. Note that the dominant K right singular vectors of G_AB are used for designing the projection matrix.
In Fig. <ref>, we show the ratio ζ given in Eq. (<ref>) for different BD locations to investigate the change in the dynamic range in P2 for varying K and SNR_p values. The BD is located at (3,y), where we change the vertical position of the BD, y, between 0 and 30 meters. For the perfect-projection (PePj) case, we assume that Ĝ_AB = G_AB, for the no-projection case, P_s = I_M, and for the imperfect-projection (ImPj) case, we design P_s based on Ĝ_AB.
As seen in Fig. <ref>, for y=0, the projection matrix cannot affect the dynamic range as intended since the BD is located in the broadside direction of PanA. However, the dynamic range, ζ, decreases with increasing K when y>0 because we can cancel more components of the direct link channel.
For example, at y=10 m, ζ is 28.32 dB, 23.10 dB, 18.18 dB, 10.14 dB, and 2.94 dB for the no-projection, K=1, K=2, K=3 (PePj), and K=4 cases, respectively.
For y<23 m, there are fluctuations in the dynamic range curves due to the constructive and destructive interference between the LoS link and the SMC. However, after about 23 m, d'_m ≈ d_m + c_1 and d'_n ≈ d_n + c_2, and consequently [g_CB]_n ≈ c_3 e^{−j 2π/λ d_n} and [g_AC]_m ≈ c_4 e^{−j 2π/λ d_m}, where c_1,…,c_4 are some constants. As a result, the channels g_CB and g_AC behave like a LoS channel without multipath. Therefore, the dynamic range curves are smoother beyond y=23 m for this particular setup.
In Fig. <ref>, we also show the effect of the imperfect projection on the dynamic range for K=3. As seen from the figure, the dynamic range, ζ, decreases with increasing SNR_p when y>0.
For example, at y=10 m, ζ is 25.39 dB at SNR_p = 5 dB and 14.38 dB at SNR_p = 20 dB.
This is because the projection matrix designed with high SNR_p values has a better capability to decrease the DLI.
In practice, with a decreased dynamic range, it is possible to use low-resolution ADCs which are more cost and energy efficient than high-resolution ADCs.
In Fig. <ref>, simulation results for the hypothesis test in Eq. (<ref>) are shown. We use the GLRT detector in Eq. (<ref>).
A triangular setup is used with the BD located at (3,3) meters. We consider three different scenarios: (1) perfect projection, i.e., P_s is designed based on Ĝ_AB = G_AB, (2) imperfect projection, i.e., P_s is designed based on the estimated channel, and (3) no projection, i.e., P_s = I_M. In all scenarios, we select K=3, SNR_d=2 dB, and we use two different SNR_p values, 5 dB and 20 dB, in P1 to investigate the effect of the scaled projection matrix P_s on the probability of detection (P_D) and the probability of false alarm (P_FA) of the BD. Note that, although we do not estimate G_AB using the pilot signal for the perfect projection and no projection cases, we send the pilot signal in P1 to use it in the detection phase (P2), for a fair comparison.
Compared to the system which works at low SNR_p values, e.g., 5 dB, the system which works at high SNR_p values, e.g., 20 dB, is superior in terms of projecting the transmitted signal onto the nullspace of the dominant direction of G_AB because Ĝ_AB in P1 is more accurate at high SNR_p values, and this affects the projection matrix accuracy.
As seen in the figure, the detection performance of the perfect and imperfect projection cases is almost the same at SNR_p = 20 dB, while there is a performance difference between these two cases at SNR_p = 5 dB.
In addition, the performance of both the perfect and imperfect projection cases is better than that of the no projection case for each given SNR_p value. This is because the radiated power in the directions which are close to the dominant directions of G_AB decreases while the emitted power in all other directions increases due to the use of P_s. This phenomenon can also be seen in Fig. <ref>. Compared to the no-projection case, for K=3 with the perfect projection case, the radiated power in all directions except the direct link is higher. For example, in Fig. <ref> at P_FA = 0.1, the imperfect projection case has almost 0.09 and 0.19 gain in the probability of detection when compared to the no projection case at SNR_p = 5 dB and SNR_p = 20 dB, respectively.
§ CONCLUSION
In this paper, we proposed a novel transmission scheme to be used in a bistatic BC setup with multiple reader and CE antennas.
The transmission scheme first estimates the channel between the CE (PanA) and the reader (PanB), and then uses this estimated channel to beamform the transmission from PanA.
In this beamforming, we propose to apply a specially designed projection matrix whose effect is to mitigate the DLI and decrease the required dynamic range of the reader circuitry.
We showed that the dynamic range is significantly decreased by introduction of the projection matrix in the beamforming, which in turn
enables the use of low-resolution ADCs which are low-cost and energy-efficient.
Furthermore, we derived a detection algorithm based on a GLRT to detect the symbol/presence of the BD at the reader.
Joint usage of the proposed transmission scheme and detection algorithm results in a BiBC system with improved detection performance. This can be seen as a baseline approach to operate BiBC systems where both the CE and the reader are equipped with multiple antennas.
§ PROOF OF PROPOSITION 1
Since ΨΨ^H = α_d I_M, the following two minimization problems are equivalent:
min_{G_AB} ||Y_j − G_AB P_s Ψ||^2
=
min_{G_AB} ||Y_{2,j} − √(α_d) G_AB P_s||^2.
To prove that, we first express
||Y_j − G_AB P_s Ψ||^2 = ||Y_j||^2 + ||G_AB P_s Ψ||^2 − 2Re{Tr{Y_j Ψ^H P_s G_AB^H}},
||Y_{2,j} − √(α_d) G_AB P_s||^2 = ||Y_j Ψ^H / √(α_d) − √(α_d) G_AB P_s||^2
= ||Y_j Ψ^H||^2/α_d + α_d ||G_AB P_s||^2 − 2Re{Tr{Y_j Ψ^H P_s G_AB^H}}.
Therefore,
||Y_j − G_AB P_s Ψ||^2 = ||Y_{2,j} − √(α_d) G_AB P_s||^2 + c,
where c = ||Y_j||^2 − ||Y_j Ψ^H||^2/α_d depends on Y_j and Ψ^H, but not on G_AB, and hence does not affect the minimizer in Eq. (<ref>). We can also apply the same calculation to ||Y_j^p − G_BA Φ||^2 and ||Y_{1,j} − √(α_p) G_AB||^2 since ΦΦ^H = α_p I_N.
In addition, one can show that the elements of the matrices Φ^*(W_j^p)^T / √(α_p) and W_j Ψ^H / √(α_d) are i.i.d. 𝒞𝒩(0,1). Therefore, the problems in Eqs. (<ref>) and (<ref>) are equivalent.
§ ESTIMATION OF UNKNOWN PARAMETERS
When PanB does not know the received signal in P1, the hypothesis test becomes
ℋ_0: Y_j = G_AB P_s Ψ + W_j
ℋ_1: Y_j = G_AB P_s Ψ + γ_j g_CB g_AC^T P_s Ψ + W_j,
where j ∈ 𝒮_d.
The GLRT to detect the symbol/presence of the BD is as follows:
max_{H_DL, H_BL} ∏_{j∈𝒮_d} p(Y_j | ℋ_1, H_DL, H_BL, γ_j) / max_{H_DL} ∏_{j∈𝒮_d} p(Y_j | ℋ_0, H_DL) ≷_{ℋ_0}^{ℋ_1} η,
where
H_DL = G_AB P_s
is an N × M matrix.
Under ℋ_0, G_AB is the unknown parameter, but the estimate of G_AB is not unique because P_s is not a full-rank matrix. We require, however, only an estimate of H_DL = G_AB P_s, rather than of G_AB, to find the maximum of the denominator in Eq. (<ref>).
We express the scaled projection matrix as
P_s = Λ P = Λ Q Q^H.
To estimate H_DL, we first estimate G_AB Q as follows:
Ĝ_ABQ = max_{G_ABQ} ∏_{j∈𝒮_d} p(Y_j | ℋ_0, H_DL)
= min_{G_ABQ} ∑_{j∈𝒮_d} ||Y_j − Λ G_AB Q Q^H Ψ||^2
= min_{G_ABQ} ∑_{j∈𝒮_d} ( ||Y_j||^2 + ||Λ G_AB Q Q^H Ψ||^2 − 2Re{Tr{Λ Ψ^H Q Q^H G_AB^H Y_j}} )
= min_{G_ABQ} ∑_{j∈𝒮_d} ( Tr{Λ^2 G_AB Q Q^H Ψ Ψ^H Q Q^H G_AB^H} − 2Re{Tr{Λ Y_j Ψ^H Q Q^H G_AB^H}} )
= min_{G_ABQ} ∑_{j∈𝒮_d} ( Λ^2 α_d ||G_AB Q||^2 − 2Re{Tr{Λ Y_j Ψ^H Q Q^H G_AB^H}} )
= min_{G_ABQ} { J_d ||G_AB Q||^2 − 2 ∑_{j∈𝒮_d} Re{Tr{Λ Y_j Ψ^H Q Q^H G_AB^H}} / (Λ^2 α_d) }
= min_{G_ABQ} || G_AB Q − (1/(J_d Λ α_d)) ∑_{j∈𝒮_d} Y_j Ψ^H Q ||^2
= (1/(J_d Λ α_d)) ∑_{j∈𝒮_d} Y_j Ψ^H Q.
We need to estimate H_DL = G_AB P_s = Λ G_AB Q Q^H subject to H_DL^H ∈ C(P) under ℋ_0. Using Eq. (<ref>), the estimate of H_DL that maximizes the denominator in the GLRT is given by
Ĥ_DL^ℋ_0 = Ĝ_AB P_s = Λ Ĝ_ABQ Q^H = (1/(J_d α_d)) ∑_{j∈𝒮_d} Y_j Ψ^H P,
where (Ĥ_DL^ℋ_0)^H lies in C(P).
The estimate of H_DL under ℋ_1 needs to be calculated similarly to Eqs. (<ref>) and (<ref>) with small modifications. The estimation algorithm for H_BL in Section <ref> can be directly used.
Ahmet Kaplan (S'20) was born in Istanbul, Turkey, in 1994. He received the B.Sc. and M.Sc. degrees (Hons.), in electronics and communication engineering, from the Istanbul Technical University, Istanbul, Turkey, in 2017 and 2020, respectively. From 2017 to 2019, he was a 5G Research Engineer with Turkcell, Istanbul, Turkey. He is currently a Research and Teaching Assistant with Linköping University. His research interests include MIMO, backscatter communication, and low-density parity-check coding. He received the ICTC 2020 excellent paper award.
Joao Vieira received the PhD degree from Lund University, Sweden, in 2017. Since then, he has been with Ericsson Research investigating different 6G candidate technologies, especially those comprising large antenna arrays such as co-located and distributed massive MIMO.
Erik G. Larsson (S'99–M'03–SM'10–F'16) received the Ph.D. degree from Uppsala University, Uppsala, Sweden, in 2002. He is currently Professor of Communication Systems at Linköping University (LiU) in Linköping, Sweden. He was with the KTH Royal Institute of Technology in Stockholm, Sweden, the George Washington University, USA, the University of Florida, USA, and Ericsson Research, Sweden. His main professional interests are within the areas of wireless communications and signal processing. He co-authored Space-Time Block Coding for Wireless Communications (Cambridge University Press, 2003) and Fundamentals of Massive MIMO (Cambridge University Press, 2016).
He served as chair of the IEEE Signal Processing Society SPCOM technical committee (2015–2016),
chair of the IEEE Wireless Communications Letters steering committee (2014–2015),
member of the IEEE Transactions on Wireless Communications steering committee (2019-2022),
General and Technical Chair of the Asilomar SSC conference (2015, 2012),
technical co-chair of the IEEE Communication Theory Workshop (2019),
and member of the IEEE Signal Processing Society Awards Board (2017–2019).
He was Associate Editor for, among others, the IEEE Transactions on Communications (2010-2014), the IEEE Transactions on Signal Processing (2006-2010), and the IEEE Signal Processing Magazine (2018-2022).
He received the IEEE Signal Processing Magazine Best Column Award twice, in 2012 and 2014, the IEEE ComSoc Stephen O. Rice Prize in Communications Theory in 2015, the IEEE ComSoc Leonard G. Abraham Prize in 2017, the IEEE ComSoc Best Tutorial Paper Award in 2018, and the IEEE ComSoc Fred W. Ellersick Prize in 2019.
|
http://arxiv.org/abs/2306.01487v2
|
20230602122329
|
Quantitative Graded Semantics and Spectra of Behavioural Metrics
|
[
"Jonas Forster",
"Lutz Schrรถder",
"Paul Wild"
] |
cs.LO
|
[
"cs.LO",
"03B45, 03B52, 68Q85",
"F.4.1"
] |
Concurrent Classifier Error Detection (CCED) in Large Scale Machine Learning Systems
Pedro Reviriego, Ziheng Wang, รlvaro Alonso, Zhen Gao, Farzad Niknia, Shanshan Liu and Fabrizio Lombardi
P. Reviriego is with Universidad Politรฉcnica de Madrid, 28040 Madrid, Spain. Email: [email protected].
Z. Wang is with Northeastern University, Dept. of ECE, Boston, MA 02115, USA. Email: [email protected].
A. Alonso is with Universidad Politรฉcnica de Madrid, 28040 Madrid, Spain. Email: [email protected].
Z. Gao is with Tianjin University, Tianjin 300072, China. Email: [email protected].
F. Niknia is with Northeastern University, Dept. of ECE, Boston, MA 02115, USA. Email: [email protected].
S. Liu is with New Mexico State University, Klipsch School of ECE, Las Cruces, NM 88003, USA. Email: [email protected].
F. Lombardi is with Northeastern University, Dept. of ECE, Boston, MA 02115, USA. Email: [email protected].
July 31, 2023
Behavioural metrics provide a quantitative refinement of classical two-valued behavioural equivalences on systems with quantitative data, such as metric or probabilistic transition systems. In analogy to the classical linear-time/branching-time spectrum of two-valued behavioural equivalences on transition systems, behavioural metrics come in various degrees of granularity, depending on the observer's ability to interact with the system. Graded monads have been shown to provide a unifying framework for spectra of behavioural equivalences. Here, we transfer this principle to spectra of behavioural metrics, working at a coalgebraic level of generality, that is, parametrically in the system type. In the ensuing development of quantitative graded semantics, we discuss presentations of graded monads on the category of metric spaces in terms of graded quantitative equational theories. Moreover, we obtain a canonical generic notion of invariant real-valued modal logic, and provide criteria for such logics to be expressive in the sense that logical distance coincides with the respective behavioural distance. We thus recover recent expressiveness results for coalgebraic branching-time metrics and for trace distance in metric transition systems; moreover, we obtain a new expressiveness result for trace semantics of fuzzy transition systems. We also provide a number of salient negative results. In particular, we show that trace distance on probabilistic metric transition systems does not admit a characteristic real-valued modal logic at all.
§ INTRODUCTION
While qualitative models of concurrent systems are
traditionally analysed using various notions of two-valued process
equivalence, it has long been recognized that for systems involving
quantitative data, notions of behavioural distance play a
useful role as a more fine-grained measure of process
similarity. Well-known examples include behavioural distances on
probabilistic transition
systemsย <cit.>,
on systems combining nondeterminism and
probabilityย <cit.>, and on metric transition
systemsย <cit.>. Like in
the two-valued case, where a wide range of process equivalences of
varying granularity is arranged on the so-called
linear-time/branching-time
spectrumย <cit.>, one has a spectrum of
behavioural metrics on a given system that vary in granularity (with
greater distances thought of as having finer
granularity)ย <cit.>. In the present
work, we provide a general framework for behavioural metrics on
quantitative systems that is parametric on the one hand in the
type of systems (e.g. probabilistic, weighted, fuzzy), and on
the other hand in the quantitative semantics of a systems,
i.e. in the granularity of behavioural distance. For
parametricity in the system type, we rely on universal
coalgebraย <cit.>, in which the
transition type of systems is abstracted as an endofunctor on a
suitable base category. Parametricity in the system semantics, on the
other hand, is based on the framework of graded
monadsย <cit.>, which handles additional semantic
identifications (beyond branching-time equivalence) by algebraic
means, using grades to control the depth of look-ahead. For the
syntactic treatment of spectra of behavioural distances, we introduce
a graded extension of quantitative
algebraย <cit.> that allows describing
graded monads on the category of metric spaces by operations and
approximate equations.
We then focus on providing characteristic real-valued modal
logics for behavioural distances, in the sense that logical
distance should coincide with the respective behavioural distance;
this amounts to a quantitative form of the Hennessy-Milner property,
which we briefly refer to as expressiveness. A prototypical
example is quantitative probabilistic modal logic, which is
characteristic for branching-time behavioural distance on
probabilistic transition systemsย <cit.>,
so that high behavioural distance may be certified by means of
distinguishing modal formulasย <cit.>. We consider a
notion of logical distance induced by a general form of
quantitative graded logic, building on previous work in the
two-valued
settingย <cit.>. Quantitative
graded logics are always invariant under the underlying
behavioural distance in the sense that formula evaluation is
nonexpansive, so that logical distance is below behavioural
distance. We provide a general criterion for the reverse inequality, i.e. for expressiveness of quantitative graded logics; this
criterion builds on similar criteria for the two-valued
caseย <cit.> but
needs to deal with issues that are quite specific to the metric
setting. In particular, it needs to be parametric in a strengthening
of the inductive hypothesis in the induction on depth of look-ahead
that it encapsulates; indeed, this happens already in strikingly
simple cases. We develop a number of example applications: We
partially recover results on expressiveness of quantitative modal
logics for (finite-depth) branching-time
distances
ย <cit.>,
as well as a recent result on expressiveness of a quantitative modal
logic for trace distance in metric transition
systemsย <cit.>; and we establish a
new expressiveness result for a modal logic for fuzzy trace distance.
Beyond these positive results, we establish salient negative results:
* Characteristic modal logics for branching-time behavioural
metrics are often dense in the space of all nonexpansive
properties with bounded look-ahead; this is based on variants of the
Stone-Weierstraร
theoremย <cit.>. In
the two-valued setting, Stone-Weierstraร-type properties still apply to logics for
coarser equivalences, e.g. trace equivalence; however, we show that
they may fail for behavioural distances coarser than
branching time.
* Quantitative probabilistic modal
logicย <cit.> does not have a
compositionally defined fragment that
is expressive for probabilistic trace distance. Moreover,
for probabilistic metric trace distance (in
which the set of labels carries a metric), there is no
expressive quantitative modal logic, in a broadly defined sense, at
all.
Related Work
We have already mentioned previous work
on coalgebraic branching-time behavioural
distancesย <cit.>
and on graded semantics for two-valued behavioural equivalences and
preordersย <cit.>. Kupke
and Rotย <cit.> study logics for
coinductive predicates, which generalize branching-time
behavioural distances. Expressiveness of the logic for trace distance
in metric transition systems has been shown recently using Galois
connectionsย <cit.>. We leave an
in-depth comparison of this approach and ours to future work; roughly
speaking, the setting of Galois connections is very general but leaves
more work to the instance logics than the framework of graded
monads.
Alternative coalgebraic approaches to process equivalences coarser
than branching time include coalgebraic trace semantics in
Kleisliย <cit.> and Eilenberg-Moore
categoriesย <cit.>, which are both
subsumed by the paradigm of graded
monadsย <cit.>, as well as an approach in
which behavioural equivalence are defined via characteristic
logicsย <cit.>. The Eilenberg-Moore and
Kleisli setups can be unified using corecursive
algebrasย <cit.>. The Eilenberg-Moore
approach has been applied to linear-time behavioural
distancesย <cit.>. De Alfaro et
al.ย <cit.> introduce a linear-time logic
for (state-labelled) metric transition systems.
The semantics of this logic is defined by first computing the set of
paths of a system, so that propositional operators and modalities have
a different meaning than in corresponding branching-time logics, while
our graded logics are fragments of branching-time logics. Castiglioni
et alย <cit.> consider a two-valued
logic for probabilistic trace semantics (for a discrete set of
labels), from which they induce a trace distance by defining a
distance on formulas. Fahrenberg et al.ย <cit.>
present a game-based approach to a spectrum of behavioural distances
on metric transition systems.
ยง PRELIMINARIES
We briefly recall some background on the category of metric spaces and
on graded monads.
The category of metric spaces
We assume basic
familiarity with category theory
(e.g.ย <cit.>).
The real unit interval [0,1] will serve as the domain of
both distances and truth values. Under the usual ordering โค,
[0,1] forms a complete lattice; we write โ,โ for
joins and meets in [0,1] (e.g. โ_i x_i=sup_i x_i), and
∨,∧ for binary join and meet, respectively. Moreover, we write
โ andย โ for truncated addition and subtraction,
respectively; that is, xโ y=min(x+y,1) and
xโ y=max(x-y,0). These operations form part of a structure of
[0,1] as a (co-)quantale; for readability, we refrain from working
with more general quantalesย <cit.>.
A pseudometric space is a pair (X, d) consisting
of a set X and a function d Xร X โ [0,1] satisfying
the standard conditions of reflexivity (d(x,x)=0 for all
xโ X), symmetry (d(x,y)=d(y,x) for all x,yโ X), and
the triangle inequality (d(x, z)โค d(x,y) + d(y, z) for
all x, y, z โ X); if additionally separation holds (for
x,y โ X, if d(x, y) = 0 then x = y), thenย (X,d) is a
metric space. A function f X โ Y between
pseudometric spaces (X, d_X) and (Y, d_Y) is nonexpansive
if d_Y(f(x), f(y)) โค d_X(x, y) for all x, y โ X. Metric
spaces and nonexpansive maps form a category . A set
X_0โ X is dense in (X,d_X) if for
everyย ฯต>0 and every xโ X, there exists x'โ X_0 such
that d_X(x,x')โคฯต.
We often do not distinguish notationally between a
(pseudo-)metric space (X,d) and its underlying setย X. If multiple
(pseudo-)metric spaces are involved, we sometimes denote the
respective (pseudo-)metric with subscripts, so d_X is the
(pseudo-)metric of the space with carrier X. The categorical
product (X,d_X) ร (Y, d_Y) of (pseudo-)metric spaces equips
the Cartesian product X ร Y with the supremum (pseudo-)metric
d_X ร Y((a, b), (a', b')) = d_X(a, a') โจ d_Y(b,
b'). Similarly, the Manhattan tensor โ equips
Xร Y with the Manhattan (pseudo-)metric
d_Xโ Y((a,b), (a',b')) = d_X(a, a') โ d_Y(b, b'). We may
occasionally write elements of the product A^n+m as vw if
vโ A^n and w โ A^m. A key role in the treatment of
expressiveness of logics will be played by the notion of
initiality. A cone in a category is a family of
morphisms f_i Aโ B_i with joint domainย A. A cone of
nonexpansive maps is initial ifย A carries the (pseudo-)metric
induced from the (pseudo-)metrics d_i onย B_i viaย f_i,
explicitly: d(x,y)=โ_i d_i(f_i(x),f_i(y)). Given
(pseudo-)metric spaces X,Y, the set of nonexpansive functions
Xโ Y forms a (pseudo-)metric space under the standard supremum
distance.
We recall some key examples of functors on the categoryย of
sets and maps, and associated functors onย .
* We write for the finite powerset functor onย ,
andย for the lifting of to given by the
Hausdorff metric. Explicitly,
d_ X(A, B) = (⋁_{a ∈ A} ⋀_{b ∈ B} d_X(a, b)) ∨ (⋁_{b ∈ B} ⋀_{a ∈ A} d_X(a, b))
for a metric space (X, d_X) and A, B โ X.
* Similarly, denotes the functor on that maps a
set X to the set of finitely supported probability distributions
on X, and denotes the lifting of to
that equips X with the Kantorovich metric. Explicitly,
d_ X(μ, ν) = ⋁_f ∑_{x ∈ X} f(x)·(μ(x) − ν(x))
for a metric space (X,d) and ฮผ, ฮฝโ X, where f
ranges over all nonexpansive functions X โ [0, 1]. We often
write elements of X as finite formal sums
โ p_iยท x_i, with x_iโ X and โ p_i=1.
* The finite fuzzy powerset functor _ฯ is given
on setsย X by
_ฯ X={A Xโ [0,1]| A(x)=0 for almost all
xโ X}, and on maps f Xโ Y
by
_ฯ f(A)(y)=โ{A(x)| f(x)=y} for Aโ_ฯ X.
We liftย _ฯ to a functor _ฯ on metric
spaces
that equips _ฯ X with the fuzzy Hausdorff
distanceย <cit.>. Explicitly,
d__ฯ X(A,B)=d_0(A,B)โจ d_0(B,A) for a metric space (X,d) and
A,Bโ_ฯ X, where
d_0(A,B) = ⋁_x ⋀_y (A(x) ⊖ B(y)) ∨ (A(x) ∧ d(x,y)).
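For finite spaces, all three liftings can be computed directly from these formulas. The following Python sketch is purely illustrative (function names, data representations, and the use of scipy are our own choices, not part of the development); the Kantorovich distance is computed in the equivalent optimal-transport form as a minimum-cost coupling, which for [0,1]-bounded metrics agrees with the supremum formula above by Kantorovich-Rubinstein duality.

from itertools import product
from scipy.optimize import linprog

def hausdorff(A, B, d):
    # Hausdorff distance of finite sets A, B over a ground metric d;
    # empty meets/joins in [0,1] are 1 and 0, respectively.
    if not A and not B:
        return 0.0
    if not A or not B:
        return 1.0
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

def kantorovich(mu, nu, d):
    # Kantorovich distance of finitely supported distributions mu, nu
    # (dicts: point -> probability), as a minimum-cost coupling (a linear program).
    xs, ys = list(mu), list(nu)
    cost = [d(x, y) for x, y in product(xs, ys)]
    A_eq, b_eq = [], []
    for i in range(len(xs)):   # row marginals must equal mu
        A_eq.append([1.0 if k // len(ys) == i else 0.0 for k in range(len(cost))])
        b_eq.append(mu[xs[i]])
    for j in range(len(ys)):   # column marginals must equal nu
        A_eq.append([1.0 if k % len(ys) == j else 0.0 for k in range(len(cost))])
        b_eq.append(nu[ys[j]])
    return linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun

def fuzzy_hausdorff(A, B, d):
    # Fuzzy Hausdorff distance of finite fuzzy sets A, B
    # (dicts: point -> membership degree in [0,1]).
    def d0(A, B):
        worst = 0.0
        for x, ax in A.items():
            inner = min((max(max(ax - by, 0.0), min(ax, d(x, y)))
                         for y, by in B.items()), default=ax)
            worst = max(worst, inner)
        return worst
    return max(d0(A, B), d0(B, A))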
Coalgebra
Universal
coalgebraย <cit.> has established itself
as a way to reason about state-based systems at an appropriate level
of abstraction. It is based on encapsulating the transition type of
systems as an endofunctor Gโ on a base
categoryย . Then, a G-coalgebra (X, ฮณ) consists of a
-object X, thought of as an object of states, and a
morphism ฮณ X โ G X, thought of as assigning to each
state a collection of successors, structured according toย G. A
-morphism h X โ Y is a morphism
(X, ฮณ)โ(Y, ฮด) of G-coalgebras if
G h โฮณ = ฮดโ h.
We give some examples of coalgebras on , restricting to
finitely branching systems for brevity. Throughout the paper, we fix
a metric spaceย of labels. In some examples,ย
will be required to be discrete.
* Finitely branching metric transition systems with transition
labels inย are coalgebras for the functor
(ร(-)). (More precisely, a metric transition
system is usually assumed to have a discrete state space, while
(ร(-))-coalgebras more generally carry a metric on
the state space; we will apply this more general understanding
to all examples for brevity. The metric on the state space does not change the behavioural or logical distances considered below; it only restricts the possible branching.)
* A fuzzy -labelled transition system (fuzzy LTS)
(e.g.ย <cit.>) consists of
a set (or metric space)ย X of states and a fuzzy transition
relation R Xรร Xโ[0,1], withย
discrete. A fuzzy LTS (X,R) is finitely
branching if {(a,y)| R(x,a,y)>0} is finite for
everyย xโ X. Equivalently, a finitely branching fuzzy LTS is a
coalgebra for the functor _ฯ(ร (-)).
* Similarly, finitely branching
probabilistic metric transition systems with transition
labels inย are coalgebras for the functor
(โ(-)). Whenย is discrete (in which
case โ (-)=ร(-)), then we speak of
probabilistic labelled transition systems.
Graded monads and graded algebras
The framework of
graded semanticsย <cit.> is based on the central notion of graded
monads, which describe the structure of observable behaviours at each
finite depth, with depth understood as look-ahead, measured in
terms of numbers of transition steps.
A graded monad ๐ on a category consists
of a family of functors M_nโ for n โ
and natural transformations ฮทโ M_0 (the
unit) and ฮผ^n,k M_nM_k โ M_n+k for all
n, k โ (the multiplications), subject to essentially
the same laws as ordinary monads up to the insertion of grades
(specifically, one has unit laws
ฮผ^0,nฮท M_n=๐_M_n=ฮผ^n,0M_nฮท and an associative
law ฮผ^n+k,mฮผ^n,kM_n=ฮผ^n,k+mM_nฮผ^k,m).
In particular, (M_0, ฮท, ฮผ^00) is an ordinary (non-graded)
monad.
We give examples of graded monads for later use in graded
semantics. We concentrate on the branching-time and linear-time ends
of the spectrum but note that graded monads cover also intermediate
semantics, involving readiness, failures
etc.ย <cit.>.
* Any functor G yields a graded
monad given by iterated application of G, that is, M_n = G^n, with unit and multiplication being
identities <cit.>.
* A Kleisli distributive law is a
natural transformation ฮป FT โ TF where F is a
functor and T a monad. This yields a graded monad with
M_n = TF^nย <cit.>. We will use
instances of this construction as running examples:
* Put F = ร- and
T =. Then ฮป(a, U) = {(a, x) | x โ U}
defines a distributive law
ฮปร(-) โ(ร-). We obtain the graded metric trace monad
M_n =(^n ร (-)).
* Similarly, for
F = ร-, T=_ฯ, we have a distributive law
ฮปร_ฯ(-) โ_ฯ(ร-) given by ฮป(a, U)(a,x)= U(x) and
ฮป(a, U)(b, x) = 0 for b โ a. We thus obtain the
graded fuzzy trace monad M_n =_ฯ(^n ร (-)).
* Taking T to be the finite
Kantorovich monad , F = โ- (we explain
in Sectionย <ref> why we use โ instead
ofย ร), and defining
ฮปโ(-) โ(โ (-)) by ฮป(a, ฯ)(a, x) = ฯ(x) and
ฮป(a, ฯ)(b, x) = 0 for b โ a, we obtain the
graded probabilistic (metric) trace monad
M_n= (^โ nโ (-)), where
^โ n denotes the n-fold Manhattan tensor
โโฆโ.
Graded monads come with a graded analogue of Eilenberg-Moore
algebras, which play a central role in the semantics of graded
logicsย <cit.>.
Let ๐ be a graded monad in . A graded
M_n-algebra ((A_k)_{k ≤ n}, (a^{mk})_{m+k ≤ n}) consists
of a family of -objects A_i and morphisms
a^mk M_mA_k โ A_m+k satisfying essentially the same
laws as a monad algebra, up to insertion of the
grades. Specifically, we have a^0mฮท_A_m = ๐_A_m for
m ≤ n, and whenever m + r + k ≤ n, then
a^m+r,kฮผ^m,r_A_k=a^m,r+kM_ma^r,k.
An M_n-homomorphism of M_n-algebras A and B is a
family of maps f_k A_k โ B_k such that whenever
m + k โค n, then f_m+ka^m,k=b^m,kM_mf_k. Graded
M_n-Algebras and their homomorphisms form a category
n๐.
(In fact, the canonical choice of Eilenberg-Moore algebra of
a graded monad are M_ฯ-algebras, which have carriers A_k for
all
k<ฯย <cit.>;
however, for our present purposes we mainly just need M_1-algebras.)
It is easy to see that ((M_kX)_kโค n, (ฮผ^m,k)_m+k โค n)
is an M_n-algebra for every -object X. Again, M_0-algebras are just
(non-graded) algebras for the monad (M_0, ฮท, ฮผ^00).
For n = 1, this definition instantiates as follows: An M_1-algebra
is a tuple (A_0, A_1, a^00, a^01, a^10), withย a^10 termed
the main structure map, such that
* (A_0, a^00) and (A_1, a^01) are M_0-algebras.
* (Homomorphy) a^10 M_1A_0 โ A_1 is an
M_0-homomorphism (M_1A_0, ฮผ^01)โ(A_1, a^01).
* (Coequalization)
a^{10} ∘ M_1a^{00} = a^{10} ∘ μ^{10}, i.e. a^{10} coequalizes the parallel pair
M_1a^{00}, μ^{10} : M_1M_0A_0 ⇉ M_1A_0
(without this necessarily being a coequalizer diagram).
The semantics of modalities will later need the
following property:
Let (−)_0 denote the functor from M_1-algebras to M_0-algebras taking an M_1-algebra
A = ((A_k)_{k ≤ 1}, (a^{mk})_{m+k ≤ 1}) to the M_0-algebra
(A_0, a^{00}), the 0-part of A. An M_1-algebra A is
canonical if it is free over (A)_0, i.e. if for every
M_1-algebra B and M_0-homomorphism
f: (A)_0 โ (B)_0, there is a unique M_1-homomorphism
g: A โ B such that (g)_0 = f.
(<cit.>)
An M_1-algebraย A is canonical iffย (<ref>) is a
coequalizer diagram in the category of M_0-algebras.
ยง GRADED QUANTITATIVE SEMANTICS
We proceed to introduce the framework of graded
quantitative semantics. Generally, a graded
semanticsย <cit.> of a functor
Gโ is given by a graded monad ๐ and a
natural transformation ฮฑ G โ M_1. Intuitively, M_n1
(whereย 1 is a terminal object ofย ) is a domain of observable
behaviours afterย n transition steps, withย ฮฑ determining
behaviours after one step. For a G-coalgebra (X, ฮณ), we
define maps ฮณ^(n) Xโ M_n1 assigning to a state inย X
its behaviour after n steps by
γ^{(0)} = M_0!_X ∘ η_X : X → M_01,    γ^{(n+1)} = μ^{1n} ∘ M_1γ^{(n)} ∘ α_X ∘ γ : X → M_{n+1}1.
From these maps, we induce a notion of behavioural
distance:
Given a graded semantics ฮฑ G โ M_1 of a functorย G
onย , the depth-n behavioural distance of states
x, y โ X in a G-coalgebra (X, ฮณ) is defined as
d^ฮฑ, n(x, y) = d_M_n1(ฮณ^(n)(x), ฮณ^(n)(y)).
(Graded) behavioural distance on (X,ฮณ) is the
pseudometric given by
d^ฮฑ(x, y) =โ_n โ d^ฮฑ, n(x, y).
* In general, the finite-depth
branching-time semantics of a G-coalgebra (X, ฮณ) is
defined via its canonical cone
(p_i X โ G^i1)_i<ฯ into the final sequence
1 G1 G^21 โโฆ
ofย G. The p_i are defined inductively by
p_0 = ! Xโ 1 and p_i+1 = Gp_i โฮณ. This
semantics is captured by the graded monad M_n = G^n and
ฮฑ=๐ย <cit.>
(Example <ref>.<ref>).
More specifically, the finite-depth branching-time
behavioural distance of states x,yโ X is
โ_i<ฯ d(p_i(x),p_i(y)), and thus agrees with the graded
behavioural distance obtained via the graded semantics in the
graded monad M_n = G^n.
* For F=ร(-), graded monads M_n=TF^n induced by
Kleisli distributive laws
(Exampleย <ref>.<ref>), in
combination with ฮฑ=๐, capture various forms of trace
semantics of G=TFย <cit.>. We consider
the concrete cases from Exampleย <ref>:
* The metric trace semantics of metric
transition
systemsย <cit.>
is captured by the graded metric trace monad
M_n =(^n ร-)
(Exampleย <ref>.<ref>); it
calculates, at each depthย n, sets of length-n traces, equipped
with the Hausdorff distance induced by the supremum metric on
traces.
* A natural notion of
fuzzy trace semantics of fuzzy transition systems is given
by assigning to each stateย x of a fuzzy LTS (X,R) a fuzzy
trace set (x)โ_ฯ (^*) where
(x)(a_1โฆ a_n)=โ{โ_i=1^n
R(x_i-1,a_i,x_i)| x=x_0,x_1,โฆ,x_nโ X}.
This notion of trace relates, for instance, to a notion of fuzzy
path that is implicit in the semantics of fuzzy computation tree
logicย <cit.> and to notions of fuzzy language accepted by
fuzzy automataย (e.g.ย <cit.>). We equip the
setย ^* of traces with the discrete metric, and then obtain a
notion of fuzzy trace distanceย d^T of statesย x,y, given
by the distance of (x), (y) in _ฯ(^*),
which simplifies to
d^T(x,y)=โ_wโ^*|(x)(w)-(y)(w)|. Fuzzy
trace distance is captured by the graded fuzzy trace monad
_ฯ(^nร (-))
(Exampleย <ref>.<ref>).
* The probabilistic (metric) trace
semanticsย <cit.> of a
probabilistic transition system calculates, at each depthย n, a
distribution over length-n traces. This semantics is captured as
the graded semantics induced by the graded probabilistic (metric)
trace monadย M_n= (^โ nโ (-)). One
obtains a notion of depth-n probabilistic trace distance
d_n^๐๐๐๐บ๐ผ๐พ, which takes Kantorovich distances of
depth-n trace distributions under the Manhattan distance on
traces. The arising notion of behavioural distance is then
probabilistic trace distance
d^๐๐๐๐บ๐ผ๐พ=โ_n<ฯd_n^๐๐๐๐บ๐ผ๐พ.
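The finite-depth behaviours underlying these three trace semantics can be computed by a straightforward recursion on depth. The following Python sketch is purely illustrative (data representations and names are our own: a metric transition system maps a state to a finite set of (label, successor) pairs, a fuzzy LTS maps a state to a dictionary of transition degrees, and a probabilistic system to a dictionary of transition probabilities); the corresponding distances are then obtained by applying the liftings recalled earlier (Hausdorff, fuzzy Hausdorff, Kantorovich) to the computed behaviours.

def traces(delta, x, n):
    # depth-n trace set of state x in a metric transition system
    if n == 0:
        return {()}
    return {(a,) + t for (a, y) in delta[x] for t in traces(delta, y, n - 1)}

def fuzzy_traces(R, x, n):
    # depth-n fuzzy trace set: R[x] is a dict {(label, successor): degree}
    if n == 0:
        return {(): 1.0}
    out = {}
    for (a, y), r in R[x].items():
        for t, s in fuzzy_traces(R, y, n - 1).items():
            out[(a,) + t] = max(out.get((a,) + t, 0.0), min(r, s))
    return out

def trace_distribution(P, x, n):
    # depth-n trace distribution: P[x] is a dict {(label, successor): probability}
    if n == 0:
        return {(): 1.0}
    out = {}
    for (a, y), p in P[x].items():
        for t, q in trace_distribution(P, y, n - 1).items():
            out[(a,) + t] = out.get((a,) + t, 0.0) + p * q
    return out

# e.g. depth-n metric trace distance of states x1, x2 can then be computed as
# hausdorff(traces(delta, x1, n), traces(delta, x2, n), d_traces), with d_traces
# the supremum metric on length-n traces (see the hausdorff sketch above).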
ยง GRADED QUANTITATIVE THEORIES
Monads on are induced by equational theories <cit.>. By equipping each operation with an assigned depth and requiring each axiom to be of uniform depth, one obtains a notion of graded equational theory which, with some care applied to size issues, can be brought into bijective correspondence with graded monads <cit.>. On the other hand, Mardare et al.ย <cit.> introduce a system of quantitative equational reasoning, with formulas of the form s =_ฯต t understood intuitively as โs differs from t by at most ฯตโ. These quantitative equational theories induce monads on the category of metric spaces. We use a graded version of this system of quantitative equational reasoning to present graded monads in .
A graded similarity type consists of an algebraic similarity
type ฮฃ and a function ฮด: ฮฃโ
assigning a depth to each algebraic operation
ฯโฮฃ. The concept of uniform depth is then
defined inductively: variables have uniform depthย 0, and a term of
the form ฯ(t_1, โฆ, t_n) has uniform depth n+k if
ฮด(ฯ) = n and allย t_i have uniform depthย k. Thus,
constants c โฮฃ have uniform depthย n for all
n โฅฮด(c). We write ๐^ฮฃ_nX for the set of
terms of uniform depth n over X, or ๐_nX ifย ฮฃ
is clear from the
context.
A substitution of uniform depth n is a function
ฯ X โ๐_nY. Such a substitution
extends to a map
ฯ: ๐_kX โ๐_k+nY on terms for
all k โ, where as usual one defines
ฯ(f(t_1, โฆ, t_m)) = f(ฯ(t_1), โฆ,
ฯ(t_m)). We will usually write application of substitutions in
postfix notation, i.e. tฯ applies ฯ to a term t. A
substitution is uniform-depth if it is of uniform depthย n
for someย n.
Let X be a set of variables. By (X) we denote the set of
quantitative equalities x =_ฯต y where x, y โ X and
ฯตโ. Moreover, we write (๐_n(X)) for
the set of quantitative equalities s=_ฯต t where
ฯตโ[0,1] and s,tโ๐_n(X). and
(๐(X)) = โ_nโ(๐_n(X));
that is, (๐(X)) is the set of uniform-depth
quantitative equalities among ฮฃ-terms overย X. A
quantitative theory ๐ฏ = (ฮฃ, ฮด, E)
consists of a graded similarity type (ฮฃ, ฮด) and a set
E โ๐ซ((X)) ร(๐X).
Elements of ๐ซ((X)) ร(๐X) are
termed basic quantitative inferences, and written in the form
ฮโข s=_ฯต t; the depth of
ฮโข s=_ฯต t is that of s=_ฯต t. We say
thatย ๐ฏ is depth-1 if all its operations and basic
quantitative inferences have depth at mostย 1.
Derivability of quantitative equalities
inย (๐(X)) over a graded quantitative
theoryย ๐ฏ and a context
ฮ_0 โ๐ซ((X)) is defined inductively by the
following rules:
(triang)  from t =_ε s and s =_{ε'} u, derive t =_{ε+ε'} u
(refl)    derive s =_0 s
(sym)     from t =_ε s, derive s =_ε t
(wk)      from t =_ε s, derive t =_{ε'} s, provided ε' ≥ ε
(arch)    from {t =_{ε'} s | ε' > ε}, derive t =_ε s
(ax)      from Γσ, derive tσ =_ε sσ, provided (Γ, t =_ε s) ∈ E
(assn)    derive φ, provided φ ∈ Γ_0
(nexp)    from t_1 =_ε s_1, …, t_n =_ε s_n, derive f(t_1,…,t_n) =_ε f(s_1,…,s_n)
where ฯ is a uniform-depth substitution. (Note the difference
between rules (๐๐ฑ) and (๐๐ฌ๐ฌ๐ง): Quantitative
equalities from the theory can be substituted into, while this is not
sound for quantitative equalities from the context.) A graded
quantitative equational theory induces a graded monadย
on where M_nX is the set of terms of uniform depth n over
the variables in X quotiented by the equivalence relation that
identifies terms s,t if s=_0t is derivable in contextย X, with
the distance d_M_n([s], [t]) = ฯต of equivalence classes
[s], [t] โ M_nX being the infimum over all ฯต such that
s =_ฯต t is derivable. As usual, multiplication collapses
terms-over-terms, and the unit maps an element of x โ X to
[x] โ M_0X.
The above system for quantitative reasoning follows Ford et
al.ย <cit.> in slight modifications to the
original (ungraded) systemย <cit.>. In
particular, we make do without a cut rule, and allow substitution
only into axioms (substitution into derived equalities is then
admissibleย <cit.>).
We recall that a graded monad is
depth-1ย <cit.> if ฮผ^nk and
M_0ฮผ^1k are epi-transformations and the diagram below is a
coequalizer of M_0-algebras for allย X and n < ฯ:
M_1M_0M_nX [r,swap, "ฮผ^10M_n", shift right] [r, "M_1ฮผ^0n", shift left] M_1M_nX [r, "ฮผ^1n"] M_1+nX.
We show that for ๐=, this is implied by
Definitionย <ref>.
Graded monads induced by depth-1 theories are depth-1.
The following is then immediateย <cit.>:
If ๐ is a depth-1 graded monad, then for every nโ and every objectย X, the M_1-algebra with carriers M_nX, M_n+1X and multiplications as algebra structure is canonical.
We briefly refer to canonical algebras as per the above proposition as
being of the form M_nX.
Presentations of graded trace monads
We proceed to investigate the quantitative-algebraic presentation of
graded trace monads, given by a Kleisli-distributive law of a metric
space of action labels over some monad.
Let ฯ = (a_1, โฆ, a_n) = (a_1, ฯ') and ฯ = (b_1, โฆ, b_n) = (b_1, ฯ') be traces of labels in a metric space (, d_). There are several ways to define the distance d(ฯ, ฯ), depending on the application. Some obvious choices would be the following:
Supremum distance:   d(σ, τ) = d(a_1, b_1) ∨ δ·d(σ', τ');    k(x, y) = x ∨ δ·y
Manhattan distance:  d(σ, τ) = d(a_1, b_1) ⊕ δ·d(σ', τ');    k(x, y) = x + δ·y
Euclidean distance:  d(σ, τ) = √(d(a_1, b_1)² ⊕ (δ·d(σ', τ'))²);    k(x, y) = √(x² + (δ·y)²)
Each of these distances is parametric in a discount factor ฮดโ (0, 1]. Unless otherwise mentioned, we will assume that ฮด = 1 for the rest of this discussion. All of these metrics are similar in the sense that they are depth-1, i.e. there is a (not necessarily nonexpansive) function k [0,1]^2โ [0,1] such that d(ฯ, ฯ) = k(d(a, b), d(ฯ', ฯ')). To ensure that k yields a metric, it is sufficient to require the following properties:
* x + y โฅ z and x' + y' โฅ z' implies k(x, x') + k(y, y') โฅ k(z,z').
* k(x, y) = 0 if and only if x = y = 0.
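Phrased operationally, such a depth-1 trace metric is computed by a one-symbol-at-a-time recursion. A minimal Python sketch (the names and the concrete k-functions, matching the table above, are our own choices; delta is the discount factor, and the latter two k's are truncated at 1 so that values stay in the unit interval):

def trace_dist(sigma, tau, k, d_label):
    # distance of equal-length traces, computed one symbol at a time via k
    if not sigma:
        return 0.0
    return k(d_label(sigma[0], tau[0]), trace_dist(sigma[1:], tau[1:], k, d_label))

def k_sup(x, y, delta=1.0):        # supremum distance
    return max(x, delta * y)

def k_manhattan(x, y, delta=1.0):  # Manhattan distance (truncated at 1)
    return min(x + delta * y, 1.0)

def k_euclid(x, y, delta=1.0):     # Euclidean distance (truncated at 1)
    return min((x ** 2 + (delta * y) ** 2) ** 0.5, 1.0)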
We will writeย โ for the tensor that equips the
Cartesian product of a set with the metric generated byย k . So
d_Aโ B((a, b), (a', b')) = k(d(a,a'), d(b, b')). The fact
that k computes distances of traces recursively โone symbol at a
timeโ translates into uniform depth-1 equations, guaranteeing that the
resulting theory as a whole is depth-1:
Let ๐ฏ = (ฮฃ, โฐ) be a quantitative
algebraic presentation of a (plain) monadย T onย . We define a
graded quantitative theory ๐ฏ[] by including the
operations and equations ofย ๐ฏ at depthย 0, along with
unary depth-1 operationsย a for all labels a โ, and as depth-1
axioms the distributive laws
โข a(f(x_1, โฆ, x_n)) =_0 f(a(x_1), โฆ, a(x_n)) for all
aโ and fโฮฃ, as well as the distance axioms
x =_ฯต y โข a(x) =_k(d(a, b), ฯต) b(y).
One would ideally define a Kleisli distributive law
ฮปโ T-โ T(โ-) by
ฮป_X(a, t) = Tโจ a, ๐_Xโฉ_โ(t), where
โจ a, ๐_Xโฉ_โ takes x โ X to
(a, x) โโ X, and then obtain that the graded monads
induced by the theory ๐ฏ[] and the Kleisli law are
isomorphic. However, non-expansiveness of ฮป is not guaranteed
in general:
Take the finite distribution monad , and construct
ฮป as above, using supremum distance. Put
s = 0.5ยท x + 0.5ยท y, t = 1ยท xโ{x,y} where
d(x, y) = 1. Clearly d(s, t) = 0.5. Given a, b โ with
d(a, b) = 0.5, we have d((a, s), (b, t)) = 0.5 while
d(ฮป(a, s), ฮป(b, t)) = d(0.5ยท(a, x) + 0.5ยท(a, y), 1ยท(b, x))
= 0.75
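Both Kantorovich distances in this example are easy to check by hand, since t and λ(b, t) are point masses and the coupling with a point mass is unique: the distance to a point mass is simply the expected distance to its support point. A quick numeric check (a sketch, assuming d(a, b) = 0.5 and d(x, y) = 1 as above; names ours):

d_ab, d_xy = 0.5, 1.0

# d(s, t): s = 0.5*x + 0.5*y, t = 1*x; distance to the point mass at x
d_s_t = 0.5 * 0.0 + 0.5 * d_xy              # = 0.5
d_pair = max(d_ab, d_s_t)                   # = 0.5, supremum metric on the pair ((a,s),(b,t))

# distances on the product under the supremum metric
d_ax_bx = max(d_ab, 0.0)                    # (a,x) vs (b,x): 0.5
d_ay_bx = max(d_ab, d_xy)                   # (a,y) vs (b,x): 1.0

# d(lambda(a,s), lambda(b,t)): lambda(b,t) = 1*(b,x) is a point mass
d_ls_lt = 0.5 * d_ax_bx + 0.5 * d_ay_bx     # = 0.75

assert d_pair == 0.5 and d_ls_lt == 0.75    # lambda strictly increases the distance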
In the special case of Manhattan distance, on the other
hand, things do work as expected:
The maps
ฮป_Xโ TX โ T(โ X) as above
are nonexpansive.
In caseย ฮป as above is nonexpansive, we do in
fact have that the distributive lawย ฮป and the algebraic theory
defined above induce the same graded monad:
Let ฮปโ T-โ T(โ-)
be defined as above. If all components
ฮป_X are nonexpansive, then the quantitative equational
theory ๐ฏ[] is a presentation of the graded
monad defined by the distributive law ฮป.
It is easy to check, respectively guaranteed by
Lemmaย <ref>, that the distributive laws claimed in
Examplesย <ref>.<ref>โ<ref>
are indeed non-expansive, so the respective graded monads are, by
Lemmaย <ref>, presented by the corresponding theory
as per Definitionย <ref>, in particular are
depth-1. We give explicit descriptions for our running examples:
* Metric trace semantics:
Recallย <cit.> that
is a monad, presented in quantitative algebra by the usual axioms
of join semilattices, i.e. by a binary operationย + and a
constantย 0, with basic `quantitative' inferences
โข x + 0 =_0 x, โข x + x =_0 x, x + y =_0 y + x,
โข (x + y) + z =_0 x + (y + z) (non-expansiveness ofย + is
enforced by the deduction rules). The quantitative graded theory
presenting the graded metric trace monad (^nร(-))
(Exampleย <ref>.<ref>) according
to Lemmaย <ref> givesย + andย 0 depthย 0 and
adds unary depth-1 operationsย a for all a โ, subject
to additional axioms
⊢ a(0) =_0 0,    ⊢ a(x+y) =_0 a(x) + a(y),    x =_ε y ⊢ a(x) =_{ε ∨ d(a, b)} b(y)
for a,b โ and ฯตโ [0, 1].
* Fuzzy trace semantics: We show in the appendix
(Sectionย <ref>) that _ฯ is a monad
presented by the following quantitative equational theory. Take a
binary operation +, a constantย 0, and unary operations r for
every rโ[0,1]. Impose strict equations (=_0) saying
thatย +,ย 0 form a join semilattice structure and that the
operationsย r define an action of the monoid ([0,1],โง)
(e.g. r(s(x))=_0(rโง s)(x)). Finally, impose basic
quantatitive inferences
x=_ฯต yโข r(x)=_ฯต s(y)
for r,sโ[0,1] such that |r-s|โคฯต. By
Lemmaย <ref>, the graded fuzzy trace monad
_ฯ(^nร X) (withย discrete; cf. Exampleย <ref>.<ref>) is
presented by the above algebraic description ofย _ฯ at
depthย 0, with additional depth-1 unary operationsย a for
aโ and strict depth-1 equations a(x+y)=a(x)+a(y),
a(0)=0, and a(r(x))=r(a(x)).
* Probabilistic (metric) trace semantics:
The monad onย may be presented by the quantitative
theory of interpolative barycentric
algebrasย <cit.>, a quantitative version
of the theory of convex algebras that has binary operationsย +_p
for all pโ[0,1] (and axioms that we do not repeat here).
By Lemmaย <ref>, the graded probabilistic (metric)
trace monad (^n โ-)
(Exampleย <ref>.<ref>) is
presented by this theory at depthย 0 and additional depth-1
operationsย a for all aโ, subject to the axioms
⊢ a(x +_p y) =_0 a(x) +_p a(y),    x =_ε y ⊢ a(x) =_{ε ⊕ d(a, b)} b(y)
for a,bโ. (Replacing the second axiom with
x =_ε y ⊢ a(x) =_{ε ∨ d(a, b)} b(y) will
induce a graded monad ๐ such that M_nX is carried by
(^nร X) but carries, by Example
<ref>, a metric strictly smaller than that
of the space (^nร X).)
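As a small worked illustration of how the depth-1 axioms of the metric trace theory compute Hausdorff distances (the concrete terms below are our own choice): consider depth-1 terms s = a(x) + b(y) and t = c(x) over variables x, y, in a context containing x =_{d(x,y)} y. From x =_0 x, the distance axiom gives a(x) =_{d(a,c)} c(x); from y =_{d(x,y)} x, obtained by (sym), it gives b(y) =_{d(b,c) ∨ d(x,y)} c(x). Weakening both to ε = d(a,c) ∨ d(b,c) ∨ d(x,y) and applying (nexp) for +, we obtain a(x) + b(y) =_ε c(x) + c(x), and hence s =_ε t by idempotence of + and (triang). This ε is exactly the Hausdorff distance of the depth-1 trace sets {(a,x),(b,y)} and {(c,x)} under the supremum metric, i.e. the distance of [s] and [t] in M_1{x,y}; by the presentation result above, no smaller distance is derivable.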
ยง QUANTITATIVE GRADED LOGICS
We proceed to introduce our notion of quantitative graded logic. While
previous presentations of graded
logicsย <cit.>
worked with a semantics defined on the codomainsย M_n1 of the graded
semantics, guaranteeing invariance under the graded semantics by
construction, we instead opt for a semantics defined directly on the
original coalgebra. We show that this semantics agrees with the
original one. Our new approach identifies (quantitative) graded logics
as fragments of the corresponding branching-time logics, whose
definitionย <cit.>
we recall first.
Syntactically, a modal logic is a triple
โ = (ฮ, ๐ช, ฮ) whereย ฮ is a set
of truth constants,ย ๐ช is a set of propositional operators, each
with associated finite arity, andย ฮ is a set of modal
operators, also each with an associated finite arity. For readability,
we restrict to unary modalities; extending our (positive) results to
modalities of higher arity is simply a matter of adding indices. The
set of formulas of โ is then given by the grammar
ฯ ::= c | p(ฯ_1, โฆ, ฯ_n) |ฮปฯ
where p โ๐ช is n-ary, ฮปโฮ and c โฮ
In the semantic framework of (quantitative) coalgebraic modal logicย <cit.>, formulas are interpreted in G-coalgebras for a given functor G๐โ๐ (we will be interested in ๐=), taking values in a truth value object ฮฉ of ๐. We assume that ๐ has finite products and a terminal object. The evaluation of a formula ฯ on a G-coalgebra (X, ฮณ) is a morphism ฯ_ฮณ X โฮฉ. The semantics is parametric in the following components:
* For every c โฮ, a ๐-morphism ฤ 1 โฮฉ.
* For p โ๐ช with arity n, a ๐-morphism pฮฉ^n โฮฉ
* For ฮปโฮ, a ๐-morphism ฮป Gฮฉโฮฉ
Formula evaluation is then defined inductively by
c_γ = ĉ ∘ ! : X → Ω, where ! : X → 1
p(φ_1, …, φ_n)_γ = p ∘ ⟨φ_1_γ, …, φ_n_γ⟩
λφ_γ = λ ∘ Gφ_γ ∘ γ
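Operationally, this is just a structural recursion over formulas. The following Python sketch is purely illustrative (the formula representation and all names are ours); for concreteness it fixes the functor of metric transition systems, i.e. a coalgebra maps a state to a finite set of (label, successor) pairs, so that the modality case first applies the evaluation of the subformula to all successors and then the interpretation of the modality:

def evaluate(phi, gamma, consts, props, mods, x):
    # phi is ('const', c), ('prop', p, [subformulas]) or ('mod', lam, subformula);
    # consts[c] is a value in [0,1], props[p] maps a tuple of values to a value,
    # mods[lam] maps a finite set of (label, value) pairs to a value.
    kind = phi[0]
    if kind == 'const':
        return consts[phi[1]]
    if kind == 'prop':
        vals = tuple(evaluate(psi, gamma, consts, props, mods, x) for psi in phi[2])
        return props[phi[1]](vals)
    # modality: apply G[[psi]] to gamma(x), then the interpretation of the modality
    psi = phi[2]
    lifted = frozenset((a, evaluate(psi, gamma, consts, props, mods, y))
                       for (a, y) in gamma[x])
    return mods[phi[1]](lifted)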
Formula evaluation is always invariant under branching-time
semantics, e.g. non-expansive under branching-time behavioural
distanceย <cit.>.
We next identify criteria for invariance under a graded semantics
(ฮฑ,๐) for a
functorย Gโ that we fix from now on; also, we
fix [0,1] as the underlying metric space ofย ฮฉ, and occasionally writeย ฮฉ for [0,1].
Let o M_0ฮฉโฮฉ an M_0-algebra structure on
ฮฉ. A logic โ is a graded logic (for
(ฮฑ, ๐)) if the following hold:
* For every n-ary pโ๐ช, p is an
M_0-algebra homomorphism (ฮฉ, o)^nโ(ฮฉ, o).
* For every ฮปโฮ,
ฮป factors as ฮป = f โฮฑ_ฮฉ
such that the tuple (ฮฉ, ฮฉ, o, o, f) constitutes an
M_1-algebra (that is,ย f satisfies homomorphy and coequalization,
cf. Sectionย <ref>).
In many examples, ฮฑ = id, in which case
conditionย <ref> just states that
(ฮฉ, ฮฉ, o, o, ฮป) is an M_1-algebra.
Invariance additionally relies on the requirement that formulas have
uniform depthย <cit.> in essentially
the same sense as for terms (Definitionย <ref>), with nullary
propositional operators having every depth โฅ 0, nullary modalities
having every depth โฅ 1, and truth constants having only
depthย 0. The set of โ-formulas of uniform depthย n is
denoted by โ_n.
Logical distance under the logic โ on a
G-coalgebra (X, ฮณ) is the pseudometricย d^โ
given by
d^โ(x, y) = โ{ d_ฮฉ(ฯ_ฮณ(x), ฯ_ฮณ(y)) | nโ, ฯโโ_n}.
Let โ be a graded logic for (ฮฑ, ๐). Then
the evaluation mapsย ฯ_ฮณ of โ-formulae
ฯ on G-coalgebras (X,ฮณ) are non-expansive w.r.t. behavioural distance d^ฮฑ, and hence
d^โโค d^ฮฑ.
The proof is based on showing, by induction onย ฯ, the stronger
property that the evaluation functions ฯ_ฮณ of
depth-n formulaeย ฯ factor through M_0-homomorphisms
ฯ_๐ M_n1โฮฉ,
as used in earlier formulations of the
semanticsย <cit.>,
with canonicity ofย M_n1 (Lemmaย <ref>) being the key
property in the step for modalities.
* Branching-time graded monads G^n have M_0=๐, so that the
corresponding graded logics are just branching-time logics without
further restrictionย <cit.>. Coalgebraic quantitative logics of
this kind have received some recent attention,
e.g.,ย <cit.>.
* We have a graded logic for
metric trace semantics
(Exampleย <ref>.<ref>)
featuring modalitiesย a for all aโ, a single
truth constantย 1, and no propositional operators. For the
semantics, we take the truth objectย ฮฉ to be the
-algebra (quantitative join semilattice) ([0,1],โจ),
with 1ฬ 1 โ [0,1] being constantlyย 1 and
a(รฮฉ)โฮฉ given by
a(U) = ⋁_{(b, v) ∈ U} (1 − d(a,b)) ∧ v. The logic remains invariant under metric trace semantics when
extended with propositional operators that are non-expansive
join-semilattice morphisms, such as joinย โจ.
* We use the same syntax in a
graded logic for fuzzy trace distance
(Exampleย <ref>.<ref>). Again,
we useย [0,1] as the truth value object, interpreting the
additional algebraic structure
(Exampleย <ref>.<ref>) by
meets inย [0,1]. We define the interpretation
a M_1[0,1]โ[0,1] by
a(A)=โ_vโ [0,1]A(a,v)โง v, thus
capturing the usual fuzzy diamond modality
(e.g.ย <cit.>). Again, the logic remains invariant when
extended with additional non-expansive propositional operators
that are homomorphic w.r.t. the operations in the presentation of
_ฯ
(Exampleย <ref>.<ref>).
* Again using the same syntax, we
obtain a graded logic for probabilistic trace semantics (with
discrete label setย ;
Exampleย <ref>.<ref>) defining the
interpretation
a(รฮฉ)โฮฉ by
a(ρ) = ∑_{v ∈ [0,1]} ρ(a,v)·v, so that a
formula aฯ evaluates, roughly speaking, to the
probability thatย a is executed, reaching a state
satisfyingย ฯ (more precisely, the contribution of each state
is weighted by the truth value ofย ฯ). Again, the logic
remains invariant when extended with non-expansive propositional
operators that are M_0-homomorphisms; since M_0-algebras are,
in this case, barycentric interpolative algebras, this means we
can allow affine maps [0,1]^nโ[0,1], such as fuzzy negation
xโฆ 1-xย <cit.>.
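Concretely, and in the same illustrative Python notation as before (names ours), the three modality interpretations read as follows, where U is a finite set of (label, value) pairs, A a fuzzy set of such pairs given as a dictionary of membership degrees, and rho a distribution on such pairs:

def mod_metric(a, U, d_label):
    # metric trace semantics: join over (b, v) in U of (1 - d(a, b)) /\ v
    return max((min(1.0 - d_label(a, b), v) for (b, v) in U), default=0.0)

def mod_fuzzy(a, A):
    # fuzzy trace semantics (discrete labels): join over v of A(a, v) /\ v
    return max((min(deg, v) for (b, v), deg in A.items() if b == a), default=0.0)

def mod_prob(a, rho):
    # probabilistic trace semantics: expected value of v on the a-labelled part
    return sum(p * v for (b, v), p in rho.items() if b == a)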
In (quantitative branching-time) coalgebraic modal
logicย <cit.>,
a modalityย Lโฮ is interpreted as a predicate
lifting. For purposes of the present discussion, we use a version
of the framework that lives natively on metric
spacesย <cit.>. In this setting, a
predicate lifting for an endofunctorย G on metric spaces is a
natural transformation of typeย (-,ฮฉ)โ(G-,ฮฉ).
By the Yoneda Lemma, a predicate lifting in this sense is
equivalently given as a nonexpansive map
Gฮฉโฮฉย <cit.> (maps of this
type have been termed evaluation
functionsย <cit.>), which for
simplicity we also denote byย L. Formulasย ฯ are formed using
the modalities and suitable sets of propositional connectives; their
semantics on a coalgebra ฮณ Xโ GX is a nonexpansive
functionย ฯ Xโฮฉ, the extension
ofย ฯ. In particular, the semantics of the modalityย L is given
in terms of the evaluation function L Gฮฉโฮฉ by
Lฯ=Lโ Gฯโฮณ.
One has a canonical notion of (branching-time) behavioural distance
on G-coalgebras, with the distance of two states in a
coalgebraย C=(X,ฮณ Xโ GX) defined as the infimum over
the distance of their images under all coalgebra morphisms
onย Cย <cit.>. One shows easily
that the semantics ฯ Xโฮฉ of a formulaย ฯ
onย C is always nonexpansive w.r.t. this behavioural distance;
under suitable conditions, it is even expressive in the sense
that the cone of all formula extensions onย C is initial w.r.t. behavioural distance.
Comparing a given graded logic with setย ฮ of modalities to
this framework, we note that the interpretation
L M_1ฮฉโฮฉ of a modalityย L prolongs, via
the graded semantics ฮฑ Gโ M_1, to an evaluation
function L Gฮฉโฮฉ. This conversion identifies
graded logics as fragments of the corresponding branching-time
quantitative coalgebraic modal logics, in the sense that the
reinterpretation of the modalities preserves the semantics of
formulas. This implies that graded logics have a
compositional semantics defined directly on the coalgebra,
contrasting with logics such as linear-time temporal logic, whose
semantics requires a preceding global computation of the set of
paths, on which the actual semantics of the logic is then defined.
ยง EXPRESSIVITY CRITERIA
We proceed to adapt expressivity criteria appearing in
previous work on two-valued
settingsย <cit.> to
the quantitative setting, which poses quite specific
challenges. Explicitly, we employ the following notion of
expressiveness.
We say that โ is expressive (for
(๐,ฮฑ)) if d^ฮฑ = d^โ on all
G-coalgebras.
Equivalently, โ is expressive if for every
G-coalgebraย (X,ฮณ), the cone of all evaluation
mapsย ฯ_ฮณ is initial (cf. Sectionย <ref>)
onย (X,d^ฮฑ).
In the branching-time case, a stronger notion of expressiveness,
roughly phrased as density of the set of depth-n formulae
in the set of non-expansive properties at depthย n, follows from
expressiveness under certain additional
conditionsย <cit.>, using
lattice-theoretic variants of the Stone-Weierstraร theorem. We show
in the appendix that the analogue of the Stone-Weierstraร theorem in
general fails for coarser semantics. Also, for semantics coarser
than branching time, expressiveness in the sense of
Definitionย <ref> can often be established using more
economic sets of propositional operators (e.g. no propositional
operators at all), for which density will clearly fail.
An initiality invariant is a propertyย ฮฆ of sets
A โ(X, ฮฉ) of nonexpansive functions (such sets
are a special form of cones) such thatย (i) every cone
satisfyingย ฮฆ is initial, andย (ii)ย ฮฆ is upwards closed
w.r.t. subset inclusion. A graded logicย โ consisting
of ฮ, ๐ช, ฮ is ฮฆ-type depth-0
separating if
the family of maps
{ c | c โฮ}
has propertyย ฮฆ. Moreover,ย โ
is ฮฆ-type depth-1 separating if, whenever A is a
canonical M_1-algebra of the form M_n1
(Propositionย <ref>) andย ๐ is a set of
M_0-homomorphisms A_0 โฮฉ that has
propertyย ฮฆ and is closed under the propositional operators
inย ๐ช, then
the set
Λ(๐) := {λg : A_1 → Ω | λ ∈ Λ, g ∈ ๐}
has property ฮฆ, where ฮปg
is the (by canonicity, unique) morphism satisfying
λg ∘ a^{10} = f ∘ M_1g : M_1A_0 → Ω,
where λ = f ∘ α_Ω as in Definition <ref>.
* Initiality itself is an initiality invariant. Ifย ฮฆ is
initiality, then we say `initial-type' for `ฮฆ-type'.
* We say that ๐โ(X,ฮฉ) is
normed isometric if whenever d(x,y)>ฯต for
x,y โ X and ฯต>0, then there is some
f โ๐ such that |f(x)-f(y)|> ฯต and
f(x)โจ f(y)=1. Normed isometry is an initiality invariant.
Letย ฮฆ be an initiality invariant, and suppose that a graded
logic โ is both ฮฆ-type depth-0 separating and
ฮฆ-type depth-1 separating. Then โ is expressive.
Our definition of separation differs from that of previous
instantiations of the framework of graded semantics in several
ways. Most obvious is the parametrization by a property ฮฆ.
Choosing a ฮฆ other than initiality amounts to strengthening the
induction hypothesis in the inductive proof of
Theoremย <ref>. We will see that this is needed even in very
simple examples.
Moreover, we have phrased separation in terms of the specific
canonical algebras M_n1 on which it is needed, rather than on
unrestricted canonical algebras like in previous
workย <cit.>. This
phrasing is maybe less elegant but on the other hand occasionally
allows exploiting additional properties of M_n1. In particular, we
will use that for graded monads M_n=TF^nX arising from Kleisli
distributive laws
(Exampleย <ref>.<ref>), M_n1 is
free as an M_0-algebra.
* Branching time semantics:
Supposeย ฮ is a finite separating set of
modalities, i.e. the maps ฮปโ Gf GXโฮฉ,
withย ฮป ranging over modalities andย f over non-expansive
maps Xโฮฉ, form an initial cone. Moreover,
letย ๐ช contain meetย , joinย , fuzzy
negation (i.e. x=1-x), and truncated addition of
constants (-)โ c. Then one shows using a variant of the
Stone-Weierstraร theoremย <cit.> that the
graded logicย โ given by ฮ, ๐ช, and
ฮ={โค}, where โคฬ is the constant function 1, is initial-type depth-0 separating and
initial-type depth-1 separating. By Theoremย <ref>, we
obtain thatย โ is expressive. Previous work on
quantitative branching-time
logicsย <cit.> discusses, amongst other things,
conditions onย G that allow concluding expressiveness even for
infinite-depth behavioural distance.
* Metric Streams: The reason we hardwire an initiality
invariant into Theoremย <ref> (nothing comparable being
needed in the two-valued
caseย <cit.>)
is that initial-type separation may fail in the quantitative
setting in even very basic examples, such as the following: Metric
streams, i.e. streams over a metric space of labels
(, d_), are coalgebras for the functor
G = ร-. The behavioural distance on streams is
captured by the `branching'-time graded semantics that uses the
graded monad G^n = ^n ร{-}
(Exampleย <ref>.<ref>). As
recalled in the previous item, expressivity of a modal logic that
includes the mentioned propositional base and
modalitiesย a for all aโ, with interpretation
aร [0,1] โ [0,1] given by
(b, v) โฆ (1- d_(a, b)) โง v follows from
established results. Using the results of the present work, we can
go further and show expressivity of a modal logic that lacks the
usual propositional operators (i.e. has
๐ช = โ
), and instead includes only the truth
constantย 1. This logic is ฮฆ-type depth-0 separating and
ฮฆ-type depth-1 separating forย ฮฆ being normed isometry,
and therefore expressive by Theorem <ref>. On the other
hand, the logic fails to be initial-type depth-1 separating,
illustrating the necessity of the general form of
Theoremย <ref>
It is even possible to restrict the
set of modal operators to ฮ = {a| aโ A'}
where A' is a dense subset of A. In this case, the normed isometry
property needs to be adapted to approximative normed isometry, where
the valueย 1 is merely required to be approximated arbitrarily
closely by the relevant functions and not necessarily realized
on-the-nose.
* Metric transition systems: The graded logic for metric
trace semantics
(Exampleย <ref>.<ref>), in
the version with no propositional operators, is ฮฆ-type
depth-0 separating and ฮฆ-type depth-1 separating for ฮฆ
being normed-isometry, and hence is expressive by
Theoremย <ref>. We thus improve on an example from recent
work based on Galois connectionsย <cit.>,
where application of the general framework required the
inclusion of propositional shift operators (which were subsequently eliminated in an ad-hoc manner).
* Fuzzy trace semantics: The
graded logic for fuzzy trace semantics
(Exampleย <ref>.<ref>,
again with ๐ช=โ
) is iniital-type depth-0
separating and initial-type depth-1 separating, and hence
expressive for fuzzy trace distance by Theoremย <ref>, a
result that appears to be new.
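To make the normed-isometry argument in the metric-streams example above concrete: the depth-k formula a_1⟨a_2⟨…a_k⟨1⟩…⟩⟩ evaluates on a stream with label prefix b_1 … b_k to min_i (1 − d(a_i, b_i)); choosing the a_i to be the prefix of one of the two streams thus yields value 1 on that stream, while on the other stream the value drops by exactly the (undiscounted, supremum-metric) prefix distance. A minimal Python sketch (names ours, with a label metric d_label assumed given):

def eval_prefix_formula(mods, prefix, d_label):
    # value of the formula mods[0]<mods[1]<...<1>...>> on a stream starting with prefix
    return min((1.0 - d_label(a, b) for a, b in zip(mods, prefix)), default=1.0)

def witnessed_distance(prefix1, prefix2, d_label):
    # use prefix1 itself as the sequence of modalities: value 1 on the first stream,
    # 1 - max_i d(prefix1[i], prefix2[i]) on the second, witnessing the depth-k distance
    v1 = eval_prefix_formula(prefix1, prefix1, d_label)   # = 1.0
    v2 = eval_prefix_formula(prefix1, prefix2, d_label)
    return v1 - v2   # = max_i d(prefix1[i], prefix2[i])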
ยง.ยง Stone-Weierstraร Theorems on Eilenberg-Moore Algebras
The key observation of graded semantics is, that restricting evaluations to
algebra homomorphisms is a sufficient condition to
guarantee invariance under the semantics while retaining compositionality. To recall a standard
example <cit.>, on labeled transition systems in
, the formula ฯ = aโคโงbโค is invariant
under trace semantics, in the sense that two states with the same trace set
will yield the same evaluation for this formula. This holds despite the fact
that ฯ is not a -algebra homomorphism. The invariance however breaks
when we compose ฯ with a modal operator. The formula
a(aโคโงbโค) for instance is not invariant
under trace semantics. This issue is resolved by noting that
โง (2, max)^2 โ (2, max) is not a -homomorphism
and thus the above formula is not part of any graded logic for this semantics.
Our above definition of expressivity only requires enough formulas to
approximate distances of pairs of observable behaviours. Alternatively
one might consider a logic expressive only if the semantics of
formulas are dense in the set of all invariant properties that are
compositional in this sense. We briefly consider the question of when
there are sufficiently many M_0-homomorphic propositional operators
to ensure this property (which does often hold in the branching-time
caseย <cit.>).
ℒ is strongly expressive for a graded semantics on M if, for all n ∈ ℕ, the evaluation maps of the formulas in ℒ_n form a dense subspace of the space of M_0-homomorphisms (M_n1, μ^{0n}) → (Ω, o).
In previous work on quantitative modal logics <cit.>, variants of the Stone-Weierstraร Theorem played a crucial role in proving expressivity by taking an initial cone to a set of dense properties via closure under propositional operators. In this section we discuss adaptations of the theorem to our setup and highlight its availability or non-availablity in some key examples.
Let (M, ฮท, ฮผ) be a monad on , ๐ช a set of propositional operators and C a class of M-algebras. We say that M has the ๐ช-SW property with respect to C if for all M-algebras (A, a) โ C and all
๐ ⊆ ^M((A, a), (Ω, o)) such that ๐ is initial, the closure of ๐ under 𝒪 is dense in ^M((A, a), (Ω, o)).
We say that M has the SW property if M has the ๐ช-SW property for someย ๐ช.
Even though concepts like density or initial cone do not have native meaning in the category , we will apply the above definition to functors on by considering them as functors on that restrict to the full subcategory of discrete spaces, which is isomorphic to . For a given functor F, this is always possible by constructing the functor DFU, where Uโ is the forgetful functor and Dโ denotes the functor that equips a set with the discrete metric.
It is immediate that strong expressivity follows from combining expressivity and an SW property if all (M_n1, ฮผ^0n) are inย C.
If โ is an expressive logic with propositional operators ๐ช for a semantics on the graded monad ๐ and M_0 has the ๐ช-SW property with respect to {(M_n, ฮผ^0n) | n โ}, then โ is strongly expressive.
By definitions.
We note some of the key examples for monads central to the theory of graded semantics.
* On , Id has the ๐ช-SW property with respect to finite spaces and (ฮฉ, o) = ({0, 1}, id) for propositional operations ๐ช = {โจ, โง, }. This is precisely the well-known functional completeness of Boolean logic.
* On , Id has the ๐ช-SW property with respect to algebras with totally bounded carrier for ๐ช = {โจ, โง, -โ c, -โ c}, where โ, โ are truncated addition and subtraction on the unit interval and (ฮฉ, o) = ([0,1], id)
* On , the monad does have the {โจ}-SW property with respect to algebras with finite carrier and (ฮฉ, o) = ({0,1}, max).
* On , does not have the SW property with respect to finite spaces and (Ω, o) = ([0,1], ∨).
In particular this allows us to strengthen the results of <cit.>, where a logic for trace equivalence semantics with M_0 = is shown to be expressive. By combining <Ref> and the third item in the example above, we obtain that the logic with 𝒪 = {∨} is strongly expressive, meaning that any homomorphism of the join semilattices ((^n), ∪) and ({⊥, ⊤}, ∨) can be expressed as a modal formula.
ยง NEGATIVE RESULTS
For probabilistic metric trace semantics, we have the following
negative result:
Let โ = (ฮ, ๐ช, ฮ) be a modal logic
with unary modalities, equipped with a coalgebraic semantics in the
sense recalled in Sectionย <ref> for the functor
(โ-) defining probabilistic metric
transition systems
(Exampleย <ref>.<ref>), over a space of labels that is not discrete. Then logical
distance underย โ fails to coincide with probabilistic
metric trace distance.
The proof uses the probabilistic metric transition systems shown in Figureย <ref>. In other words, no branching-time quantitative modal logic
for probabilistic metric transition systems has a compositionally
defined fragment that characterizes probabilistic metric trace
distance, as long as modalities are restricted to be unary. We
emphasize that the above result does not assume thatย โ is
a graded logic in the sense of Definitionย <ref>. We
leave the question of whether a characteristic logic with higher-arity
modalities exists as an open problem.
For probabilistic trace semantics (with discrete label space), we have
the following more restricted negative result:
Letย โ = (ฮ, ๐ช, ฮ) be a modal logic,
equipped with a coalgebraic semantics in the sense recalled in
Sectionย <ref> for the functor
(ร-) defining probabilistic metric
transition systems (Exampleย <ref>.<ref>),
and with ฮ={a| aโ} interpreted as per
Exampleย <ref>.<ref>. Then
logical distance underย โ fails to coincide with
probabilistic trace distance.
In other words, the standard branching-time quantitative probabilistic modal
logicย <cit.>, whose fragment without
propositional operators characterizes probabilistic trace
equivalenceย <cit.>, fails to have a
fragment that characterizes probabilistic trace distance.
ยง CONCLUSIONS
We have developed a generic framework for
linear-time/branching-time spectra of behavioural distances on
state-based systems in coalgebraic generality, covering, for instance,
metric, probabilistic, and fuzzy transition systems. The key
abstraction in the framework is the notion of a graded monad on the
category of metric spaces, and an arising notion of quantitative
graded semantics. We have provided a graded quantitative algebraic
system for the description of such graded monads (extending the
existing non-graded systemย <cit.>), as well as
a notion of graded logic that is invariant under the given graded
semantics, in the sense of being nonexpansive w.r.t. behavioural
distance. We have shown that such graded logics are expressive
under natural conditions, a criterion that we have exploited to show
expressiveness of logics for metric traces and fuzzy traces,
respectively. On the other hand, we have established a number of maybe
unexpected negative results; in particular, we have shown that the
expected logic for probabilistic trace distance fails to be
expressive, and that there is no expressive logic at all, in a precise
sense, for probabilistic metric trace distance.
One important next step in the development will be to identify a
generic game-based characterization of behavioural distances in the
framework of graded semantics, generalizing work specific to metric
transition systemsย <cit.> and building
on game-based concepts for two-valued graded
semanticsย <cit.>.
plainurl
ยง DETAILS AND OMITTED PROOFS
ยง.ยง Proof of Proposition <ref>
Let ๐ฏ = (ฮฃ, ฮด, E) be a depth-1 graded
quantitative equational theory and ๐ the graded monad it
induces.
* We show that Diagramย (<ref>) is a
coequalizer. Elements of M_1M_nX are represented as equivalence
classesย [s] of depth-1-terms over depth-n terms over
variables fromย X; we briefly refer to such termsย s as
layered terms. More presicely speaking, the element of
[s]โ M_1M_nX represented byย s is formed by first taking the
equivalence classes of the lower depth-n terms, and then taking
the equivalence class of the depth-1 tern over M_nX thus
obtained. We writeย sฬ
for the collapse ofย s into a
deptn-n+1 term overย X, soย sฬ
represents the element
ฮผ^10([s])โ M_n+1X. Given layered termsย s,t,
d(ฮผ^1n_X([s]), ฮผ^1n_X([t])) = ฯต implies that
there is a proof for sฬ
=_ฯตtฬ
in the
contextย ฮ_X={x=_ฮด y| x,yโ X, d(x,y)=ฮด}.
Approximate equalities of layered terms inย M_1M_nX are derived
by regarding the lower depth-n terms as variables, assembled in
a contextย ฮ_0 containing all derivable approximate
equalities among then, and then applying the usual derivation
system to the upper depth-1 terms. The coequalizer of
M_1ฮผ^0n_X and ฮผ^10M_nX as inย (<ref>) is
formed by additionally allowing derivation steps using strict
equalities where depth-0 operations are shifted from the upper
depth-1 layer to the lower depth-n layer. Equivalently (by
(๐ง๐๐ฑ๐ฉ), which on strict equalities acts like a
congruence rule), this amounts to using as assumptions, instead of
justย ฮ_0, all approximate equalities u=_ฯต v among
depth-0 termsย u,v over depth-n terms overย X (i.e. representatives of elements of M_0M_nX) that hold in M_nX
(i.e. the elements of M_0M_nX represented byย u,v are
identified up toย ฯต by ฮผ^0n); that is, we assume, in
the upper-layer depth-1 derivations, not only the metric
structure ofย M_nX but also the structure of M_nX as an
M_0-algebra. We writeย ฮ for the set of these assumptions.
We now claim that whenever sฬ
=_ฯตtฬ
is derivable
in contextย ฮ_X for layered termsย s,t, then
s=_ฯต t is derivable, as a depth-1 approximate equality,
from the assumptions inย ฮ (of course, like the usual
assumptions on variables, the assumptions inย ฮ cannot be
substituted into). By the above discussion, it follows
thatย (<ref>) is indeed a coequalizer. We proceed by
induction on the derivation of sฬ
=_ฯตtฬ
,
distinguishing cases on the last step. We note that
sinceย sฬ
, tฬ
have depth at leastย 1, the case for
(๐๐ฌ๐ฌ๐ง) does not occur. The remaining cases are as
follows.
* (๐ซ๐๐๐ฅ): In this case, the layered terms s
andย t collapse into syntactically identical terms, which
implies that they differ only by shifting depth-0 operations
between the upper depth-1 layer and the lower depth-n
layer, and hence are derivably equal underย ฮ.
* Steps using (sym), (triang),
(wk), and (arch) can just be copied. As an
example, we treat the case for (๐ญ๐ซ๐ข๐๐ง๐ ) in detail:
Since approximate equalities derivable in contextย ฮ_X are
uniform-depth and use only variables fromย ฮ_X, the
intermediate term used in the last step is necessarily a
depth-n+1 term overย X, and therefore a collapseย wฬ
of
some layered termย w. That is, we have concluded
sฬ
=_ฯตtฬ
from sฬ
=_ฮด_1wฬ
and
wฬ
=_ฮด_2tฬ
where
ฯต=ฮด_1+ฮด_2. By induction, s=_ฮด_1w and
w=_ฮด_2t are derivable underย ฮ, and then the same
holds for s=_ฯต t.
* (๐ง๐๐ฑ๐ฉ): Nonexpansiveness of an operationย f is
equivalently phrased as a basic quantitative inference
x_1=_ฯต y_1,โฆ,x_n=_ฯต y_nโข
f(x_1,โฆ,x_n)=_ฯต f(y_1,โฆ,y_n), so we omit this
case, referring to the case for (๐๐ฑ).
* (๐๐ฑ): In this case, we have applied an axiom
ฮโข u=_ฯต v, and derived
uฯ=_ฯต vฯ from ฯ(x)=_ฮดฯ(y)
for all (x=_ฮด y)โฮ. First we consider the case
thatย u,v are depth-0. Then for (x=_ฮด y)โฮ,
ฯ(x) and ฯ(y) have depth n+1, and hence arise by
collapsing layered terms w_x, w_y; by induction,
w_x=_ฮด w_y is derivable underย ฮ, and again
applying ฮโข u=_ฯต v, we obtain that
uฯ'=vฯ' is derivable underย ฮ where ฯ'
is the substitution given by ฯ'(x)=w_x. Since the layered
terms uฯ' and vฯ' collapse to sฬ
andย tฬ
, respectively, they differ fromย s andย t,
respectively, only by shifting depth-0 operations between the
layers, and hence s=_ฯต t is derivable underย ฮ.
The remaining case is that u,v have depthย 1. In this case,
for (x=_ฮด y)โฮ, ฯ(x) and ฯ(y) have
depthย n, and ฯ(x)=_ฮดฯ(y) is derivable, hence
inย ฮ. We write u_ฯ andย v_ฯ for the layered
terms with depth-1 layer given byย u andย v, respectively,
and depth-n layer given byย ฯ. Then
u_ฯ=_ฯต v_ฯ is derivable underย ฮ. The
layered termsย u_ฯ and v_ฯ differ fromย s andย t,
respectively, at most by shifting depth-0 operations between
the depth-1 layer and the depth-n layer, so s=_ฯต t
is also derivable underย ฮ.
* Second, we show that the ฮผ^nk are epi. The ฮผ^1k
are (regular) epimorphisms since diagram <ref> is a
coequalizer diagram. This implies that all ฮผ^nk are epi. To
see this, first note that each ฮผ^0k is split epi by the unit
law ฮผ^0kฮท M_k = id_M_k. We show that ฮผ^nk is epi
by induction on n. The base cases n โค 1 are already
discharged. For the step fromย n to n+1, recall the associative
law μ^{n+1,k}μ^{1n}M_k = μ^{1,n+k}M_1μ^{nk}. We have
that μ^{1,n+k} is epi, and by induction, so is
M_1μ^{nk}. Thus, both sides of the associative law are epi. It
follows that ฮผ^(n+1)k is epi as desired.
ยง.ยง Details on Presentations of Trace Monads (<Ref>)
Fix k [0,1]^2 โ [0,1] such that
* x + y โฅ z and x' + y' โฅ z' implies k(x, x') + k(y, y') โฅ k(z,z').
* k(x, y) = 0 if and only if x = y = 0
Put d_A โ B((a, b),(a', b')) = k(d(a, a'), d(b, b')).
The function d_Aโ B defined above is a metric on the set
A ร B.
For reflexivity, we have that d_Aโ B((a,b), (a,b)) = k(d_A(a, a), d_B(b,b)) = k(0, 0) = 0.
For positivity, suppose that d_{A⊗B}((a,b), (a',b')) = 0. Then k(d(a, a'), d(b, b')) = 0, implying d(a, a') = 0 and thus a = a', and similarly b = b'; therefore (a,b) = (a', b').
For the triangle inequality, we have that
d_Aโ B((a, b), (a', b')) + d_Aโ B((a', b'), (aโ, bโ))
= k(d(a, a'), d(b,b')) + k(d(a', aโ), d(b', bโ))
โฅ k(d(a, aโ), d(b, bโ))
= d_Aโ B((a, b), (aโ, bโ)).
The p-norms for 1 ≤ p < ∞ with discount factor δ ∈ (0, 1], i.e. k(x, y) = (x^p + (δ y)^p)^{1/p}, satisfy these conditions.
For the second condition: it is clear that k(0, 0) = 0. So suppose
k(x, y) = 0. This is the case if and only if
x^p + (δ y)^p = 0, which in turn implies x = y = 0. For the
first condition, suppose that x + y ≥ z and x' + y' ≥ z'. Then
k(x, x') + k(y, y')
= (x^p + (δ x')^p)^{1/p} + (y^p + (δ y')^p)^{1/p}
≥ ((x + y)^p + (δ x' + δ y')^p)^{1/p}   (Minkowski inequality)
≥ (z^p + (δ z')^p)^{1/p}   (monotonicity of +, (−)^{1/p} and δ·)
= k(z, z')
The sup norm k(x, y) = xโจฮด y satisfies these conditions.
Satisfaction of the second condition is obvious, we show the first condition. Assume x + y โฅ z and x' + y'โฅ z'. Then k(x, x') + k(y, y') = (xโจฮด x') + (yโจฮด y') โฅ (x+y)โจ (ฮด x' +ฮด y') โฅ zโจฮด z' = k(z, z')
ยง.ยง Proof of Lemma <ref>
Let (a, s), (b,t) โโ TX where
s = g(x_1,โฆ, x_n) and t = h(y_1, โฆ, y_m) are terms of
TX, written in the algebraic equational theory associated with
T. Then we have that d((a,s), (b,t)) = d(a, b) โ d(s,
t). Therefore
d(ฮป_X(a, s), ฮป_X(b, t))
= d(g((a,x_1), โฆ, (a,x_n)), h(((b, y_1), โฆ, (b, y_m))))
โค d(g((a,x_1), โฆ, (a,x_n)), g(((b, x_1), โฆ, (b, x_n))))
โ d(g((b,x_1), โฆ, (b,x_n)), h(((b, y_1), โฆ, (b, y_m))))
โค d(a, b) โ d(g((b,x_1), โฆ, (b,x_n)), h(((b, y_1), โฆ, (b, y_m))))
= d(a, b) โ d(s, t) = d((a,s), (b,t))
ยง.ยง Proof of Lemma <ref>
We proceed in three steps. First, we show that (M_nX)_{n ∈ ℕ} is a
quantitative graded ๐ฏ[] algebra, i.e. a quantitative
algebra <cit.>, extended with grades in the
obvious way (generalized from the set-based
conceptย <cit.>). Second, we show that
(M_nX)_{n ∈ ℕ} is generated from X via the operations of
๐ฏ[]. Lastly, we prove completeness of the axioms with
respect to this algebra. We interpret depth-0 operations on
M_nX=T(^โ nโ X), ensuring satisfaction
ofย ๐ฏ, using the fact thatย T is the monad generated
byย ๐ฏ, so that every setย TY is a free
๐ฏ-algebra. Depth-1 operations a are given by the
functions
a_n T(^โ nโ-) โ T(^โ
n+1โ-) defined by
a_n(s) = ฮป_^โ nโ X(a,s). For satisfaction
of the distributivity axiom, let
v_1,โฆ,v_n โ T(^โ nโ X), represented as
๐ฏ-terms, and letย g be an n-ary operation
ofย ๐ฏ; then
a(g(v_1,โฆ,v_n))=ฮป(a,g(v_1,โฆ,v_n))= Tโจ a,
๐_^โ nโ Xโฉ(t) = g((a, v_1), โฆ, (a,
v_n)). For the distance axiom
x =_ฯต y โข a(x) =_k(d(a, b), ฯต) b(y), let
s, t โ T(^โ nโ-).
Then we have
d(a(s), b(t)) = d(ฮป_X(a, s), ฮป_X(b, t))
โค d_โ T(^โ nโ X)((a,s),(b,t))
= k(d(a, b), d(s,t)),
using non-expansiveness ofย ฮป in the second step.
Generatedness ofย M_nX under these operations is then clear. For
completeness, we show that low distances in M_nX are derivable using
quantitative equational reasoning: Let
s, t โ M_nX=T(^โ nโ X). By generatedness, we can
write s,t as terms s=g(ฯ_1(x_1), โฆ, ฯ_n(x_n)),
t=h(ฯ_1(y_1), โฆ, ฯ_m(y_m)) where g,h are
๐ฏ-terms, ฯ_i,ฯ_iโ^โ n, and
x_i,y_iโ X for i=1,โฆ,n. Let ฮ contain all distances
among x_1,โฆ,x_n,y_1,โฆ,y_n. Assume d(s, t) โคฯต,
then we need to show that ฮโข s =_ฯต t. By
completeness of the theoryย ๐ฏ forย T, it is clear that
ฮ' โข s =_ฯต t where ฮ' contains all distances
among the trace-poststate pairs
ฯ_1(x_1),โฆ,ฯ_n(x_n)),ฯ_1(y_1),โฆ,ฯ_n(y_n)). So
we reduce to proving that distances of these traces can be derived,
i.e.
ฮโข (a_1, โฆ, a_n, x) =_ฯต' (b_1, โฆ, b_n,
y) if
d((a_1, โฆ, a_n, x), (b_1, โฆ, b_n, y)) โคฯต'. This
is immediate by induction over n, where the inductive step applies
the distance axiom.
ยง.ยง Proof of Theorem <ref>
We define for each depth-n formulaย ฯ an M_0-algebra
homomorphism ฯ_๐ M_n1 โฮฉ and show
that on a coalgebra (X, ฮณ), we have
ฯ_ฮณ = ฯ_๐โฮณ^(n). The
claim then follows from the fact that the ฯ_๐ are
nonexpansive and
d^ฮฑ(x, y) โค d_M_n1(ฮณ^(n)(x), ฮณ^(n)(y)) for
x, y โ X. We define ฯ_๐ recursively as
follows.
* c_๐ = M_01 M_0ฮฉฮฉ for c โฮ;
* p(ฯ_1, โฆ, ฯ_k)_๐ = pโโจฯ_1_๐, โฆ, ฯ_k _๐โฉ for p โ๐ช k-ary;
* ฮปฯ_๐ = f(ฯ'_๐) for
ฮปโฮ and ฮป = f โฮฑ_ฮฉ as
per Definitionย <ref>.
Here, the definition of modal operators is by freeness of the canonical algebra, that is,
f(ฯ'_๐) is the unique morphism that makes the following square commute:
M_1M_n1 [r, "M_1ฯ'_๐"] [d, "ฮผ^1n"] M_1ฮฉ[d, "f"]
M_n+11 [r, "f(ฯ'_๐)"] ฮฉ
It is straightforward to show by induction on the depth of ฯ that
the morphism ฯ_๐ defines a homomorphism of
M_0-algebras (M_n1, ฮผ^0n) and (ฮฉ, o), which is needed
for ฮปฯ_๐ to be defined.
Given a coalgebra (X, ฮณ) and a depth-n formula ฯ of
โ, we now show by induction on ฯ that
ฯ_ฮณ = ฯ_๐โฮณ^(n).
For the case of ฯ = c โฮ we have, by unfolding
definitions, that c_ฮณ = ฤโ !_X and
ฯ_๐โฮณ^(n) = oโ M_0ฤโ
M_0!_X โฮท_X, which are the outer paths in the following
diagram:
X [r, "!"] [d, "ฮท_X"] 1 [d, "ฮท_1"] [r, "ฤ"] ฮฉ[d, "ฮท_ฮฉ"] [rd, "id"]
M_0X [r, "M_0!"] M_01 [r, "M_0 ฤ"] M_0ฮฉ[r, "o"] ฮฉ
The squares commute by naturality of ฮท, while commutativity of the triangle is implied by o being an M_0-algebra.
The step for formulas of the form ฯ = p(ฯ_1, โฆ, ฯ_n)
is immediate from the definitions. For ฯ = ฮปฯ' with
ฯ' being of uniform depthย n, we have
ฯ_ฮณ = ฮปโ Gฯ'_ฮณโฮณ
= f โฮฑ_ฮฉโ Gฯ'_ฮณโฮณ
= f โ M_1ฯ'_ฮณโฮฑ_Xโฮณ naturality of ฮฑ
= f โ M_1ฯ'_๐โ M_1ฮณ^(n)โฮฑ_Xโฮณ induction
=f(ฯ'_๐) โฮผ^1nโ M_1ฮณ^(n)โฮฑ_Xโฮณ <ref>
= ฯ_๐โฮณ^(n+1) definitions
ยง.ยง Details for Remark <ref>
We make explicit the fact that given an initial cone ๐ of -homomorphisms (i.e. quantitative join semilattice homomorphisms) Aโ [0,1], the closure of ๐ under all admissible propositional operators (i.e. -homomorphisms [0,1]^kโ[0,1]) may not be dense in the space of all -homomorphisms Aโ [0,1], even whenย A is free.
Take for instance the -algebra (X, d_X) where X = {a, b, c} with d(a,b) = d(b,c) = 0.3 and d(a,c) = 0.4.
We attempt to recover the -homomorphism f (X, d_X) โ [0,1] uniquely defined by f({a}) = 0.6, f({b}) = 0.8, f({c}) = 1 and f(∅) = 0.
We define a set ๐ as above as consisting of all join semilattice homomorphisms g where either a) g โฐf or b) g({c}) < 1. This set of functions is initial: for all A, B, a function g_A,B โ๐ can be chosen such that g_A,B(A) = f(A) and g_A,B(B) = f(B). Such a g_A,B can always be found:
* If c โA and c โB then there must be some function g_A,B with the required properties such that g_A,B({c}) < 1.
* If cโ A and c โ B then we can assume that g_A,B(A) = g_A,B(B) = 1 (otherwise g_A,B({c}) < 1 by join continuity). Thus the constant function g_A,B(x) = 1 satisfies the required properties and g_A,Bโฐf
* If (w.l.o.g.) c โ A and c โB then we can assume that g_A,B(A) = g_A,B({c}) = 1.
* If b โB we can choose g_A,B such that g_A,B({b}) > 0.8, thus g_A,Bโฐf
* If b โ B we know that g_A,B(B) โฅ 0.7 (by nonexpansivity and join continuity). We can then choose g_A,B such that g_A,B({a}) = g_A,B(B) > 0.7, thus g_A,Bโฐf.
We show that ๐ is also closed under the propositional
operators mentioned above: Let o [0, 1]^n โ [0, 1]
be a join semilattice homomorphism, and let
g_1,โฆ,g_nโ๐. We have to show that
oโจ g_iโฉโ๐. It is clear that
oโจ g_iโฉ is a join semilattice morphism. For the
remaining property, we distinguish cases as follows.
* Suppose that o(1,โฆ,1) < 1. Then by monotonicity of join
semilattice morphisms,
o(โจ g_i โฉ_i โค n({c})) < 1, so
oโจ g_i โฉโ๐.
* Otherwise, o(1^n) = 1. By join continuity, the
setย J={jโ{1,โฆ,n}| o(0^j-110^n-j-1) = 1} is
nonempty. Since o preserves the empty join, i.e. o(0^n) = 0, we
know by nonexpansiveness that
o(0^j-1v0^n-j-1) = v for all
jโ J.
If g_j({c}) < 1 for all j โ J, then
o(โจ g_i โฉ_i โค n({c})) < 1 by join continuity,
so oโจ g_i โฉโ๐ as claimed. Otherwise, we
have j โ J such that g_j({c}) = 1. Since
g_jโ๐, this means that we have z โ X such that
g_j(z) > f(z), implying byย (<ref>) that
o(0^j-1g_j(z)0^n-j-1) = g_j(z) > f(z) and thus
o(โจ g_i โฉ_i โค n(z)) > f(z) by monotonicity. Thus,
oโจ g_i โฉโฐf, so
oโจ g_i โฉโ๐.
On the other hand, it is clear that ๐ is not dense in
the space of all -homomorphisms (X) โ [0,1].
ยง.ยง Details for Theorem <ref>
We have to show that the closure of the family of
maps
{ฯ: M_n1 โ [0, 1] |ฯ is
a depth-n formula}
under the propositional operators in ๐ช has property
ฮฆ and is thus initial for eachย n. We proceed by
induction on n. The base case n = 0 is immediate by ฮฆ-type
depth-0 separation. For the inductive step let ๐ denote
the set of evaluations M_n1 โฮฉ of depth-n
formulas. By the induction hypothesis, ๐ has property
ฮฆ. By definition,ย ๐ is closed under propositional
operators in
๐ช.
By ฮฆ-type depth-1 separation, it follows that set
{ L (ฯ) | L โฮ, ฯ a depth-n formula}
has property ฮฆ, proving the claim.
ยง.ยง Details on Metric Streams
In the following, we write โ^ for the logic of
metric streams featuring only the truth constantย 1 and
modalitiesย a for aโ.
โ^ is normed isometric depth-0 and normed isometric depth-1 separating.
For normed isometric depth-0 separation, note that M_01 = 1 and
that โค = M_0โคฬ is defined by
* โฆ 1, so that {โค} is normed
isometric. For normed isometric depth-1 separation, note that
since M_0=๐ in the present case, canonical M_1-algebras have
the form ๐ GA_0โ A_1=GA_0=ร A_0. Letย A be
such a canonical M_1-algebra, and let ๐ be a normed
isometric set of morphisms A_0 โฮฉ. Let
(a, v), (b, w) โ M_1A_0=ร A_0. Sinceย ๐ is
normed isometric, there is f โ๐ such that f(v) = 1
and f(w) = 1- d(v, w). To show that ฮ(๐) is
normed isometric, we show that
a(f)โฮ(๐) exhibits the required
properties. On the one hand, we have
a^#(f)((a, v))
= a((a, f(v))
= min{1- d(a, a), f(v)}
= f(v)
= 1
and on the other hand,
a^#(f)((b, w))
= a((b, f(w)))
= min{1- d(a, b), f(w)}
= min{1- d(a, b), 1- d(v, w)}
= 1 - max{d(a, b), d(v, w)}
= 1- d((a, v), (b, w)).
โ^ is not initial-type depth-1 separating.
Let = {a, b} with d(a, b) = 0.8. Let A be a canonical
M_1-algebra (described as in the proof of
Lemmaย <ref>) with A_0 = {v, w} where
d(v, w) = 0.5. The map f A_0โฮฉ defined by
f(w) = 0.25 and f(v) = 0.75 is thus initial; we show that
ฮ({f})={a^#(f),b^#(f)} (a
set of maps A_1=ร A_0โฮฉ) is not, thus proving the
claim (recall that there are no propositional operators). For
a^#(f), we have
|a^#(f)((a, v)) - a^#(f)((b, w))|
=|a((a, f(v))) - a((b, f(w))|
=|min{1- d(a, a), f(v)} - min{1- d(a, b), f(w)}|
=|f(v) - min{1- d(a, b), f(w)}|
=|0.75 - min{0.2, 0.25}| = 0.55
Moreover, for b^#(f), we have
|b^#(f)((a, v)) - b^#(f)((b, w))|
=|b((a, f(v))) - b((b, f(w))|
=|min{1- d(b, a), f(v)} - min{1- d(b, b), f(w)}|
=|min{1- d(b, a), f(v)} - f(w)}|
=|min{0.2, 0.75} - 0.25}| = 0.05.
Since d((a,v), (b,w)) = 0.8, this shows that
ฮ(๐) is not initial.
ยง.ยง Details on Metric Transition Systems
The theory in
Exampleย <ref>.<ref> induces the
graded monad (L^n ร-)
By <Ref>, it suffices to show that the natural
transformation
ฮปร-โ(ร-)
given by ฮป_X(a, S) = {(a, s) | sโ S} is
nonexpansive. Let (a, S), (b, T) โร X for a
metric spaceย X. Then
d(ฮป_X(a, S), ฮป_X(b, T))
= (โ_sโ Sโ_tโ Td((a, s), (b, t)))โจ( โ_tโ Tโ_sโ Sd((a, s), (b, t)))
= (โ_sโ Sโ_tโ Td(a, b) โจ d(s, t))โจ( โ_tโ Tโ_sโ Sd(a,b) โจ d(s, t))
= d(a, b) โจ(โ_sโ Sโ_tโ Td(s, t))โจ( โ_tโ Tโ_sโ Sd(s, t))
= d((a, S), (b, T)).
The logic for trace semantics given in
Exampleย <ref>.<ref> is
expressive.
As indicated in
Exampleย <ref>.<ref>, we use
Theoremย <ref>, with normed isometry as the initiality
invariant. Depth-0 separation is straightforward. For depth-1
separation, we proceed according to
Remarkย <ref> and exploit that canonical
M_1-algebras of the formย M_n1 are free as M_0-algebras, which
in the case at hand means they are finite-powerset quantitative join
semilattices carrying the Hausdorff metric. That is, we are given a
canonical M_1-algebraย A such that A_0=(X) whereย X is a
metric space, and then A_1=(ร X). Let ๐
be a normed isometric set of nonexpansive join-semilattice morphisms
Aโฮฉ. We have to show that ฮ(๐) is normed
isometric. So let S,Tโ(ร X) such that
d(S,T)>ฯต. Without loss of generality this means that we
have (a,x)โ S such that d((a,x),T)>ฯต. Put
C_a,ฯต(T)={yโ X|โ (b,y)โ T. d(a,b)โคฯต}.
Then d({x},C_a,ฯต(T))>ฯต inย A_0=(X). To see
this, let yโ C_a,ฯต(T), i.e. (b,y)โ T for someย b
such that d(a,b)โคฯต. Then d((a,x),(b,y))>ฯต
because d((a,x),T)>ฯต, and hence d(x,y)>ฯต, as
required.
It follows that
d({x}โช C_a,ฯต(T),C_a,ฯต(T))>ฯต.
Sinceย ๐ is normed isometric, we thus have
fโ๐ such that
|f({x}โช C_a,ฯต(T))-f(C_a,ฯต(T))|>ฯต
and moreover
f({x}โช
C_a,ฯต(T))โจ f(C_a,ฯต(T))=1. Sinceย f, being a join
semilattice morphism, is monotone w.r.t. set inclusion, this
implies that f({x}โช C_a,ฯต(T))=1 and
f(C_a,ฯต(T))<1-ฯต, and moreover
f({x}โช C_a,ฯต(T))=f(x)โจ f(C_a,ฯต(T)), so
f({x})=1. It follows that a(f)(S)=1; it remains to
show that a(f)(T)<1-ฯต. So let (b,y)โ T; we
have to show that f(y)โง (1-d(a,b))<1-ฯต. We distinguish
cases:
* d(a,b)โคฯต: Then yโ C_a,ฯต(T). Since
f(C_a,ฯต(T))<1-ฯต and f preserves joins, we have
f(y)<1-ฯต, which implies the claim.
* d(a,b)>ฯต: Then 1-d(a,b)<1-ฯต, which implies
the claim.
ยง.ยง Details on Fuzzy Trace Semantics
ยง.ยง.ยง The Fuzzy Finite Powerset Monad
The fuzzy finite powerset monad _ฯ on is given as
follows. For a setย X, _ฯ X is the set of maps
Xโ[0,1] with finite support. For a mapย f Xโ Y,
_ฯ f takes fuzzy direct images; explicitly, for
Sโ_ฯ X and yโ Y,
_ฯ f(S)(y)=โ_xโ X| f(x)=yS(x). The unit
ฮท takes singletons; explicitly, ฮท_X(x)(x)=1, and
ฮท_X(x)(x)=0 for yโ x. Finally, the multiplicationย ฮผ
takes fuzzy big unions; explicitly,
ฮผ_X(๐)(x)=โ_Sโ_ฯ(X)๐(S) S(x).
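For concreteness, a small worked instance of these operations with assumed values (reading the partly garbled product ๐(S) S(x) in the multiplication as the meet ๐(S) ∧ S(x), matching the ([0,1],∧)-action used in the presentation further below): let X = {x, y}, let S be the fuzzy set with S(x) = 0.7 and S(y) = 0.4, and let f: X โ 1 be the unique map into a singleton. Then the fuzzy direct image is _ฯ f(S)(*) = S(x) ∨ S(y) = 0.7. If moreover ๐ assigns weight 0.5 to S and weight 0.9 to T with T(x) = 0.6, then ฮผ_X(๐)(x) = (0.5 ∧ S(x)) ∨ (0.9 ∧ T(x)) = (0.5 ∧ 0.7) ∨ (0.9 ∧ 0.6) = 0.6.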
We liftย _ฯ to a functor _ฯ on metric spaces by means of the
Kantorovich lifting
constructionย <cit.>, applied to the
evaluation function __ฯ[0,1]โ[0,1]
given by
_ S=โ_vโ [0,1]S(v) v,
equivalently presented as the predicate
liftingย [0,1]^-โ [0,1]^_ฯ given by
_X(f)(S)=โ_xโ XS(x) f(x)
for fโ[0,1]^X and Sโ_ฯ(X). Explicitly, for a metric
spaceย X=(X,d), the metric on X is thus given by
d(S,T)=โ_fโ( X, [0,1])|_X(f)(S)- _X(f)(T)|.
This metric has an equivalent Hausdorff-style characterizationย <cit.> as
d(S,T)=(โ_xโ_y (S(x)โ T(y))โจ (S(x)โง d(x,y)))โจ
(โ_yโ_x (T(y)โ S(x))โจ (T(y)โง d(x,y)))
(in loc. cit., only the asymmetric case is mentioned, but
noteย <cit.>[In more detail, one applies <cit.>
to obtain that the Kantorovich lax
extensionย <cit.> agrees with the
Kantorovich liftingย <cit.> on
pseudometrics when we additionally consider the dual modalityย
ofย ; this Kantorovich lifting, in turn, is easily seen to
coincide withย (<ref>). On the other hand, applying the
characterization given
inย <cit.> both
toย and, dualizing appropriately, toย in the
Kantorovich lax extension yieldsย (<ref>).]).
This metric in fact lifts the monadย _ฯ to a monad
onย : The unit is clearly nonexpansive. Nonexpansiveness of the
multiplication, i.e. of fuzzy big union, is seen by means of the
Kantorovich descriptionย <ref>; in fact, it suffices to
show that every map
_X(f)_ฯ Xโ[0,1]โ
_ฯ1 is a
morphism of _ฯ-algebras. Indeed, we can then argue as
follows: By definition, the cone of all _X(f) is initial, so
it suffices to show that _X(f)ฮผ_X is nonexpansive for
allย f; but _ฯ-homomorphy of _X(f) means that
these maps equal ฮผ_1_ฯ(f). The
mapย _ฯ(f) is nonexpansive because (f)
is nonexpansive andย _ฯ liftsย _ฯ;
nonexpansiveness of ฮผ_1 follows from the fact that join and meet
are nonexpansive operations on [0,1].
So we check that _X(f) is a morphism of
_ฯ-algebras. Note that under the isomorphism
_ฯ(1)โ
[0,1], ฮผ_1 corresponds to _,
and under the usual Yoneda
correspondenceย <cit.> between
_ and
,
_X(f)=__ฯ f. But _ฯ f is a
morphism of free -algebras, and so is _,
sinceย ฮผ_1 is.
We claim that _ฯ is presented by the following
quantitative algebraic theory. We have unary operationsย r for all
rโ[0,1], a constantย 0, and a binary operationย +. These are
subject to strict equations saying that 0,+ form a join semilattice
structure; strict equations stating that the operationsย r form an
action of ([0,1],โง), withย 0 as zero element:
1(x)=x r(s(x))=(rโง s)(x) 0(x)=0;
and the following basic quantitative inferences governing distance:
x=_ฯต yโข r(x)=_ฯต s(y)
under the side condition that |r-s|โคฯต. Using
nonexpansiveness of joins, one derives
x_1=_ฯต y_1,โฆ,x_n=_ฯต y_nโขโ_i r_i(x_i) =_ฯตโ_i s_i (y_i),
again under the side condition thatย |r_i-s_i|โคฯต for allย i,
where we use big sum notation in the evident sense.
It is clear that the operations and strict equations induce
_ฯ. To see that the full theory induces _ฯ, it remains to
show two things:
Axiomย (<ref>) is sound overย _ฯ X:
Since we have already shown that _ฯ is a monad, it
suffices to show thatย (<ref>) holds
inย {x_1,โฆ,x_n,y_1,โฆ,y_n} for
x_1,โฆ,x_n,y_1,โฆ,y_nโ X (with every variable interpreted by
itself, and distances inherited fromย X). But this is immediate from
the Hausdorff description of the distance: By symmetry, it suffices to
show that the left-hand term in the maximum inย (<ref>)
is at mostย ฯต. This follows from the fact that by the side
conditions ofย (<ref>),
(r_iโ s_i)โจ (r_i d(x_i,y_i))โค(r_iโ s_i)โจ
d(x_i,y_i)โคฯต for allย i.
Completeness: Low distances inย _ฯ X are derivable.
Let S,Tโ_ฯ X such that d(S,T)โคฯต; read S,T
as terms S=โ_xS(x)(x), T=โ_xT(x)(x). We have to show that
ฮโข S=_ฯต T is derivable where ฮ records the
distances of all elements of the (finite) supports ofย S andย T. Now
d(S,T)โคฯต implies that
โ_xโ_y (S(x)โ T(y))โจ (S(x)โง
d(x,y))โคฯต. That is, for everyย x there exists y_x such
that (S(x)โ T(y_x))โจ (S(x)โง d(x,y_x))โคฯต. This
means that
* S(x)โค T(y_x)โฯต, and
* either d(x,y_x)โคฯต or S(x)โคฯต. In the latter
case, we may take y_x=x, ensuring both
S(x)โค T(y_x)โฯต and, again, d(x,y_x)โคฯต.
By the strict equational laws, it follows that
ฮโข S=โ_d(x,y)โคฯต(S(x) (T(y)โฯต))(x).
Analogously, we obtain
ฮโข T=โ_d(x,y)โคฯต((S(x)โฯต) T(y))(y).
Usingย (<ref>) and equationsย (<ref>)
andย (<ref>), we obtain that
ฮโข S=_ฯต T is derivable as desired.
ยง.ยง.ยง Graded Logic for Fuzzy Trace
Semantics
To see that the logic as given in
Sectionย <ref> is indeed a graded logic, we have to
show that a G[0,1]=M_1[0,1]โ[0,1] is an
M_1-algebra. In terms of the algebraic description ofย ๐,
this means that a should represent term evaluation
w.r.t. the M_0-algebra structure of [0,1] (which consists in the
usual join semilattice structure of [0,1] and additionally the
interpretation of unary operationsย r, for rโ[0,1], as taking
meets withย r) and suitably chosen nonexpansive interpretations
b^a [0,1]โ[0,1] of the unary depth-1
operationsย b for bโ; we take a^a to be the
identity map onย [0,1] for allย a, and b^a to be
constantlyย 0 for bโ a. For depth-0 operations, the fact that
a agrees with term evaluation then just amounts to
the homomorphy condition, which is checked straightforwardly. For the
depth-1 operationย a, we calculate as follows (exploiting that every
depth-0 term over ฮฉ=[0,1] can be normalized to have the form
โ r_i(v_i) with r_i,v_iโ[0,1]):
a(a(โ r_i(v_i)))
= a(โ r_i(a(v_i))) equivalence of depth-1 terms
= โ_i r_iโง v_i definition of a
= a^a(โ_i r_iโง v_i),
and โ_i r_iโง v_i is the evaluation of the term
โ_i r_iโง v_i in [0,1], Finally, for bโ a, we
similarly have
a(b(โ
r_i(v_i)))=a(โ
r_i(b(v_i)))=0=b^a(โ_i r_iโง v_i).
It is straightforward to check that the logic is initial-type depth-0
separating. To show that it is also initial-type depth-1 separating,
let A be a canonical M_1-algebra with free 0-part A_0= X
(so A_1=(ร X)), and let ๐ be an initial
cone of maps A_0โ [0,1]. We have to show that the cone
ฮ(๐) of maps A_1โ [0,1] is again initial. So let
S,Tโ_ฯ(ร X) such that
d(S,T)โฅฯต. W.l.o.g. this means that there exist a,x such
that
โ_b,y(S(a,x)โ T(b,y))โจ (S(a,x)โง
d((a,x),(b,y)))โฅฯต.
All elements whose meet is taken in the above expression are below
S(a,x), and those for bโ a equal S(a,x) and can hence be
omitted. That is, we have
โ_y(S(a,x)โ T(a,y))โจ (S(a,x)โง
d((a,x),(a,y)))โฅฯต.
Now define T_aโ_ฯ X by
T_a=โ_y T(a,y)(y).
By (<ref>),
d(S(a,x)(x),T_a)โฅฯต.
By the Hausdorff-style description of the metric on
(ร X), it is immediate that
d(S(a,x)(x)+T_a,T_a)โฅ d(S(a,x)(x),T_a), so
d(S(a,x)(x)+T_a,T_a)โฅฯต.
Sinceย ๐ is an initial cone and the maps inย ๐
are M_0-morphisms, in particular monotone w.r.t. the ordering of
_ฯ X as a join semilattice, it follows that we have
fโ๐ such that f(T_a)โค c-ฯต where
c=f(S(a,x)(x)+T_a). Sinceย f is an M_0-morphism, it follows that
f(S(a,x)(x))=c, and f(S(a,x)(x))=S(a,x)โง f(x), so
af(S)โฅ S(a,x)โง f(x)=c.
We are thus done once we show that
af(T)โค c-ฯต. But
af(T)=โ_yโ X T(a,y)โง f(y), and for
yโ X, we have
T(a,y)โง f(y)=f(T(a,y)(y))โค f(T_a)โค c-ฯต.
ยง.ยง Proof of <Ref>
Consider the transition systems pictured in
Figureย <ref>. Here, a, b โ with a โ b
and d(a,b) = v < 1. The depth-2 behaviour ofย x is
0.5(a, a) + 0.5(b, b), and that ofย y is 0.5(a, b) + 0.5(b, a).
It is easy to see that the behavioural distance of these length-2
trace distributions is v. We show that the deviations of truth
values of โ-formulas betweenย x andย y stay away from
that value, specifically belowย v^2.
First, note thatย x andย y are behaviourally equivalent up to
depth-1, and hence agree on formulae of depth at mostย 1; we thus
restrict attention to formulas of depth at leastย 2 (where we do not
insist on uniform depth but exclude top-level depth-1 subformulas).
Letย ฮปโฮ
be a modal operator, interpreted as
ฮป(โ [0, 1]) โ [0,1].
The interpretationย ฮป satisfies the equation
ฮป( โ_i โ Ip_i (a_i, โ_jโ J_i q_ij v_ij )) = ฮป( โ_i โ I, jโ J_i p_i q_ij (a_i, v_ij ))
where the outer sum on the left hand side and the sum on the right
hand side are formal sums describing probability distributions while
the inner sum on the left hand side is an arithmetic one.
We show the claim first for all values v_ijโ (0,1). Let x_0 and x_1 be states in coalgebras with probabilistic trace distance 1. By expressiveness, for any ฯต > 0 there is a formula ฯ_ฯต such that d(ฯ_ฯต_ฮณ(x_0), ฯ_ฯต_ฮณ(x_1)) > 1 - ฯต (w.l.o.g. we assume ฯ_ฯต(x_0) < ฯต and ฯ_ฯต(x_1) > 1-ฯต). Now it is possible to create states x_p in a new coalgebra with successor distribution ฮณ(x_p) = ฮณ(x_1) +_p ฮณ(x_0). The probabilistic trace distance of x_0 and x_p is p, while the probabilistic trace distance between x_p and x_1 is 1-p. By adequacy, p - ฯต < ฯ_ฯต(x_p) < p + ฯต. We construct a coalgebra (X,ฮณ) from the disjoint union of the
(X_ij,ฮณ_ij) and two new states l and r, which receive
successor distributions
ฮณ(l) = โ_iโ Ip_i(a_i, โ_jโ J_i q_ij x_v_ij) ฮณ(r) = โ_iโ I, jโ J_i p_iq_ij(a_i, x_v_ij).
It is clear that l and r are probabilistically trace
equivalent. Now (โฯ_ฯต)(l) has a distance of at most ฯต from the left hand side of (<ref>), and symmetrically (โฯ_ฯต)(r) has distance of at most ฯต from the right hand side. Sinceย ฮป is by
assumption non-expansive, we have that
ฮป( โ_i โ Ip_i (a_i, โ_jโ J_i q_ij v_ij )) =
ฮป(lim_ฯตโ 0(โฯ_ฯต)(โ_iโ Ip_i(a_i, โ_jโ J_i q_ij x_v_ij) ) ) =
ฮป( lim_ฯตโ 0(โฯ_ฯต)(โ_iโ I, jโ J_i p_iq_ij(a_i, x_v_ij) ) ) =
ฮป( โ_i โ I, jโ J_i p_i q_ij (a_i, v_ij ))
We call two distributions in (ร[0,1]) equivalent if they are identified under ฮป. By equation (<ref>), any distribution โ_I p_i (a_i, v_i) is equivalent to one of the form โ_I p_iv_i (a_i, 1) + p_i(1-v_i)(a_i, 0). Let ฯ() โ [0,1] be the interpretation of a depth-1 formula, a nonexpansive map. Then we have ฯ(ฮด_a) = v_a and ฯ(ฮด_b) = v_b with |v_a-v_b| โค v. Then we have
ฮปฯ(x) = ฮปโ(โฯ)(0.5(a, ฮด_a) + 0.5(b, ฮด_b))= ฮป(0.5(a, v_a) + 0.5(b, v_b)).
Symmetrically,
ฮปฯ(y) = ฮปโ(โฯ)(0.5(a, ฮด_b) + 0.5(b, ฮด_a)) = ฮป(0.5(a, v_b) + 0.5(b, v_a))
The distribution obtained for x is equivalent to
ฮผ = 0.5v_a(a, 1) + 0.5(1-v_a)(a, 0) + 0.5v_b(b, 1) + 0.5(1-v_b)(b, 0),
while the one obtained for y is equivalent to
ฮฝ = 0.5v_b(a, 1) + 0.5(1-v_b)(a, 0) + 0.5v_a(b, 1) + 0.5(1-v_a)(b, 0).
We give an upper bound on
d(ฮผ, ฮฝ). Without loss of generality assume v_a โค v_b. Then
the following distribution ฮณ is a coupling of ฮผ and ฮฝ:
3ฮณ = 0.5v_a((a,1),(a,1)) + 0.5(1-v_b)((a,0),(a,0))
+ 0.5v_a((b,1),(b,1)) + 0.5(1-v_b)((b,0),(b,0))
+ 0.5(v_b-v_a)((b,1), (a,1)) + 0.5(v_b-v_a)((a,0), (b,0))
By the Kantorovich-Rubinstein duality, d(ฮผ, ฮฝ) is bounded
by ๐ผ_ฮณ d. In the calculation of ๐ผ_ฮณ d,
the first two lines in <Ref> contribute 0. Thus,
๐ผ_ฮณ d = 0.5(v_b-v_a)d((b,1), (a,1)) + 0.5(v_b-v_a)d((a,0), (b,0))
โค 0.5vd((b,1), (a,1)) + 0.5vd((a,0), (b,0))
โค 0.5v^2 + 0.5v^2
โค v^2
Therefore, since d(ฮผ, ฮฝ) โค v^2 and v < 1, there is no nonexpansive morphism h(ร{0,1}) โ [0,1] (and by extension no modal operator) that separates these distributions in the sense that |h(ฮผ) - h(ฮฝ)| approximates v.
We note that for coalgebraic logics in the sense we employ
here, which live natively in the category of metric spaces,
interpretationsย ฮป(โ[0,1])โ[0,1]
of modalitiesย ฮป are nonexpansive by definition. We can
strengthen our above negative result by letting the modal logic
live overย , in the sense that interpretationsย ฮป
can be unrestricted maps; for purposes of the present discussion, we
refer to such modalities as set-based. Under a mild
additional assumption on the logic, we can show that this relaxation
does not actually provide additional room for maneuvering:
If โ is an adequate and expressive set-based
coalgebraic logic for probabilistic metric trace semantics, and
there is a formula ฯ and states x_0 and x_1 in a
coalgebra (X, ฮถ) such that ฯ(x_0) = 0 and
ฯ(x_1) = 1, then ฮป is nonexpansive.
Assume for the sake of contradiction, there are
s, t โ(โ [0,1]), such that
d(ฮป(s), ฮป(t)) > d(s,t). Since the logic is
assumed to be expressive, x_0 and x_1 must have probabilistic
trace distance 1 (and since trace distance is a lower bound of
behavioural distance also behavioural distance 1). We define a new
coalgebra
(X โช{x_p | p โ (0,1)}โช{x̄, ȳ}, ฮถ'),
where ฮถ'(x) = ฮถ(x) if x โ X. For states x_p we
define ฮถ'(x_p) = ฮถ(x_1) +_p ฮถ(x_0). The state x_p is
easily seen to have trace distance p (respectively 1-p) from
x_0 (x_1). Therefore, because of adequacy ฯ(x_p) =
p. For the states x̄, ȳ we define ฮถ'(x̄) = s' and ฮถ'(ȳ) = t', where
s',t' โ(โ{x_p | p โ [0,1]}) are
defined by substituting all values p โ [0,1] in s,t by
x_p. We know that the behavioural distance of x̄ and ȳ is
d(s, t). We then have
d(ฮปฯ(x̄), ฮปฯ(ȳ))
= d(ฮป((โฯ)(s')), ฮป((โฯ)(t')))
= d(ฮป(s), ฮป(t))
> d(s,t),
so โ is not adequate with respect to behavioural
distance, and hence also not adequate with respect to
probabilistic trace distance.
ยง.ยง Proof of Theorem <ref>
We denote by (X, ฮณ) the system in Figureย <ref>, encoded as a coalgebra, with labels a and b having distance 1.
We show that there is no uniform depth formula of type aฯ that separates the states x and y up to their behavioural distance of 1. To this effect, fix a uniform depth formula aฯ.
We make the final calculation explicit, showing that the logical distance is indeed strictly less than the behavioural distance. For this we utilise the semantics -_๐ introduced in the proof of Theorem <ref>. We assume without loss of generality that ฯ_๐ separates the observable behaviours we reach after one step, in the sense that ฯ_๐(ฮด_aaโฆ) = v_a and ฯ_๐(ฮด_bbโฆ) = v_b where |v_a-v_b| โ [0,1]. If this is not the case, the attempt at separating the states fails even more drastically.
We have:
|aฯ_๐(ฮณ^(n)(x)) - aฯ_๐(ฮณ^(n)(y))|
= |a^#(ฯ_๐)(ฮณ^(n)(x)) - a^#(ฯ_๐)(ฮณ^(n)(y))|
=|aโ M_1ฯ(1/2(a, ฮด_aaโฆ) + 1/2(b, ฮด_bbโฆ))
- aโ M_1ฯ(1/2(a, ฮด_bbโฆ) + 1/2(b, ฮด_aaโฆ))|
=|aโ (1/2(a, ฯ(ฮด_aaโฆ)) + 1/2(b, ฯ(ฮด_bbโฆ)))
- aโ (1/2(a, ฯ(ฮด_bbโฆ)) + 1/2(b, ฯ(ฮด_aaโฆ)))|
=|aโ (1/2(a, v_a) + 1/2(b, v_b))
- aโ (1/2(a, v_b) + 1/2(b, v_a))|
= |1/2v_a -1/2v_b| โค1/2
entry_id: http://arxiv.org/abs/2306.03806v1
published: 20230606155034
title: Effects of Markovian noise and cavity disorders on the entanglement dynamics of double Jaynes-Cummings models
authors: Harsh Rathee, Kishore Thapliyal, Anirban Pathak
primary_category: quant-ph
categories: quant-ph
QOLS, Blackett Laboratory, Imperial College London, London SW7 2BW, United Kingdom
Joint Laboratory of Optics, Faculty of Science,
Palackรฝ University, Czech Republic, 17. listopadu 12, 779ย 00
Olomouc, Czech Republic
Department of Physics and Materials Science & Engineering, Jaypee Institute of Information Technology, A 10, Sector 62, Noida, UP 201309, India
Dynamics of double Jaynes-Cummings models are studied in the presence of Markovian noise and cavity disorders with specific attention to entanglement sudden death and revivals. The study is focused on the glassy disorders, which remain unchanged during the observations. The field is initially assumed to be in a vacuum state, while the atoms are considered to be in a specific two-qubit superposition state. Specifically, the study has revealed that the presence of noise, or a nonlinear pump results in interesting behaviors in the entanglement dynamics. Further, entanglement sudden death is observed in the presence of Markovian noise and nonlinear pump. The presence of entanglement sudden deaths and revivals have also been observed in cases where they were absent initially for the chosen states.
The effect of noise on the dynamics of the system is to decay the characteristics, while that of the disorder is to wash them out. On the other hand, the introduction of nonlinearity is found to cause the dynamics of the system to speed up.
Effects of Markovian noise and cavity disorders on the entanglement dynamics of double Jaynes-Cummings models
Anirban Pathak
July 31, 2023
=============================================================================================================
ยง INTRODUCTION
Interest in the field of quantum computation and quantum information theory has been rising rapidly in the recent years. This is essentially due to the promise that quantum computers are substantially more efficient in solving certain type of problems compared to their classical counterparts <cit.>. The advantage that the power of quantum mechanics brings to the fields of computation, communication, and metrology has been well established by the works of Feynman <cit.>, Bennett and Brassard <cit.>, Deutsch <cit.>, and Shor <cit.> among others. An increasing interest is emerging in solidifying this phenomenon of quantum advantage and the study of its utilization in a diverse range of fields <cit.>. The forward to this field is backed by academia and industry alike. While a lot of the proposed applications rely on the lofty goal of building a fault tolerant quantum computer, there have been demonstrated use cases for noisy intermediate-scale quantum (NISQ) devices as well. These range from simulating many body physics <cit.> and studying open quantum systems <cit.> to the study of fundamental symmetries in physics <cit.>.
There are various criteria that the underlying architecture for a quantum computer must be able to follow to allow for universal quantum computation <cit.>. One of these criteria is the ability to prepare fiducial initial states, especially nonclassical states. Proper utilization of these nonclassical states is one of the ways quantum computing is able to provide this advantage over its classical counterpart <cit.>. Lucidly speaking, nonclassical states are the states that do not have a classical analogue. Of these states, one of particular interest is the class of entangled states. One common way of generating these entangled states is through a rather simplistic process of interaction of a two level atom with a monochromatic light field described by the Jaynes-Cummings (JC) model. This model has been studied extensively <cit.> since its development in 1963 <cit.>.
Interestingly, JC model is still relevant and an important tool for studying many aspects of quantum information processing. It forms the cornerstone of cavity quantum electrodynamics (QED) which is used to describe the interaction of qubits with driving microwave fields in the study of superconducting qubits <cit.>. It has also been widely used for the purpose of generation of engineered quantum states <cit.> and the study of non-Markovian evolution of open quantum systems <cit.>. In fact, the JC model (in its original form, as well as various generalized forms <cit.>) is a quite effective way to test and study various quantum phenomena including, but not limited to, generation of Schrรถdinger-cat states <cit.>, entanglement protection <cit.>, catalysis <cit.>, multiphoton blockade <cit.>, fractional revivals <cit.>, quantum state engineering <cit.>, strong squeezing <cit.>, entanglement generation <cit.>, state discrimination <cit.>. Motivated by these observations, here we plan to study the rich dynamics of entangled atoms in the cavities under the influence of noise due to interaction with ambient environment. This investigation will help us to understand how well the resource entangled states retain their entanglement properties under noisy conditions, and determine whether working in certain noise regimes allows the entanglement resource to be revived. We also try to study whether introducing a nonlinear atom-field interaction leads to robustness against noise. We focus on studying the entanglement in the Double JC model which has been a topic of interest in various previous studies <cit.>. To do so by performing simulation, we have performed simulations in a systematic manner using QuTiP <cit.>.
The rest of the paper is organized as follows. In Section <ref>, we discuss linear and nonlinear double JC models with significant attention to nonlinear JC models involving multiphoton interaction and nonlinear driving field. Origin and possible ways of modelling various types of disorder and noise has been discussed in Section <ref>. In Section <ref>, results and analysis related to the dynamics of the linear and nonlinear JC models of our interest are reported in detail with a focus on impact of noise and disorder, with specific attention to their impact on entanglement (produced via the JC type interaction) leading to its sudden death followed by sudden revivals. Finally, the paper is concluded in Section <ref>.
ยง SYSTEM AND MATHEMATICAL DESCRIPTION
ยง.ยง The Jaynes-Cummings Model and its generalizations
Interaction of a two level quantum systems (like an atom) with a quantized electromagnetic field is described by the JC Hamiltonian. As mentioned in the previous section, in its original form and several variants, this is one of the most well-studied models of the matter-field interaction. In this section, we aim to briefly introduce the variants of JC model that will be studied in this work. To begin with, we may mention that in the simplest form of JC model, an atom (or equivalently a two-level quantum system) interacts with the field through a dipole interaction term given by - dฬยทร, where dฬ is the dipole moment operator and ร is the electric field operator. In the quantized case, a single mode electric field in a cavity is given by (after the dipole approximation)
ร = E_0 (รข + รข^โ ),
and the dipole moment operator is
dฬ = d(ฯฬ_+ + ฯฬ_-).
Here, ฯฬ_+ = |eโฉโจl| and ฯฬ_- = |lโฉโจe| with |lโฉ and |eโฉ representing the ground and the excited states of the atom, respectively. Thus, the total Hamiltonian for this case can be defined as <cit.>
ฤค = 1/2ฤงฯ_0
ฯฬ_3 + ฤงฯรข^โ รข + ฤง g (ฯฬ_+ + ฯฬ_-)(รข+รข^โ ),
where ฯฬ_3 = |eโฉโจe| - |lโฉโจl|, ฯ_0 is the frequency of the transition between the ground and excited states of the atom, and ฯ is the frequency of the optical (electric) field. Also, g is the coupling strength that parameterizes the interaction of the atom with the field in the cavity. After making the rotating wave approximation (RWA) and moving to the interaction picture (rotating frame of the field), Hamiltonian (<ref>) simplifies to <cit.>
ฤค_I,RWA = (ฯ_0 - ฯ) 1/2ฤงฯฬ_3 + ฤง g (ฯฬ_+ รข + ฯฬ_-รข^โ ).
In the case of resonance (i.e., for ฯ_0 = ฯ), the Hamiltonian further simplifies to
ฤค_I,RWA = ฤง g (ฯฬ_+ รข + ฯฬ_-รข^โ ).
Such elementary model is still relevant and is often discussed in various contexts, e.g., in conservation of the amount of nonclassicality or transfer of coherence from field to atom (see <cit.> and references therein).
This simple model of matter-field interaction can be generalized in various ways. One such possibility is to consider that two atoms are interacting with two cavities. Such a consideration leads to double JC model and its generalizations to be discussed here.
ยง.ยง.ยง Double JC model
The Hamiltonian of the double JC model under RWA can be obtained easily using Eq. (<ref>) and the same may be explicitly written as <cit.>
โฬ_RWA = 1/2ฤงฯ_0
ฯฬ_3^A + ฤงฯรข^โ รข + ฤง G_A (ฯฬ_+^A รข + ฯฬ_-^A รข^โ ) + 1/2ฤงฯ_0
ฯฬ_3^B + ฤงฯbฬ^โ bฬ + ฤง G_B (ฯฬ_+^B bฬ + ฯฬ_-^A bฬ^โ ).
Here, ฯ_0 is the frequency associated with the |lโฉโ|eโฉ transition for both the atoms, รข (รข^โ ) and bฬ (bฬ^โ ) are the annihilation (creation) operators for the single mode fields coupled with atoms A and B with coupling constants G_A and G_B, respectively. The superscripts A and B on the raising and lowering operators refer to the two atoms.
ยง.ยง.ยง Multi-photon double JC models
We can extend the single and double JC models to study multi-photon interactions and refer to such JC models as multi-photon JC models. Introduction of N-photon interaction in the single JC model transform the Hamiltonian in Eq.ย (<ref>) to (after making the rotating wave approximation) <cit.>
ฤค_RWA = 1/2ฤงฯ_0ฯฬ_3 + ฤงฯรข^โ รข+g(ฯฬ_+รข^N + ฯฬ_- (รข^N)^โ ).
Here, the interaction may be visualized as a physical process in which the transition of an atom from the ground to the excited (excited to ground) state happens in N steps by absorbing (emitting) N photons of equal frequency. Realization of such multi-photon optical processes became possible with the advent of the laser, and now these processes are frequently realized in laboratories (see <cit.> for details). In a similar manner, we can also extend the double JC model to obtain a nonlinear version of the same. The Hamiltonian describing a multi-photon double JC model under RWA (i.e., the multi-photon variant of (<ref>)) can be expressed as
ℋ̂_RWA = 1/2 ℏω_0 σ̂_3^A + ℏω â^† â + ℏ G_A (σ̂_+^A â^N + σ̂_-^A (â^N)^†) + 1/2 ℏω_0 σ̂_3^B + ℏω b̂^† b̂ + ℏ G_B (σ̂_+^B b̂^N + σ̂_-^B (b̂^N)^†).
For the resonant case, we must have ฯ_0 = Nฯ. Another possible route for generalisation or extension of the JC model arises from considering a nonlinearly interacting driving field.
ยง.ยง.ยง JC model in presence of a nonlinearly interacting driving field
The cavity may be driven externally by an interaction of a probe field with either the cavity field or atom. In case of high-intensity lasers, this probe field may also interact with the cavity field through a nonlinear process
<cit.>. The nonlinear driving field can be described by Ĥ_P = ε(t)[â^M e^i(Mω_P t - φ) + H.c.]. We use ε(t) to refer to the strength of the driving field. Here, â is the cavity mode, φ is the phase, M represents the nonlinearity of the field, and ω_P is the single-photon frequency. M=1 yields the linearly driven cavity case.
Inclusion of the pump changes the Hamiltonian of single JC model in the interaction picture (rotating frame of the pump) to <cit.>
ฤค_I,RWA = ฮ_P(รข^โ รข + Nฯ_3/2)
+ [ฯต(t)รข^Me^-iฯ + gรข^Nฯ_+ + H.c.],
where ฮ_P = ฯ-ฯ_P and H.c. corresponds to the Hermitian conjugate terms. The resonant case is ฯ_P = ฯ when M<N, and Nฯ_P = Nฯยฑ gโ(N!) when M = N. Externally driven double JC model can also be written in the same way.
So far we considered some ideal descriptions of double JC models. In what follows, we include some imperfections in the model, such as cavity disorders or noise.
ยง.ยง JC models in presence of cavity disorders and noise
Cavity disorders and noise are two practical aspects that may be incorporated in the studies of JC models. Noise models are frequently discussed, but cavity disorders are relatively less discussed. The origin of cavity disorder can be visualized, for instance, as a manifestation of the effect of the position of an atom inside a cavity leading to fluctuations in the coupling strength g. Here, we focus on the glassy JC model or quenched disorder. In this case, the disorder remains effectively unchanged while the observations are performed. The quantity of interest is obtained after quenched averaging of numerous realizations of the disordered system.
ยง.ยง.ยง JC models with cavity disorders
Disorders modelled as fluctuations in the cavity coupling strength (g) can be included in JC models. Thus, the interaction Hamiltonian (for single JC resonant case) in the presence of a cavity disorder becomes <cit.>
Ĥ_I,RWA = ℏ g(1+δ)(σ̂_+ â + σ̂_- â^†).
A simple comparison between Eqs.ย (<ref>) and (<ref>) would reveal that ฮด in Eq.ย (<ref>) quantifies the fluctuation in the coupling strength g.
Interestingly, there are different types of cavity disorders, which primarily differ from each other in the probability distribution from which ฮด is chosen. However, mainly two types of cavity disorders, namely Gaussian quenched disorder and uniform quenched disorder, have recently been studied <cit.>. These two types of cavity disorders can be briefly described as follows.
* Gaussian quenched disorder: In this case, ฮด is chosen at random from a Gaussian distribution with zero mean and a standard deviation of s. The probability distribution function is given by
P(ฮด) = 1/sโ(2ฯ) e^-1/2(ฮด/s)^2.
* Uniform quenched disorder: In this case, ฮด is distributed uniformly between [-s/2,s/2]. The probability distribution is given as
P(ฮด) =
1/s, when -s/2โคฮดโคs/2
0, otherwise.
Here, we study the impact of these two types of quenched disorders on the dynamics of double JC model. To do so, we require quenched averaging of the quantities under consideration.
Specifically, the averaging is carried out after all other operations have been executed. For example, for quantifying the atom-atom entanglement (say in terms of concurrence C) in double glassy JC system, the concurrence C_ฮด(t) is calculated for the atom-atom state ฯ_ฮด(t) at a fixed time t. The quenched averaging of concurrence can be defined as
C(t) = โซ_-โ^+โC_ฮด(t)P(ฮด)dฮด.
Numerically, N instances of the disorder ฮด are randomly sampled from P(ฮด), labeled as ฮด_i, and the quenched average value is calculated as
C(t) = 1/Nโ_1^NC_ฮด_i(t).
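A minimal Python sketch of this quenched-averaging step is given below; the callback concurrence_of, the disorder strength, and the sample count are illustrative placeholders rather than the settings used in this work.

```python
import numpy as np

def quenched_average(concurrence_of, s=0.1, kind="gaussian", n_samples=200, seed=0):
    """Quenched average of a concurrence curve over the disorder delta.

    concurrence_of(delta) -> 1D array C_delta(t): a hypothetical callback that
    runs one double-JC simulation with couplings rescaled by (1 + delta).
    """
    rng = np.random.default_rng(seed)
    if kind == "gaussian":            # Gaussian quenched disorder, std s
        deltas = rng.normal(0.0, s, n_samples)
    else:                             # uniform quenched disorder on [-s/2, s/2]
        deltas = rng.uniform(-s / 2, s / 2, n_samples)
    # Average the concurrence curves over the disorder realizations
    return np.mean([concurrence_of(d) for d in deltas], axis=0)
```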
This approach allows us to investigate the dynamics of the process in the presence of probabilistic disorders/defects in the setup. Yet another non-ideal behaviour in the JC system can be its inevitable interaction with the ambient environments leading to dissipation and decoherence of the quantum features of the system, viz. entanglement. This can be studied using the master equation in the open quantum system formalism.
ยง.ยง.ยง Open quantum system
The system and environment/reservoir, initially uncorrelated, get correlated due to combined unitary evolution governed by a system-environment Hamiltonian. The system dynamics can be obtained by tracing out the environment as ฯฬ = Tr_env[ฯฬ_tot].
In the case of white noise limit (i.e., after making Born, rotating-wave and Markov approximations in the system-environment interaction), the reduced dynamics of the system (ฯ) can be described by the master equation <cit.>
dฯฬ(t)/dt = -i/ฤง[ฤค(t),ฯฬ(t)] + โ_n 1/2 [2ฤ_nฯฬ(t)ฤ_n^โ - ฯฬ(t)ฤ_n^โ ฤ_n - ฤ_n^โ ฤ_nฯฬ(t)],
where ฤ_n = โ(ฮณ_n)ร_n are the collapse operators, ร_n are the operators through which the system couples to the environment and ฮณ_n are the corresponding rates. Markov approximation entails that the time scale of the decay of the correlations of the environment is much smaller than the time scale of the dynamics of the system (ฯ_sysโซฯ_env).
In case of the double JC model (say in the model described by Eq.ย (<ref>)), decoherence in the system could be induced due to a set of processes.
* Leakage of photons from the cavity into its surroundings with the average photon number n_th: ร_j=รข and ฮณ_j=ฮบ(1+n_th).
* Photons leaking into the
cavity due to thermal excitation: ร_j=รข^โ and ฮณ_j=ฮบ n_th.
* The polarization decay of the atom from the exited state to the ground state: ร_j=ฯฬ_- and ฮณ_j=ฮณ.
* The dephasing of the energy states of the atom: ร_j=ฯฬ_z and ฮณ_j=ฮณ_ฯ.
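As a rough illustration of how such a Lindblad simulation can be set up, the following QuTiP sketch builds the resonant double JC interaction Hamiltonian with the four collapse channels listed above and tracks the atom-atom concurrence along the evolution; the Fock-space truncation, coupling constants, decay rates, and the angle alpha are assumed placeholder values, not the settings used for the figures.

```python
import numpy as np
import qutip as qt

Nf = 5                                   # Fock-space truncation per cavity (assumed)
idc, idq = qt.qeye(Nf), qt.qeye(2)
a   = qt.tensor(qt.destroy(Nf), idc, idq, idq)   # cavity A mode
b   = qt.tensor(idc, qt.destroy(Nf), idq, idq)   # cavity B mode
smA = qt.tensor(idc, idc, qt.destroy(2), idq)    # sigma_- of atom A (|0> ground, |1> excited)
smB = qt.tensor(idc, idc, idq, qt.destroy(2))    # sigma_- of atom B
szA = qt.tensor(idc, idc, qt.sigmaz(), idq)
szB = qt.tensor(idc, idc, idq, qt.sigmaz())

GA, GB = 1.0, 0.9                        # couplings (asymmetric case, assumed values)
# Resonant interaction-picture double JC Hamiltonian (hbar = 1)
H = GA * (smA.dag() * a + smA * a.dag()) + GB * (smB.dag() * b + smB * b.dag())

kappa, gamma, gamma_phi, n_th = 0.05, 0.05, 0.01, 0.1   # decay rates (assumed)
c_ops = [np.sqrt(kappa * (1 + n_th)) * a,  np.sqrt(kappa * n_th) * a.dag(),
         np.sqrt(kappa * (1 + n_th)) * b,  np.sqrt(kappa * n_th) * b.dag(),
         np.sqrt(gamma) * smA,             np.sqrt(gamma) * smB,
         np.sqrt(gamma_phi) * szA,         np.sqrt(gamma_phi) * szB]

# Initial state: entangled atoms, vacuum cavities (alpha assumed pi/4)
alpha = np.pi / 4
atoms = (np.sin(alpha) * qt.tensor(qt.basis(2, 0), qt.basis(2, 1)) +
         np.cos(alpha) * qt.tensor(qt.basis(2, 1), qt.basis(2, 0))).unit()
psi0 = qt.tensor(qt.basis(Nf, 0), qt.basis(Nf, 0), atoms)

tlist = np.linspace(0, 20, 400)
result = qt.mesolve(H, psi0, tlist, c_ops)               # Lindblad evolution
conc = [qt.concurrence(rho.ptrace([2, 3])) for rho in result.states]
```

The quenched-averaging helper sketched earlier can be wired to a wrapper of this simulation that rescales G_A and G_B by (1+δ) for each disorder realization.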
We have already mentioned that the effects of noise on the entanglement in JC models is a promising avenue of study. Specifically, the possibility of noise in the system resulting in entanglement sudden deaths and revivals in the dynamics is of particular interest. Before, we proceed to describe the results of such studies, it will be apt to briefly define a measure of entanglement and the meaning of entanglement sudden death. It should be noted that there is no unique measure of entanglement, but concurrence is a widely used <cit.> measure of entanglement. The same is used in this paper to measure the correlation between the two atoms when studying the dynamics of the system evolution.
ยง.ยง Atom-atom entanglement and initial atom-cavity states
The two qubit mixed state ฯ_AB(t) is obtained by tracing out the cavity part of the system density operator as
ฯ_AB(t) = Tr_cav [ฯ_sys(t)].
Concurrence is calculated using the formula
C = max{0,โ(ฮป_1) - โ(ฮป_2) - โ(ฮป_3) - โ(ฮป_4)},
where, ฮป_1, ฮป_2, ฮป_3, and ฮป_4 are the eigenvalues of matrix ฯ_ABฯฬ_AB (arranged in the descending order) with
ฯฬ_AB = (ฯ_y โฯ_y)ฯ^*_AB(ฯ_y โฯ_y)
ฯ_y being the Pauli operator.
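For completeness, a direct implementation of this formula is sketched below, assuming a 4×4 two-qubit density matrix in the computational basis (QuTiP's built-in concurrence routine can be used instead).

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4 array)."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)                          # sigma_y (x) sigma_y
    rho_tilde = yy @ rho.conj() @ yy              # spin-flipped state
    lam = np.linalg.eigvals(rho @ rho_tilde)      # eigenvalues of rho * rho_tilde
    lam = np.sqrt(np.clip(np.sort(lam.real)[::-1], 0.0, None))  # descending order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])
```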
Once entanglement is quantified through a measure (concurrence in the present case), its dynamics can be studied and it may be checked whether entanglement vanishes at a point with a discontinuous derivative (as will be seen when discussing the results of our study). Such a disappearance of entanglement refers to as entanglement sudden death. This phenomenon is sometimes followed by the revival of entanglement in the system with a discontinuous gradient, known as sudden revival. In what follows, we will show that noise may cause entanglement sudden death in JC models of our interest described in the previous section.
For the present study, we assume both the cavities in double JC model in the vacuum states |0_Aโฉ and |0_Bโฉ. We further assume that the two atoms are initially in the entangled states. Specifically, we choose two such initial states of atoms <cit.> which (i) does not show entanglement sudden death and (ii) shows entanglement sudden death in the dynamics governed by the Hamiltonian (<ref>).
The initial state of the total system in the former case is
|ฯ_0โฉ = (sin(ฮฑ)|l_A,e_Bโฉ + cos(ฮฑ)|e_A,l_Bโฉ) โ|0_A,0_Bโฉ,
while in the latter case it is
|ฯ_0โฉ = (sin(ฮฑ)|l_A,l_Bโฉ + cos(ฮฑ)|e_A,e_Bโฉ) โ|0_A,0_Bโฉ.
ยง RESULTS AND DISCUSSION
Here, we investigate the evolution of entanglement characteristics between two atoms in different cavities that share the entanglement resource initially. We study this evolution under different variants of the double JC model and the effects of imperfections and/or open quantum system described in the previous section. We focus on the emergence of entanglement sudden deaths and revivals as well as the effect of nonlinearity in influencing the time scale of the dynamics. To begin with, we discuss the case with double JC model described through Eq. (<ref>).
ยง.ยง Double JC model
We begin with Case (i) where there are no entanglement sudden deaths and revivals present in the entanglement dynamics (under Hamiltonian (<ref>)) in the absence of noise.
In this case, we studied the dynamics of the system for different choices of coupling parameters with the initial state of the total system described by Eq.
(<ref>). Here, we first discuss a symmetric case (G_A =G_B) and an asymmetric case G_A โ G_B. For illustration, we have specifically chosen G_A = 0.9G_B for the asymmetric case. Evolution of the amount of entanglement (quantified in terms of entanglement measure concurrence) is shown in Figureย <ref> (a) and (b) for symmetric and asymmetric cases, respectively, both in the absence and presence of various sources of dissipation discussed in Section <ref>. Here, we have used the same decay rates for both cavities, i.e., ฮบ_A = ฮบ_B = ฮบ, ฮณ_A = ฮณ_B = ฮณ and ฮณ_ฯ^A = ฮณ_ฯ^B = ฮณ_ฯ. The regime considered here is the strong coupling regime (as G_j > ฮบ,ฮณ,ฮณ_ฯ) <cit.>. Difference in the periodicity for the symmetric and the asymmetric case (in ideal case) is apparent in the figure, with the asymmetric case being periodic at a time scale of about 10 times higher compared to the symmetric case. The periodic behavior of correlations between two resonant cavities has similarity with the the periodic behavior of atom-field entanglement in the single JC model <cit.>. In the ideal asymmetric case, the dynamics reflects mirror symmetry along the one-half of the period (cf. Figureย <ref> (b)). Remnant of this behavior is visible for small decoherence rates.
In the ideal case, we cannot observe entanglement sudden deaths and revivals. However, these deaths and revivals are present in the entanglement dynamics when the cavity interacts with a thermal reservoir, i.e., n_th > 0.
It is also noticeable (from the system Hamiltonian and dynamics) that the decay of entanglement with the dissipation of excitation through cavity (for vacuum bath) or atom (at the rate ฮบ or ฮณ, respectively) has the same effect. Depletion of entanglement due to dephasing of atomic states is more than that of dissipation of excitation at the same decay rates. Due to strong coupling regime, entanglement maxima at the consecutive peaks can be observed to decrease moderately with rescaled time. For a further small value of cavity decay rate and a thermal reservoir, maxima of the entanglement after revival in the asymmetric case is observed to be higher than that before the entanglement sudden death. Decoherence finally eliminates all the quantum correlations between two cavities (shown in inset of Figureย <ref> (a) and (b)).
We now consider Case (ii) where entanglement sudden deaths and revivals are observed (under Hamiltonian (<ref>)) even in the absence of noise. The initial state of the atoms (<ref>)
reveals similar observations to that in the previous case (shown explicitly in Figure <ref> (c) and (d)).
These similarities are, however, more pronounced in the symmetric case than in the asymmetric case.
Specifically, we observe that the addition of noise adversely affects the dynamics of the system, i.e., the maximum amount of entanglement after revival decreases. However, dephasing and dissipation were also observed to enhance the time between the entanglement sudden deaths and/or revivals.
This may be visualized (except dissipation due to thermal reservoir) as a phenomenon in which noise suppresses smaller entanglement amplitudes and consequently drowns out some of the revivals. A specific example can be seen in the asymmetric case (cf. Figure <ref> (d)) around one-half of the period in case of dissipation due to vacuum reservoir.
ยง.ยง Multi-photon double JC model
We further study the entanglement dynamics of the multi-photon double JC model described by Eq.ย (<ref>).
We begin with the ideal cases for which the results are shown in Figure <ref> (a)-(d).
It is observed that higher-order of the multi-photon pump processes leads to faster system dynamics (and thus entanglement dynamics) compared to the corresponding lower-order case. For example, the period of the complete revival of the entanglement dynamics for the multi-photon double JC model for n=3 is smaller than that for n=2. This behavior is observed in both symmetric and asymmetric cavities.
No other significant change is observed in the entanglement dynamics.
When we introduce dissipation (noise) to our model, we start seeing more interesting dynamics as illustrated in Figure <ref> (e) and (f). We restrict ourselves to the specific case where the initial state is given by Eq. (<ref>) to focus on the effects of noise, disorder and nonlinearity on introducing entanglement deaths and revivals. It is observed that the emergence of entanglement sudden deaths and revivals when setting n_th > 0 is preserved for the nonlinear cases. The onset of the sudden deaths and revivals is delayed with increasing nonlinearity (observed around scaled time G_Bt/2ฯ of 0.88 in Figure ย <ref> (e), compared to around 2.75 in Figure <ref> (f)). Unlike the linear case in Figure <ref>, where the decay of entanglement with the dissipation of excitation through the cavity (for vacuum bath) or atom (at the rate ฮบ or ฮณ, respectively) has the same effect, this is not the case here, as is evident in Figures <ref> (e)-(f). Hence, the symmetry in the correlation dynamics due to ฮบ or ฮณ is broken. This can be understood by remembering that with the introduction of nonlinearity, the energy of the photons emitted due to the deexcitation of the atom is equivalent to the energy of a single photon in the cavity times the amount of nonlinearity. The symmetry in the decay of correlation due to ฮณ and ฮบ is restored by scaling the ฮบ by 1/nonlinearity. The symmetry is also restored when scaling the ฮณ by nonlinearity.
ยง.ยง Multi-photon double JC model in presence of a nonlinearly interacting driving field
In Figures ย <ref> and <ref>, we present the dynamics of the systems in presence of a nonlinear pump. The Hamiltonian for this case under resonance is given by Eqs. <ref>. One interesting aspect that can be noted is that adding a nonlinear pump gives rise to entanglement sudden deaths and revivals in the absence of noise, even for cases where such characteristics were not observed in the linear cases. For example, in Figure ย <ref>, the characteristics of sudden death and revivals can be seen even in the absence of disorder. For a fixed strength of the driving field, increasing the nonlinearity seems to suppress the amplitude of the concurrency more. Another interesting observation to note here is that in Figure <ref>, the cases for N=M (N=2,M=2 and N=3,M=3) have a symmetry in the dynamics of the system. Although when the strength of the driving field is increased, (cf. Figure <ref>), this symmetry appears to be broken.
ยง.ยง Modelling Disorder
Now we introduce different disorders to the clean double JC model without dissipation. We observe a trend similar to one seen in Figure <ref> (a)-(d).
Increase in the nonlinearity speeds up the dynamics of the system and the entanglement seems to wash out faster as can be seen in Figure <ref>. Introducing dissipation (noise) along with the disorder also seems to have expected results as in that case the dynamics is observed to remain similar to the case with no dissipation, but the characteristic amplitudes are found to decay as is apparent in Figure <ref>.
ยง CONCLUSIONS
In this work, we have studied the entanglement dynamics of a set of double JC models, which includes models where the two atoms share an initial entanglement resource in the ideal case and under the presence of dissipation and disorders. The effect of disorder and dissipation is also studied in the case when multiphoton interactions are present between the cavity and the atom. The study is performed using the master equation solver (available in QuTiP simulator) capable of solving the time evolution of a state using the master equation in either the Liouvillian or the Lindblad form. The evolution is calculated numerically using Linear Multistep Methods. The equations can be solved using either the Adams method <cit.> or the Backward differentiation formulae <cit.>. However, the master equation solver available in QuTiP defaults to the Adams method for integration, and we have used that setting.
The study of the entanglement dynamics of the above-listed double JC models was performed using the method described above. Obtained results are primarily illustrated through Figures 1-6. From these figures, it can be lucidly inferred that the effect of noise on the dynamics of the system is to decay the characteristics, while that of the disorder is to wash them out. On the other hand, the introduction of nonlinearity is found to cause the dynamics of the system to speed up. A combination of these effects gives rise to interesting phenomena, like entanglement sudden deaths and revivals, even in the cases where that did not exist before.
As the paper reports the entanglement dynamics of a large family of double JC models under various conditions, it is expected to be of use in the study of various quantum phenomena where JC models are known to be relevant. Further, the present study is restricted to Markovian noise, and it is expected that future studies along similar lines will reveal rich dynamics of the double JC model in the presence of non-Markovian noise.
ยง ACKNOWLEDGMENTS
AP acknowledges the support from the QUEST scheme of Interdisciplinary Cyber Physical Systems (ICPS) program of the Department of Science and Technology (DST), India (Grant No.:
DST/ICPS/QuST/Theme-1/2019/14 (Q80)).
entry_id: http://arxiv.org/abs/2306.07524v1
published: 20230613033631
title: Recent Belle II Results on Hadronic B Decays
authors: Shu-Ping Lin
primary_category: hep-ex
categories: hep-ex
Department of Physics, National Taiwan University,
No. 1, Sec. 4, Roosevelt Rd., Taipei 106216, Taiwan
Recent Belle II Results on Hadronic B Decays
Shu-Ping Lin, on behalf of the Belle II Collaboration
June 12, 2023
=========================================================
The investigation of B meson decays to charmed and charmless hadronic final states is a keystone of the Belle II program.
Analyses of such decays provide reliable and experimentally precise constraints on the weak interactions of quarks. They are sensitive to effects from non-SM physics, and further our knowledge about uncharted bโ c hadronic transitions.
We present new results from combined analyses of Belle and Belle II data to determine the quark-mixing parameter ฯ_3 (or ฮณ), and from the Belle II analyses of two-body decays that are related to the determination of ฯ_2 (or ฮฑ).
We also present recent Belle II results on branching ratios and direct CP-violating asymmetries of several B decays, which result in a competitive standard-model test based on the Kฯ isospin sum rule and first observations of three new B โ D^(*) K K_S^0 decays.
ยง INTRODUCTION
Hadronic B decays provide precise constraints on the parameters of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix and are sensitive probes of physics beyond the standard model (SM).
Measurement of new decay channels expands our knowledge of the flavour sector.
Belle IIย <cit.>, located at the SuperKEKBย <cit.> asymmetric collider at KEK, is a hermetic solenoidal magnetic spectrometer surrounded by particle-identification detectors, an electromagnetic calorimeter, and muon detectors. Optimised for the reconstruction of bottom-antibottom pairs produced at the threshold in ฮฅ(4S) decays, Belle II has competitive, unique, or world-leading reach in many key quantities associated with hadronic B decays.
The CKM angle ฯ_3/ฮณ = arg( - V_udV^*_ub/V_cdV_cb^*) is a fundamental constraint for charge-parity (CP) violation in the SM and can be reliably determined in tree-level processes, with negligible loop-amplitude contributions.
The angle ฯ_2/ฮฑ = arg( - V_tdV^*_tb/V_udV_ub^*) is currently limiting the precision of non-SM tests based on global fits of the quark-mixing matrix unitarity and is best determined through measurement of Bโฯฯ and Bโฯฯ decays.
For Bโ Kฯ decays, an isospin sum ruleย <cit.> combines the branching fractions and CP asymmetries of the decays, providing a null test of the SM.
The capability of investigating all the related final states jointly, under consistent experimental settings, is unique to Belle II.
The challenge in analysing these channels lies in the large amount of e^+e^-โ qqฬ continuum background. Binary-decision-tree classifiers C are used to discriminate between signal and continuum events using event topology, kinematic, and decay-length informationย <cit.>. The determination of the signal yields is mainly based on observables that exploit the specific properties of at-threshold production: the energy difference between the B candidate and the beam energy, ฮ E = E_B^* - E_beam^*, and the beam-constrained mass M_bc = โ((E_beam^*/c^2)^2-(p_B^*/c)^2), where E_B^* and p_B^* are the energy and momentum of the B candidate, respectively, and E_beam^* is the beam energy, all in the center-of-mass frame.
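To make these two observables concrete, the following minimal Python sketch (ours, not part of the proceedings; the numerical values are illustrative only, in GeV with c = 1) evaluates ฮ E and M_bc for a hypothetical B candidate:

import math

def delta_E(E_B_star, E_beam_star):
    # Energy difference between the B candidate and the beam energy (CM frame).
    return E_B_star - E_beam_star

def m_bc(p_B_star, E_beam_star, c=1.0):
    # Beam-constrained mass: the candidate energy is replaced by the beam energy.
    return math.sqrt((E_beam_star / c**2) ** 2 - (p_B_star / c) ** 2)

# Illustrative numbers near the Upsilon(4S) threshold (GeV, c = 1).
E_beam, E_B, p_B = 5.289, 5.283, 0.32
print(delta_E(E_B, E_beam))   # approximately -0.006
print(m_bc(p_B, E_beam))      # approximately 5.279, close to the B mass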
ยง B โ D^(*) K K_S^0 DECAYS
The composition of a large fraction of the B hadronic width is unknown, which limits significantly our capability to model B decays in simulation, impacting the precision of many measurements. Belle II is pursuing a systematic program of exploration of hadronic B decays.
We report a preliminary measurement of the branching fraction of B^- โ D^0K^-K_S^0 decayย <cit.>, with a precision that is three times better than the previous best resultsย <cit.>, and first observation of three new decay channels (Bฬ^0 โ D^+K^-K_S^0, B^- โ D^*0K^-K_S^0, Bฬ^0 โ D^*+K^-K_S^0)ย <cit.>.
The branching fractions
โฌ(B^- โ D^0 K^- K_S^0) = (1.89 ยฑ 0.16 ยฑ 0.10) ร 10^-4,
โฌ(Bฬ^0 โ D^+ K^- K_S^0) = (0.85 ยฑ 0.11 ยฑ 0.05) ร 10^-4,
โฌ(B^- โ D^*0 K^- K_S^0) = (1.57 ยฑ 0.27 ยฑ 0.12) ร 10^-4,
โฌ(Bฬ^0 โ D^*+ K^- K_S^0) = (0.96 ยฑ 0.18 ยฑ 0.06) ร 10^-4,
are extracted from a 362 fb^-1 Belle II sample using likelihood fits to the unbinned distributions of the energy difference ฮ E,
where the first uncertainties are statistical and the second are systematic.
The invariant mass m(KK_S^0) of the two kaons is investigated.
For all four channels, the m(KK_S^0) distribution exhibits a peaking structure in the low-mass region, which departs from the expected three-body phase space distribution.
Structures are also observed in the Dalitz distributions (Figureย <ref>).
ยง DETERMINATION OF CKM ANGLE ฯ_3/ฮณ
The angle ฯ_3 is studied through the interference of bโ cuฬs and bโ ucฬs transition amplitudes in tree-level B hadron decays.
The current world average ฯ_3=( 65.9^+3.3_-3.5)^โ is dominated by LHCb measurementsย <cit.>.
ฯ_3 can be determined through different approaches, featuring different D final states in B decays to charmed final states.
The most precise Belle II result is obtained with self-conjugate D final states (D โ K_S^0 h^+ h^-, h=K,ฯ)ย <cit.>, where several intermediate resonances are involved in D decays, resulting in variation of the CP asymmetry over the phase space.
Two other approaches have also been pursued: one with Cabibbo-suppressed D decays, and the other with the D meson decaying to two-body CP eigenstates.
In both analyses it is required that M_bc > 5.27 GeV/c^2, |ฮ E| < 0.15 GeV (ฮ E>-0.13 GeV for the latter), and a loose requirement on C removes about 60% of the continuum background. The signal yields are extracted with fits of ฮ E and the continuum suppression classifier.
ยง.ยง Cabibbo-suppressed channels
Grossman, Ligeti, and Soffer (GLS)ย <cit.> proposed a method to measure ฯ_3 with singly Cabibbo-suppressed decays of D mesons, B^ยฑโ D(โ K_S^0 K^ยฑฯ^โ) h^ยฑ, where D is the superposition of D^0 and Dฬ^0 mesons. The B^ยฑ meson can have the same sign (SS) or opposite sign (OS) with respect to the K^ยฑ from the D decay.
In this analysisย <cit.>, the information about the dynamics of the D decays is integrated with external inputs from CLEOย <cit.>.
The fitting is performed in both the full D phase space and in the K^* region, where the invariant mass m(KK_S^0) is close to that of K^*(892)^0, thus enhancing the interference and the precision on ฯ_3, due to the large strong-phase difference in Dโ K_S^0 K^ยฑฯ^โ decays.
Combining Belle (711 fb^-1) and Belle II (362 fb^-1) data, the result is consistent with, though not yet competitive with, the result from LHCbย <cit.>.
It can provide a constraint on ฯ_3 when combined with results from other measurements.
ยง.ยง CP eigenstates
Gronau, London, and Wyler (GLW)ย <cit.> proposed a method where the D meson decays to K^+K^- ( CP-even) or K_S^0ฯ^0 ( CP-odd) eigenstates. The K_S^0ฯ^0 eigenstate has been accessible to B-factories only.
Belle (711 fb^-1) and Belle II (189 fb^-1) data are combined to give the final result, which is consistent but not competitive with results from BaBarย <cit.> and LHCbย <cit.>.
The branching fraction ratios (โ_ CPยฑ) and CP asymmetries (๐_ CPยฑ) are
โ_CP+ = 1.164 ยฑ 0.081 ยฑ 0.036,
โ_CP- = 1.151 ยฑ 0.074 ยฑ 0.019,
๐_CP+ = 0.125 ยฑ 0.058 ยฑ 0.014,
๐_CP- = -0.167 ยฑ 0.057 ยฑ 0.060,
where the first uncertainties are statistical and the second are systematic.
The statistical and systematic precisions are significantly better than the previous Belle measurementย <cit.>, and the result could constrain ฯ_3 in combination with other measurements.
In particular, the first evidence for ๐_ CP+ and ๐_ CP- having opposite signs is observed, showing clearly the effect of CP violation.
ยง TOWARD CKM ANGLE ฯ_2/ฮฑ
The least precisely known CKM angle ฯ_2/ฮฑ is starting to limit the testing power of the CKM model.
Belle II has the unique capability of measuring all Bโฯฯ and Bโฯฯ decays, from which ฯ_2 can be determined. The combined information from these decays exploits isospin symmetry, reducing the effect of hadronic uncertainties.
The B candidates are required to have M_bc > 5.27 GeV/c^2 and |ฮ E| < 0.15 GeV (<0.30 GeV for final states involving neutral pions), followed by continuum suppression that removes 90-99% of continuum background.
ยง.ยง Bโฯฯ decays
The measurement of Bโฯฯ decays requires a complex angular analysis. The fit is based on M_bc, ฮ E, the dipion masses, and the helicity angle of the ฯ candidates.
The preliminary Belle II results of B^0โฯ^+ฯ^- and B^+โฯ^+ฯ^0 decays using 189 fb^-1 of dataย <cit.> are on par with the best performances from Belleย <cit.> and BaBarย <cit.>.
Results are listed in Tableย <ref>.
ยง.ยง Bโฯฯ decays
Measurements of B^0โฯ^+ฯ^- and B^+โฯ^+ฯ^0 decays are based on 362 fb^-1 of Belle II data. The fit is performed over ฮ E and C', and the dominant systematic uncertainty arises from the ฯ^0 efficiency.
The first measurement of B^0โฯ^0ฯ^0 at Belle II is also reportedย <cit.>, using 189 fb^-1 of data. This decay is both CKM- and colour-suppressed, and has only photons in the final state, making it experimentally challenging to measure. The result obtained from a fit to M_bc, ฮ E, and C', however, achieves Belle's precision despite using a dataset that is only one third of the size. This is due to the dedicated ฯ^0 selection and continuum suppression studies that yield a much higher ฯ^0 efficiency.
Results are listed in Tableย <ref>.
ยง Kฯ ISOSPIN SUM RULE
The isospin sum rule I_Kฯ is defined by
I_Kฯ = ๐_K^+ฯ^- + ๐_K^0ฯ^+ยทโฌ_K^0ฯ^+/โฌ_K^+ฯ^-ฯ_B^0/ฯ_B^+
- 2๐_K^+ฯ^0ยทโฌ_K^+ฯ^0/โฌ_K^+ฯ^-ฯ_B^0/ฯ_B^+
-2๐_K^0ฯ^0ยทโฌ_K^0ฯ^0/โฌ_K^+ฯ^-,
where โฌ_Kฯ and ๐_Kฯ are the branching fractions and the CP asymmetries, and ฯ_B^0/ฯ_B^+ = 0.9273 ยฑ 0.0033ย <cit.> is the ratio of B^0 and B^+ lifetimes. The SM prediction of the sum rule is zero, with a precision of better than 1%, in the limit of isospin symmetry and no electroweak penguins contributions.
Any large deviation from the SM prediction is an indication of non-SM physics.
The experimental precision of the sum rule is limited by ๐_K^0ฯ^0ย <cit.>.
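For orientation, once the four branching fractions and CP asymmetries are measured, the sum rule is a one-line computation. The Python sketch below is ours; the input numbers are placeholders (not the Belle II measurements, which are listed in the results table) and only illustrate how I_Kฯ is assembled:

# Placeholder branching fractions (in units of 1e-6) and CP asymmetries.
B = {"K+pi-": 20.0, "K0pi+": 24.0, "K+pi0": 14.0, "K0pi0": 10.0}
A = {"K+pi-": -0.08, "K0pi+": 0.0, "K+pi0": 0.03, "K0pi0": 0.0}
tau_ratio = 0.9273  # lifetime ratio tau_B0 / tau_B+

I_Kpi = (A["K+pi-"]
         + A["K0pi+"] * (B["K0pi+"] / B["K+pi-"]) * tau_ratio
         - 2 * A["K+pi0"] * (B["K+pi0"] / B["K+pi-"]) * tau_ratio
         - 2 * A["K0pi0"] * (B["K0pi0"] / B["K+pi-"]))
print(I_Kpi)  # the SM expectation is close to zero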
We studied all the final states associated with the sum rule: B^0โ K^+ฯ^-, B^+โ K_S^0ฯ^+, B^+โ K^+ฯ^0, and B^0โ K_S^0ฯ^0 using 362 fb^-1 of Belle II data.
The analyses of the various decays follow a similar strategy, with common selections applied to the final states particles.
B candidates are required to satisfy 5.272 < M_bc < 5.288 GeV/c^2, |ฮ E| < 0.3 GeV, and a loose requirement of C that suppresses 90-99% of continuum background.
A fit is performed on the ฮ E-C' distribution, where the flavour tagging algorithmย <cit.> is employed to determine the flavour of the B candidate in B^0โ K_S^0ฯ^0 decay due to absence of primary charged particles.
The ฮ E distributions are shown in Figureย <ref>.
The measured branching fractions and CP asymmetries, as well as the sum rule calculated using these measurements, are listed in Tableย <ref>.
They agree with the world averagesย <cit.> and have competitive precisions.
In particular, the time-integrated and time-dependent results for B^0โ K_S^0ฯ^0 are combined to achieve the world's best result for ๐_K^0ฯ^0 and, consequently, a competitive precision for I_Kฯ that is limited by the statistical uncertainty.
ยง SUMMARY
In summary, we present five new results from Belle II: measurement of Bโ D^(*) K K_S^0 decays, analyses of ฯ_3/ฮณ with the GLS and GLW methods, precise measurements of two-body decays that contribute to the determination of ฯ_2/ฮฑ, and the Kฯ isospin sum rule with a competitive precision to the world's best result.
ยง REFERENCES
[BelleII] T. Abe et al. (Belle II Collaboration) (2010), arXiv:1011.0352.
[SuperKEKB] K. Akai et al., 9071882018.
[sum_rule] M. Gronau, 627822005.
[BDT] F. Abudinén et al. (Belle II Collaboration) (2023), arXiv:2303.08354.
[DKK_Belle2] Belle II Collaboration et al. (2023), arXiv:2305.01321.
[DKK_Belle] A. Drutskoy et al. (Belle Collaboration), 5421712002.
[PDG_2022] R. L. Workman et al. (Particle Data Group), 2022083C012022.
[phi3_LHCb] LHCb Collaboration et al., 20211412021.
[phi3_belle2_jhep] Belle and Belle II Collaborations et al., 2022632022.
[GLS_method] Y. Grossman, Z. Ligeti, and A. Soffer, 670713012003.
[phi3_GLS_Belle2] Belle and Belle II Collaborations et al. (2023), arXiv:2306.02940.
[D_cleo] J. Insler et al. (CLEO Collaboration), 940999052016.
[phi3_GLS_LHCb] LHCb Collaboration et al., 2020582020.
[GL_method] M. Gronau and D. London, 2534831991.
[GW_method] M. Gronau and D. Wyler, 2651721991.
[phi3_GLW_BaBar] P. del Amo Sanchez et al. (BaBar Collaboration), 820720042010.
[phi3_GLW_LHCb] R. Aaij et al. (LHCb Collaboration), 2021812021.
[phi3_GLW_Belle] K. Abe et al., 730511062006.
[rho+rho-] Belle II Collaboration et al. (2022), arXiv:2208.03554.
[rho+rho0] Belle II Collaboration et al. (2022), arXiv:2206.12362.
[rho+rho-_Belle_2] P. Vanhoefer et al. (Belle Collaboration), 940999032016.
[rho+rho0_Belle] J. Zhang et al. (Belle Collaboration), 912218012003.
[rho+rho-_BaBar] B. Aubert et al. (BaBar Collaboration), 760520072007.
[rho+rho0_BaBar] B. Aubert et al. (BaBar Collaboration), 1021418022009.
[pi0pi0_Belle2] Belle II Collaboration et al. (2023), arXiv:2303.08354.
[flv_tagger] F. Abudinén et al. (Belle II Collaboration), 822832022.
|
http://arxiv.org/abs/2306.12324v1
|
20230621150812
|
Breaking new ground for quantum and classical simulations of $\mathrm{SU(3)}$ Yang-Mills theory
|
[
"Tomoya Hayata",
"Yoshimasa Hidaka"
] |
hep-lat
|
[
"hep-lat",
"cond-mat.quant-gas",
"cond-mat.str-el",
"hep-th",
"quant-ph"
] |
|
http://arxiv.org/abs/2306.10398v1
|
20230617175342
|
Periodic solutions in next generation neural field models
|
[
"Carlo R. Laing",
"Oleh E. Omel'chenko"
] |
nlin.AO
|
[
"nlin.AO"
] |
|
http://arxiv.org/abs/2306.05489v1
|
20230608182133
|
Crucial role of thermal fluctuations and vertex corrections for the magnetic pseudogap
|
[
"Mengxing Ye",
"Andrey V. Chubukov"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"cond-mat.supr-con"
] | |
http://arxiv.org/abs/2306.05969v1
|
20230609153511
|
Language Models Can Learn Exceptions to Syntactic Rules
|
[
"Cara Su-Yi Leong",
"Tal Linzen"
] |
cs.CL
|
[
"cs.CL"
] |
Language Models Can Learn Exceptions to Syntactic Rules
Cara Su-Yi Leong, Tal Linzen
===============================================================================================
Artificial neural networks can generalize productively to novel contexts. Can they also learn exceptions to those productive rules? We explore this question using the case of restrictions on English passivization (e.g., the fact that โThe vacation lasted five daysโ is grammatical, but โ*Five days was lasted by the vacationโ is not). We collect human acceptability judgments for passive sentences with a range of verbs, and show that the probability distribution defined by GPT-2, a language model, matches the human judgments with high correlation. We also show that the relative acceptability of a verb in the active vs. passive voice is positively correlated with the relative frequency of its occurrence in those voices. These results provide preliminary support for the entrenchment hypothesis, according to which learners track and use the distributional properties of their input to learn negative exceptions to rules. At the same time, this hypothesis fails to explain the magnitude of unpassivizability demonstrated by certain individual verbs, suggesting that other cues to exceptionality are available in the linguistic input.
ยง INTRODUCTION
Many studies have demonstrated language models' ability to extend a generalization from a small set of examples to novel lexical items, structures, and contexts, even if the models do not always do so in a human-like way <cit.>. These studies show that models can substitute novel lexical items into rules where those items were previously unseen. At the same time, language models can sometimes over-generalize, for instance by producing a literal, compositional translation of idiomatic expressions like kick the bucket when humans would not <cit.>. A full evaluation of language models' generalization abilities should thus not only measure whether models can generalize when humans do, but also whether models are able to constrain their generalizations when humans do.
We address this question by building on a line of work that probes whether human-like acceptability judgments for argument structure alternations can be predicted from the probability distribution that a language model defines over sentences. These studies have shown, for example, that the GPT-2 language model <cit.> can match human judgments about whether the dative alternation applies to a verb <cit.>, and that information about which syntactic frames a verb can appear in (e.g. whether a verb participates in the spray/load alternation) can be recovered from the verb's contextualized representations and from sentence embeddings <cit.>.
In this work, we evaluate models' ability to identify exceptions using the case study of the English passive.[Data and code are available at <https://github.com/craaaa/exceptions>.] The passive voice is highly productive in English; most strikingly, young children exposed to novel verbs in the active voice are able to understand and produce passive constructions using those verbs <cit.>. This suggests that English speakers do not in general conclude that verbs that they have never encountered in the passive voice are unacceptable in that voice.
Yet there are limits to the productivity of the English passive; examples such as () have been reported to be unacceptable in the passive voice:
The vacation lasted five days.
* Five days was lasted by the vacation.
Sentences like (b) are unlikely to occur productively in natural speechโjust like passives of infrequent verbs. Yet even though they do not receive explicit evidence that these sentences are unacceptable, rather than simply rare, English speakers nonetheless learn that they constitute exceptions, and do not judge (b) to be acceptable.
How do humans acquire such exceptions? The entrenchment hypothesis suggests that speakers track and use the distributional properties of their input as indirect negative evidence for the existence of an exception <cit.>. For instance, if an English learner never encounters the verb last in the passive voice, despite having seen last used productively in the active voice, they may conclude that last cannot occur in the passive voice.
Are language modelsโwhich do not have access to human feedback or syntactic supervision, and are trained solely to perform next-word predictionโattentive to the same information that humans are when determining the extent to which syntactic rules can generalize?
In this paper, we tackle these questions by comparing human acceptability judgments on sentences containing verbs that are exceptional in the passive voice, on the one hand, to the probability distribution defined by a GPT-2-like model trained on a 100-million word English corpus. We find that the language model matches human acceptability judgments on active and passive sentences to a large degree (Figure <ref>), suggesting that language models can constrain their syntactic generalizations in a human-like way.
Using our model's training corpus, we further show that there is a weak but positive correlation between the relative frequency of actives and passives in the input and their relative acceptability. Together, these empirical results suggest that the linguistic input contains useful information from which exceptions to syntactic generalizations can be learned.
ยง RESTRICTIONS ON PASSIVIZATION
Although the English verbal passive is highly productive, not all verbs can occur in the passive. For instance, intransitive and middle verbs resist passivization in general <cit.>. In this paper, we focus on passives of transitive verbs that occur with by-phrases. These long passives are clauses of the form given in (), which in most cases have an uncontroversially acceptable passive form:
The ball was hit by the boy.
A small list of lexical exceptions has been described for which the passive voice is deemed ungrammatical <cit.>. Some of these exceptions can be classed together based on the semantics of the verb or the types of arguments the verb takes. For instance, verbs that take measure phrases as their object reportedly do not occur in the passive:
That house costs fifty thousand dollars.
* Fifty thousand dollars is/are cost by that house.
<cit.>
Even within a particular verb class, passivizability may also be an idiosyncratic characteristic of individual lexical items <cit.>: verbs which can be substituted for each other in any other syntactic context may differ in their ability to passivize. Thus, for instance, although in the active voice matched, mirrored, approximated and resembled can occur in the same environment, (a) is grammatical, while (b) is not.
Kim is matched/mirrored/approximated by the model in nearly every detail.
* Kim is resembled by the model in nearly every detail.<cit.>
We may thus expect differences in passivizability not only between verbs with different semantics and argument frames, but also among verbs with very similar meaning.
ยง HUMAN ACCEPTABILITY JUDGMENTS
In order to test whether language models follow a human-like generalization pattern, we need to first characterize the human judgment pattern, which will serve as the target of modeling. In this section, we report on an acceptability judgment study whose goal was to verify the judgments from the syntax literature and measure any gradient differences in the degree to which different verbs can be passivized.
ยง.ยง Materials
We identified five verb classes containing verbs that have been reported to be unpassivizable <cit.>:
* Advantage verbs: benefit, help, profit, strengthen
* Price verbs: cost, earn, fetch
* Ooze verbs: discharge, emanate, emit, radiate
* Duration verbs: last, require, take
* Estimation verbs: approximate, match, mirror, resemble
Each of these classes includes verbs with similar semantics that can be substituted into the same position in a sentence in the active voice. While some of these verbs can be used in other senses, we tested the specific sense that was reported in the literature by controlling the sentence frames used. Five past-tense sentence frames were constructed for each verb class (Table <ref>).
Each of the verbs in the class was substituted into the sentence frame, resulting in 90 total test sentence pairs. Example () demonstrates a sentence pair generated from the sentence frame in Table <ref> using the verb matched:
Your friend matched my brother.
My brother was matched by your friend.
As control verbs, we also selected five agent-patient and five experiencer-theme verbs; we expect these verbs to be passivizable:
* Agent-Patient: hit, push, wash, drop, carry
* Experiencer-Theme: see, hear, know, like, remember
Because of the varied semantics of the verbs in these groups, unique sentence pairs were created for each verb, yielding 50 control sentence pairs. An example of a sentence pair for the verb push is given in ():
A boy pushed the cup.
The cup was pushed by a boy.
Each participant only saw either the active or the passive of a sentence pair.
The 140 sentence pairs (90 test + 50 control) were divided into two buckets of 70 sentence pairs each such that each bucket contained two or three sentence frames per verb. Each bucket was then further divided into groups of 70 sentences such that the active and passive forms of a sentence pair were in different groups. Each group of sentences contained one quarter of the test and control stimuli (70 sentences).
Presentation order was counterbalanced by making four ordered lists for each group. Each group was organized into two lists such that an item that appeared in the first half of one list appeared in the second half of the other list. The order of items was pseudorandomized within those lists to ensure that not more than two active or passive sentences and no two sentences within the same verb class were seen in succession. These lists were then reversed, so that a total of four ordered sentence lists were made per sentence group.
Additionally, every experimental trial alternated with a filler sentence. Filler sentences were also used as attention checks. We used 24 grammatical and 46 ungrammatical filler sentences: since the passives of control sentences were expected to be acceptable, the greater number of ungrammatical fillers was intended to balance the experimental stimuli. The full set of materials is available in Appendix <ref>.
ยง.ยง Participants
We recruited 84 participants who had IP addresses located in the US and self-reported as native English speakers via the crowdsourcing platform Prolific. Each participant rated 140 sentences (70 test + 70 filler) and was paid US$3.50. The experiment took approximately 12 minutes to complete.
Participants were asked to rate how acceptable each sentence sounded based on their gut reaction. They were told that there were no right or wrong answers. Participants rated sentences by moving a slider from โCompletely unacceptable" to โCompletely acceptable", which corresponded to an integer score (invisible to them) between 0 and 100. They were not able to rate a sentence with a score of 50. Two practice sentences (one ungrammatical, one grammatical) were used to familiarize participants with the paradigm.
Participants were excluded from the results if they answered more than 15 filler questions unexpectedly, either by giving ungrammatical sentences scores above 50 or giving grammatical sentences scores below 50. We excluded 10 participants from analysis for this reason.
ยง.ยง Results
We calculate the passive drop of an item as the difference in mean acceptability ratings between its active and passive version. The results are reported in Figureย <ref>; a steeper downward gradient corresponds with a larger passive drop.
Since corresponding active and passive sentences contain the same lexical items except for the auxiliary was and by, which are common across all sentences, directly comparing active and passive sentences isolates the effect of the passivization from lexical effects that might increase the acceptability of sentences with more common verbs like helped over low-frequency verbs like profited.
Across all verb classes, there was a significant difference between scores given to active and passive sentences. This difference may be accounted for by pragmatic factors: although the passive construction is more pragmatically marked than the active <cit.>, each sentence in the acceptability judgment task was presented to participants without establishing a relevant context. This setting might have caused participants to rate passive sentences as worse than their active counterparts.
Although the passive drop was positive for all verbs, its magnitude differed across verb classes. The duration class showed the largest mean passive drop (59.4 points), and the ooze class showed the lowest mean passive drop (8.0 points) among the test verb classes.
We fit a linear mixed-effects model to predict sentence score using the agent-patient verb class as the baseline. We used sentence type and verb class as well as their interaction as fixed effects and frame, participant and verb as random intercepts.
We found a significant difference between agent-patient verbs and three other verb classes: estimation verbs (p= 5.74e-06), price verbs (p< 2e-16), and duration verbs (p< 2e-16). On the other hand, there was no significant difference in the sentence scores obtained from agent-patient verbs and ooze verbs, advantage verbs, or experiencer-theme verbs as a class.
Within each verb class, some verbs were more passivizable than others. For example, last was significantly less passivizable than took and required, and cost was less passivizable than fetched. Similarly, while resembled had a high passive drop, the remaining verbs in the estimation class showed relatively low passive drops.
These results validate the claim that some verbs may be more passivizable than others despite sharing similar paradigmatic relationships <cit.>.
In summary, the human acceptability judgment experiment demonstrated that some verbs in the verb classes being tested are degraded in the passive voice, and that unacceptability was gradient between verbs. For a model to adequately approximate such behaviour, it must exhibit the following characteristics:
* Exceptionality: some verbs (e.g. duration verbs) exhibit passive drops that are significantly different from the baseline passive drop expected of the canonically passivizable agent-patient verbs.
* Gradience: (un)acceptability is gradient, with some verbs on average exhibiting higher passive drop than others.
ยง COMPARISON WITH LANGUAGE MODELS
With the quantitative human acceptability judgment data in hand, we now turn to evaluating language models.
If distributional data is sufficient to learn the extent to which verbs are unacceptable in the passive, we expected GPT-2 to be able to match human judgments on both passivizable verbs and unpassivizable verbs. We also expect GPT-2 to be able to match the relative gradience of passive drop that humans display.
ยง.ยง Method
We evaluated GPT-2 <cit.>, a Transformer <cit.> language model.
We tested four different pre-trained GPT-2 models, which differed in their number of parameters and number of layers, but were trained on the same data. Each model was trained on Open-AI's WebText corpus, which contains 40GB of data โ approximately 8B words, assuming each word contains an average of 5 bytes/chars. Pre-trained GPT-2 models have performed well on targeted syntactic evaluations requiring knowledge of argument structure, such as differentiating between verbs that participate in the causative alternation and those that do not <cit.>.
The GPT-2 models available for download are trained on a much larger corpus than is realistic for any human to be exposed to <cit.>. English-speaking children are exposed to 2โ7M words per year <cit.>, or 26Mโ91M words by the age of 13. Rounding to the nearest order of magnitude, we trained a GPT-2 model on a 100M word subset of the OpenWebText corpus <cit.>, an open-source reproduction of the Web Text corpus; this simulates more closely the amount of linguistic input a human may receive (though not its genre). We trained five iterations of this model, which we call GPT2-100M, using different random seeds and report averages of the results obtained from these five models.
We adapted the targeted syntactic evaluation paradigm <cit.> to compare the language models to humans. This paradigm involves obtaining model โjudgmentsโ for minimal pairs of sentences. For each sentence, a score is obtained by summing the log-probabilities assigned to each token in the sentence, which gives the probability the model assigns to that sentence. We conclude that a model's distribution is consistent with human judgments if it assigns a higher probability to the acceptable sentence than to the corresponding unacceptable one. Unlike some prior work, we collected numeric scores instead of binary acceptability judgments: we calculated a gradient passive drop for each sentence pair by subtracting the score of the passive sentence from the score of its active counterpart.
Since we compared active sentences to long passives, which contain by-phrases, every passive sentence contained two more words than its active counterpart. A sentence with more tokens will on balance be less probable than a sentence with fewer tokens; we thus normalized each sentence score by dividing it by the number of tokens in the sentence <cit.>. Doing so accounts for the effect of sentence length on the sentence score, and also allows us to compare sentences where words are split into separate tokens during GPT-2's tokenization process, e.g. approximated โ approx + imated.
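As a concrete illustration of this scoring procedure, the following Python sketch (ours) computes length-normalized sentence scores and the resulting passive drop with the Hugging Face transformers library. We use the public gpt2 checkpoint only as a stand-in for GPT2-100M; note that the built-in loss is the mean negative log-likelihood over predicted tokens, which approximates the sum-then-normalize score described above (the first token is not scored):

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_score(sentence):
    # Length-normalized log-probability: minus the mean per-token NLL.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()

def passive_drop(active, passive):
    # Positive values mean the model prefers the active over the passive.
    return sentence_score(active) - sentence_score(passive)

print(passive_drop("Your friend matched my brother.",
                   "My brother was matched by your friend."))
print(passive_drop("The vacation lasted five days.",
                   "Five days was lasted by the vacation."))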
ยง.ยง Results
The four pre-trained models as well as the five GPT2-100M models showed positive correlations between mean human passive drop and mean model passive drop, reported in Table <ref>. For pre-trained GPT-2 models, we calculate mean model passive drop for each verb by averaging over the passive drop of all five sentence frames. For GPT2-100M, we calculate the average passive drop of each verb over all sentence frames across the five versions of the model (trained with different random seeds); we report these results as GPT2-100M-avg.
The results were qualitatively similar for all models (Figure <ref>); in what follows, we focus on GPT2-100M-avg, whose behaviour showed the strongest correlation with human judgments. These models are also the ones trained on the most cognitively realistic corpus.
Figure <ref> plots GPT2-100M-avg's passive drop against the passive drop observed in the human experiment. A strong correlation was found between the passive drop in the models' sentence scores and human passive drop (r_s = 0.709), suggesting that predictions learned from linguistic input match human gradient judgments on passivization relatively well.
GPT2-100M-avg also matched humans' judgments of exceptionality within verb classes: among verbs with similar meanings, both humans and the model identified the same verbs as being less passivizable.
In verbs for which humans demonstrated low passive drop, such as strengthened and discharged, close to no passive drop was observed in the model's predictions.
GPT2-100M-avg also predicted high passive drops for verbs like lasted, resembled and cost, aligning with human judgments that these verbs are unique in their verb class.
ยง DOES FREQUENCY EXPLAIN PASSIVIZABILITY JUDGMENTS?
Having established that a language model can successfully model humans' gradient passivizability judgments, we now examine the extent to which GPT2-100M's passivization judgments correlate with the distributional properties of its training data. Specifically, we explore the utility of the entrenchment hypothesis in explaining GPT2-100M's gradient judgments of passivization. Recall that this hypothesis argues that learners conclude that a verb cannot appear in a particular context if it appears in many other contexts but systematically fails to appear in the context in question.
Here, we consider a weaker version of the entrenchment hypothesis, which does not presuppose that exceptions never occur in the learner's input. Instead, we hypothesize that the less frequently a verb is used in the passive voice relative to the active voice, the less acceptable passive constructions using that verb will be.
ยง.ยง Method
We conducted a corpus study on the data that GPT2-100M was trained on. We processed each document in the corpus using the spaCy Transformer-based lemmatizer, POS tagger and dependency parser <cit.> and extracted all sentences that contained a verbal lemma corresponding to the test and control verbs. Sentences that contained the verbs in question and had a dependency edge to a passive auxiliary (), a passive nominal subject () or a passive clausal subject () were classified as passive sentences, while all other sentences containing the verb were classified as active sentences. We hand-checked a 1000-sentence subset of the training data to verify the accuracy of the tagging process. No sentences were incorrectly tagged in the manually verified subset, although the corpus did contain instances of typos such as () (tagged as passive):
It was fun while it was lasted.
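A minimal version of this tagging step might look like the sketch below (ours; the spaCy pipeline name and the dependency label set are assumptions on our part, corresponding to spaCy's usual English labels, and are not stated in the paper):

import spacy

nlp = spacy.load("en_core_web_trf")  # spaCy's Transformer-based English pipeline
PASSIVE_DEPS = {"auxpass", "nsubjpass", "csubjpass"}
TARGET_LEMMAS = {"last", "cost", "resemble", "hit", "push"}  # illustrative subset

def classify(sentence):
    # Return (lemma, voice) for the first target verb found, else None.
    doc = nlp(sentence)
    for tok in doc:
        if tok.pos_ == "VERB" and tok.lemma_ in TARGET_LEMMAS:
            passive = any(child.dep_ in PASSIVE_DEPS for child in tok.children)
            return tok.lemma_, "passive" if passive else "active"
    return None

print(classify("The vacation lasted five days."))   # expected: ('last', 'active')
print(classify("It was fun while it was lasted."))  # expected: ('last', 'passive')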
ยง.ยง Results
Figure <ref> shows the number of active and passive sentences in GPT2-100M's training corpus.
Not all verbs appear in the same ratios in the active and passive voice. Agent-patient verbs consistently appeared in approximately 10 times as many active sentences as passive sentences, matching estimates from previous corpus studies <cit.>. On the other hand, test verbs appeared in varying amounts in the active and passive. For instance, last appeared 5666 times in the active and four times in the passive in the 100M word corpus, while cost appeared 7706 times in the active and 19 times in the passive.
This result suggests that the test verbs differ from canonically passivizable control verbs in their distribution.
Figure <ref> graphs the correlation between the ratio of active to passive sentences for a given verb, on the one hand, and that verb's mean passive drop on the other hand. We find a weak but positive correlation between the two variables (r_s= 0.212).
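The correlation itself is a standard rank correlation; the sketch below (ours) shows the computation. Only the active and passive counts for last and cost are taken from the corpus study above; all passive-drop values and the remaining rows are placeholders:

import numpy as np
from scipy.stats import spearmanr

# verb: (active count, passive count, mean passive drop)
data = {
    "last": (5666, 4, 2.0),      # counts from the corpus study; drop is a placeholder
    "cost": (7706, 19, 1.8),     # counts from the corpus study; drop is a placeholder
    "hit":  (9000, 900, 0.30),   # placeholder
    "push": (8000, 800, 0.25),   # placeholder
}
ratios = np.log([a / p for a, p, _ in data.values()])
drops = np.array([d for _, _, d in data.values()])
rho, pval = spearmanr(ratios, drops)
print(rho, pval)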
Two key outliers that are not well accounted for by this measure of relative frequency are last and cost. In both humans and model judgments, these verbs demonstrated high passive drops; yet, they are similar in relative frequency of active and passive to verbs like emanate, profit and resemble, whose passive drops are lower. While frequency seems to predict some amount of unpassivizability, then, it cannot account for the full magnitude of the passive drop displayed by these particular verbs.
Furthermore, entire verb classes are systematically over- or under-predicted in Figure <ref>. The duration verb class on the whole has a high passive drop relative to its frequency in the corpus, while frequency over-predicts the passive drop expected for the advantage verb class. We thus conclude that while the relative frequency of active and passive voice sentences positively correlates with passive drop, other factors are likely to also be relevant on a verb-class level.
Although take appears to be an outlier in Figureย <ref>, with an active to passive ratio similar to that of the agent-patient and experiencer-theme control verbs, the measure of frequency we used does not take into account the fact that take has multiple senses. If a different sense than the one being tested is heavily represented by passive sentences, the number of passives counted may be overestimated. For example, although we only test the duration sense of take, as given in (a), the sense used in (b) may be more prevalent in the corpus:
* Two days was taken by the meeting.
The photo was taken by the boy.
These differences in verb sense are not accounted for in the current corpus study; future work should make use of word sense disambiguation to conduct more targeted corpus analyses. Additionally, the issue of differentiating verb senses in polysemous verbs is one that both human and machine learners face, raising the question of the extent to which learners differentiate between verb senses that are more or less difficult to passivize.
Overall, while the relative frequency of a verb's occurrence in the active and passive does positively correlate with its unpassivizability, it does not account for crucial verb-level differences in the magnitude of passive drop demonstrated by GPT2-100M-avg.
ยง DISCUSSION
The goal of this study was to explore whether a language model can identify exceptions to a productive syntactic rule in a human-like way. We compared human acceptability judgments to sentence scores produced by a GPT-2 model trained on the amount of linguistic input that a human can plausibly be exposed to, and found that the model displayed human-like exceptionality and gradience in its judgments of passive sentences. The results of our study suggest that language models are able to refrain from over-generalizing to exceptions. Our results suggest that future empirical inroads may be made towards understanding the mechanisms and input required to overcome the projection problem <cit.>, i.e. the problem of acquiring arbitrary negative exceptions, using language models as experimental subjects.
We took a first step in this direction by showing a positive correlation between the relative frequency of active and passive sentences containing a given verb and the difference between that verb's acceptability in the active and passive voice (i.e. its passive drop) in GPT-2. Although our results lend some credence to the entrenchment hypothesis, they suggest that additional factors must be recruited to explain the full magnitude of exceptionality displayed by highly unpassivizable verbs such as last and cost.
Moreover, although we demonstrated that the relative frequency of a verb's occurrence in the active and passive is correlated with its passive drop, a causal relationship between the two cannot be established from our data. A single underlying factor, such as verbal semantics, may affect both the frequency of a verb in the passive relative to the active and its acceptability in the passive construction.
Future research should test the causal impact of a verb's absolute and relative frequency in the training corpus on its predicted passivizability. Following <cit.>, we plan to create an altered training dataset where we match the frequency of active and passive sentences containing passivizable verbs like drop to the absolute frequency of sentences containing highly unpassivizable verbs, such as last. Comparing models trained on this dataset against GPT2-100M will allow us to move beyond a correlational analysis and explore whether altering the frequency of a verb in the active and passive voice in a model's training data has a causal effect on the model's predictions of that verb's passivizability.
ยง CONCLUSION
In this paper, we explored whether a language model trained on a human-scale amount of linguistic input is able to learn lexical exceptions to a productive syntactic generalization in English. We showed that it was able to match humans' reported judgments on unpassivizable verbs like last, showing both the ability to identify exceptions as well as to identify the magnitude of an exception. We also demonstrated a weak correlation between the degree to which a model prefers active over passive sentences using a given verb, on the one hand, and the ratio between the frequencies with which sentences containing that verb occur in the active and passive voice, on the other hand. Together, these results suggest that distributional information plays a role in learning exceptions to syntactic rules.
ยง ACKNOWLEDGMENTS
We would like to thank Alec Marantz and Gary Thoms for valuable comments and suggestions related to this paper. This material is based upon work supported by the National Science Foundation (NSF) under Grant No. BCS-2114505. This work was supported in part through the NYU IT High Performance Computing resources, services, and staff expertise.
ยง STIMULI
ยง.ยง Test sentence frames
Verb class: sentence frames (verb slot marked by ___)
Advantage:
  Your investment ___ the community.
  The exercise ___ his fitness.
  Our friendship ___ my life.
  The law ___ these workers.
  The treaty ___ both countries.
Price:
  Your dish ___ ninety dollars.
  The painting ___ a fortune.
  The tickets ___ a lot of money.
  Your book ___ thirty dollars.
  His actions ___ the medal.
Ooze:
  My friend ___ confidence.
  The lightbulb ___ some light.
  That machine ___ a sound.
  The teacher ___ wisdom.
  The trash ___ an odor.
Estimation:
  Your drawing ___ her likeness.
  Your friend ___ my brother.
  The character ___ the author.
  Her son ___ her father.
  The copy ___ the original.
Duration:
  The journey ___ three days.
  My meeting ___ two hours.
  The interview ___ some time.
  Her speech ___ seventeen minutes.
  His trek ___ a month.
ยง.ยง Agent-patient sentences
Verb: active sentences
hit:
  My brother hit your friend.
  Your sister hit the target.
  The child hit the ball.
  A boy hit my bag.
  A monkey hit the toy.
pushed:
  My brother pushed a child.
  The mother pushed my toy.
  A boy pushed the cup.
  A child pushed the bag.
  Your sister pushed your friend.
washed:
  A boy washed the cup.
  A child washed the bag.
  My sister washed a towel.
  My brother washed my plate.
  Your mother washed my toy.
dropped:
  My brother dropped my plate.
  The mother dropped my toy.
  A boy dropped the cup.
  A child dropped the bag.
  Your sister dropped a book.
carried:
  A boy carried my bag.
  Your mother carried the child.
  My brother carried your friend.
  The dog carried the toy.
  The donkey carried the load.
ยง.ยง Experiencer-theme sentences
Verb: active sentences
saw:
  My brother saw your friend.
  Your dog saw the toy.
  Your sister saw a book.
  A boy saw my bag.
  The child saw a monkey.
heard:
  A boy heard the sound.
  The child heard the rules.
  My brother heard your friend.
  Your dog heard the toy.
  Your sister heard a squeak.
knew:
  My brother knew your friend.
  Your dog knew my cat.
  Your sister knew my brother.
  A boy knew my mother.
  The mother knew the dog.
liked:
  A boy liked the game.
  The child liked a monkey.
  My brother liked your friend.
  Your dog liked the toy.
  Your sister liked a book.
remembered:
  My brother remembered your friend.
  Your dog remembered my toy.
  Your sister remembered a book.
  A boy remembered the game.
  The child remembered the rules.
|
http://arxiv.org/abs/2306.08519v1
|
20230614140742
|
A multi-agent targeted trading equilibrium with transaction costs
|
[
"Jin Hyuk Choi",
"Jetlir Duraj",
"Kim Weston"
] |
q-fin.MF
|
[
"q-fin.MF",
"91B24, 91B51"
] |
A multi-agent targeted trading equilibrium with transaction costs
Jin Hyuk Choi, Jetlir Duraj, and Kim Weston[The first author is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1C1C1A01014142 and No. 2021R1A4A1032924). The second and third authors acknowledge support by the National Science Foundation under Grant No. DMS#1908255 (2019-2022). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.]
July 31, 2023
We prove the existence of a continuous-time Radner equilibrium with multiple agents and transaction costs. The agents are incentivized to trade towards a targeted number of shares throughout the trading period and seek to maximize their expected wealth minus a penalty for deviating from their targets. Their wealth is further reduced by transaction costs that are proportional to the number of stock shares traded. The agents' targeted number of shares is publicly known, making the resulting equilibrium fully revealing. In equilibrium, each agent optimally chooses to trade for an initial time interval before stopping trade. Our equilibrium construction and analysis involves identifying the order in which the agents stop trade. The transaction cost level impacts the equilibrium stock price drift. We analyze the equilibrium outcomes and provide numerical examples.
Keywords: Transaction costs, Radner equilibrium, Targeted trading, TWAP, Trading frictions
MSC2020: 91B24, 91B51
ยง INTRODUCTION
We provide the first equilibrium existence result for a continuous-time Radner equilibrium with proportional transaction costs and an arbitrary, finite number of agents. The agents seek to acquire (or sell) shares of a stock to achieve trading targets by the end of the trading period in a time-weighted average price (TWAP) fashion. Their wealth declines with trade because of transaction costs that are proportional to the number of shares traded. The agents seek to maximize their expected wealth minus a TWAP-related penalty term. The agents' trading targets are publicly known, which leads to a fully revealing equilibrium.
Due to the inherent difficulty of studying models with proportional transaction costs, previous equilibrium existence results are limited in scope. Previous existence results fall into two categories: stylized models with only two agents and models that employ approximation or averaging to accomplish market clearing. Two-agent economies are convenient in equilibrium because market clearing with two agents dictates that each agent must take opposite trades of the other. Thus, two-agent economies lead to the study of only one agent's optimization problem. Westonย <cit.>, Noh and Westonย <cit.>, and Loewenstein and Qinย <cit.> proved equilibrium existence with proportional transaction costs in stylized models with only two agents. Equilibria with transaction costs are so complicated that even approximate equilibrium results, such as Gonon et. al.ย <cit.> and Herdegen and Muhle-Karbeย <cit.>, crucially rely on a two-agent economy. Continuum-of-agent models, where market clearing is averaged over infinitely many agents, are studied in Vayanos and Vilaย <cit.>, Vayanosย <cit.>, Huangย <cit.>, and Dรกvilaย <cit.>. Lo et. al.ย <cit.> and Buss et. al.ย <cit.> address equilibrium with transaction costs from numerical perspective and do not prove existence. Our model builds off the model in Noh and Westonย <cit.>, and we prove equilibrium existence in an economy with proportional transaction costs and an arbitrary, finite number of agents.
Our approach to construct an equilibrium starts with a conjecture that is motivated by the trading behavior seen in Noh and Westonย <cit.>. We conjecture that each agent optimally chooses to trade for an initial time interval before ceasing trade without resuming. The model of Noh and Westonย <cit.> revealed monotonic trading behavior (always buying or always selling) and a simplification of the agents' first-order conditions. In this work, our conjecture allows us to formulate candidate equilibrium quantities, including the stock's drift and the optimal trading strategies. Having more than two agents leads to richer trading behavior and a more complicated stock price drift than Noh and Westonย <cit.>. Our proposed equilibrium stock price drift is not monotonic, leading to challenges in proving monotonic trading behavior. Our analysis centers around proving crucial properties of the proposed equilibrium quantities. In particular, we order the agents according to when they cease trading. This rank-based ordering facilitates the definitions of all equilibrium-related quantities and is crucial for our verification and equilibrium existence arguments.
We show that having more than two agents affects the equilibrium stock price drift when transaction costs are proportional. Larger transaction cost proportions lead to less trading, which leads to changes in the stock price drift. Our results contrast Noh and Westonย <cit.>, who found no impact of transaction costs on the stock price drift because of simplifications due to a two-agent economy. Our results share similarities with the two-agent, ergodic-style equilibrium with transaction costs of Gonon et. al.ย <cit.>. In Gonon et. al.ย <cit.>, the stock price drift has a larger range of values as transaction costs increase due to an increase in range of an underlying doubly-reflected Brownian motion state process. We provide analytic results to describe the changes in the equilibrium drift due to transaction costs.
The outline of our paper is as follows. In Sectionย <ref>, we set up our model and state our main result on equilibrium existence, Theoremย <ref>. Sectionย <ref> provides an illustrative example that motivates our definition for a rank-based ordering of the agents. We construct the rank-based ordering and define several equilibrium-related quantities in Sectionย <ref>. Using the rank-based ordering, we also refine and restate our main result as Theoremย <ref>. In Sectionย <ref>, several properties are proven leading up to the proof of Theoremย <ref>. Finally, Sectionย <ref> analyzes the outcomes of equilibrium and provides numerical examples.
ยง MODEL SET-UP
Let T>0 be a fixed time horizon, which we think of as one trading day in length. We work in a continuous-time setting and let B=(B_t)_tโ[0,T] be a Brownian motion on a probability space (ฮฉ,,). The market consists of two traded securities: a bank account and a stock. The bank account is a financial asset in zero-net supply with a constant zero interest rate. The stock is in constant net supply with the supply denoted by nโฅ0. It pays a dividend of D at time T. The random variable D is measurable with respect to ฯ(B_u: 0โค uโค T) and [D^2]<โ. We let ฯ=(ฯ_t)_tโ[0,T] be the progressively measurable process such that โซ_0^Tฯ_u^2du<โ and
D = E[D] + โซ_0^T ฯ_udB_u
guaranteed by the martingale representation theorem. The equilibrium stock price is an Itรด process that will be determined endogenously in equilibrium and is denoted S = (S_t)_tโ[0,T]. The price is constrained at time T so that S_T = D. We assume that prices of all goods are denominated in units of a single consumption good.
A finite number of investors, i=1,โฆ, I, trade in the market. They each seek to maximize expected wealth yet are subjected to inventory penalties throughout the trading period. Their wealth is further penalized by transaction costs, which are proportional to the rate of trade at the rate ฮป>0. The I=2 case is studied in Noh and Westonย <cit.>. We are mainly interested in the case when Iโฅ 3.
Each agent i has a targeted number of shares รฃ_i she wishes to acquire (or sell off) throughout the trading period. The random variables รฃ_1, โฆ, รฃ_I are assumed to satisfy [รฃ_i^2]<โ, i=1,โฆ, I, and be independent of the Brownian motion B. The trading targets รฃ_1, โฆ, รฃ_I are assumed to be known to all investors at time 0. The filtration is given by = (_t)_tโ[0,T] where
_t := ฯ(รฃ_1, โฆ, รฃ_I, B_u: uโ[0,t]), tโ[0,T],
and we assume that =_T.
A trading strategy ฮธ = (ฮธ_t)_tโ[0,T] denotes the number of shares held in the stock. For 1โค iโค I, we say that ฮธ is admissible for agent i if it is adapted to ℱ, càdlàg, of finite variation on [0,T] P-a.s., and satisfies โซ_0^T (ฯ_tฮธ_t)^2dt <โ and โซ_0^Tฮธ_t^2dt<โ. We write 𝒜_i to denote the collection of admissible strategies for agent i. Agent i is endowed at the beginning of the trading period with ฮธ_i,0- shares of stock, where ฮธ_i,0- is deterministic (constant). We normalize the endowed shares in the bank account to zero. For ฮธโ 𝒜_i, we allow for ฮธ_0 to differ from ฮธ_i,0-, as the agents may choose to trade a lump sum immediately.
For 1โค iโค I, since ฮธโ_i is of finite variation, we decompose ฮธ into
ฮธ_t = ฮธ_i,0- + ฮธ^โ_t - ฮธ^โ_t, tโ[0,T],
where ฮธ^โ, ฮธ^โ are adapted to ℱ, càdlàg, nondecreasing, and
{tโ[0,T]: dฮธ^โ_t>0}โฉ{tโ[0,T]: dฮธ^โ_t>0} = ∅.
A change in trading position is possible at time 0, and we allow for ฮธ^โ_0>0 or ฮธ^โ_0>0 as long asย (<ref>) holds.
At the close of the trading period, the agents consume their acquired dividends. They are subjected through their optimization problems to inventory penalties throughout the trading period. For 1โค iโค I and a given ฮธโ_i, the penalty term, or loss term, for agent i is measured by
L_i,T^ฮธ:= 1/2โซ_0^T ฮบ(t)(ฮณ(t)(รฃ_i-ฮธ_i,0-) - (ฮธ_t-ฮธ_i,0-))^2dt.
The function ฮบ:[0,T]โ (0,โ) describes the intensity of the penalty, while ฮณ:[0,T]โ[0,1] describes the desired intraday trading target trajectory. All agents share the same deterministic functions ฮบ and ฮณ. We assume that ฮณ is continuous, strictly increasing, ฮณ(0)=0, and ฮณ(T)=1. Our main example is time-weighted average price (TWAP), where the intraday trajectory function is ฮณ^TWAP(t):= t/T. We assume that ฮบ is strictly positive and continuous.
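To fix ideas, the penalty integral can be approximated on a time grid. The short Python sketch below is ours (names and the discretization are our choices, not the paper's): it evaluates L for a given holdings path under the TWAP trajectory with constant ฮบ, where a_i denotes the difference between the target and the initial position, รฃ_i - ฮธ_i,0-. Exactly tracking the target gives zero penalty, while not trading at all does not:

import numpy as np

def penalty(theta_path, t_grid, a_i, theta0, kappa=0.1, T=1.0):
    # L = 0.5 * int_0^T kappa * (gamma(t) * a_i - (theta_t - theta0))**2 dt,
    # with gamma(t) = t / T (TWAP) and constant kappa; trapezoidal quadrature.
    gamma = t_grid / T
    integrand = kappa * (gamma * a_i - (theta_path - theta0)) ** 2
    return 0.5 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t_grid))

t = np.linspace(0.0, 1.0, 1001)
print(penalty(100.0 * t, t, a_i=100.0, theta0=0.0))          # 0.0: perfect TWAP tracking
print(penalty(np.zeros_like(t), t, a_i=100.0, theta0=0.0))   # positive: no trading at all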
For 1โค iโค I and a trading strategy ฮธโ_i, agent i's wealth at t is given by
X^ฮธ_i,t = ฮธ_i,0-S_0 + โซ_0^t ฮธ_u dS_u - ฮป(ฮธ^โ_t + ฮธ^โ_t),
tโ[0,T].
We recall that the decomposition of ฮธ in (<ref>) allows for ฮธ^โ_0 and ฮธ^โ_0 to differ from zero. Agent i's objective is
E[X^ฮธ_i,T - L^ฮธ_i,T | ℱ_0] โถ max
over ฮธโ_i, where L^ฮธ_i,T is defined inย (<ref>) and X^ฮธ_i,T inย (<ref>).
In an equilibrium, the stock price is determined so that markets clear when all agents invest optimally.
Let ฮป>0 be a given transaction cost proportion. An Itรด process S = (S_t)_tโ[0,T] and trading strategies ฮธ_1โ_1, โฆ, ฮธ_Iโ_I form an equilibrium if
* Strategies are optimal: For 1โค iโค I, we have that
E[X^ฮธ_i_i,T - L^ฮธ_i_i,T | ℱ_0]
= sup_ฮธโ 𝒜_i E[X^ฮธ_i,T - L^ฮธ_i,T | ℱ_0],
where L^ฮธ_i,T is defined inย (<ref>), X^ฮธ_i,T inย (<ref>), and the stock dynamics correspond to S with S_T=D.
* Markets clear: We have โ_i=1^Iฮธ_i,t = n for all tโ[0,T].
Market clearing in Definitionย <ref>(2) requires clearing of the stock market, however Walras' Law holds in our model in that the other markets (money market and real goods) clear as well. We omit the proof here, and instead refer the reader to the analogous result in Noh and Westonย <cit.> (Lemma 3.2).
Theorem <ref> is the main result of the paper.
For any transaction cost proportion ฮป>0, there exists an equilibrium.
Sectionย <ref> motivates our equilibrium construction, and Sectionsย <ref> and <ref> provide the details on the equilibrium construction, its properties, and the proof of Theoremย <ref>.
ยง AN ILLUSTRATIVE EXAMPLE
Motivated by the trading behavior in Noh and Westonย <cit.>, we formulated the conjecture that for every agent 1โค iโค I, there exists an _0-measurable stop-trade time ฯ_i such that agent i trades monotonically on [0, ฯ_i] and does not trade on (ฯ_i, T]. In Sectionsย <ref> and <ref> we construct equilibrium quantities based on this conjecture and prove that the conjecture holds. The construction and proof require heavy notation and that several intricate properties hold. The example in this section provides evidence that the simple idea behind our conjecture is reasonable.
We consider an example with I=20 agents, trading day length T=1, transaction cost proportion ฮป = 0.2, and functions ฮบ(t) = 0.1 and ฮณ(t) = t, tโ[0, 1]. We choose the stock supply to be n=0 and the dividend as D any random variable with [D^2]<โ and [D]=0. The agents are endowed with ฮธ_1,0- = โฆ = ฮธ_I,0- = 0 shares initially and have trading targets given by
รฃ_1 = -300 รฃ_6 = -60 รฃ_11 =6 รฃ_16 =70
รฃ_2 = -202 รฃ_7 = -35 รฃ_12 =11 รฃ_17 =115
รฃ_3 = -165 รฃ_8 = -20 รฃ_13 =23 รฃ_18 =150
รฃ_4 = -102 รฃ_9 = -15 รฃ_14 =30 รฃ_19 =220
รฃ_5 = -75 รฃ_10 = 0 รฃ_15 =63 รฃ_20 =290.
We conjecture that for each 1โค i โค I=20, there exists an ℱ_0-measurable stop-trade time ฯ_i such that agent i trades on [0,ฯ_i] and does not trade at times (ฯ_i, T]. Using this conjecture, Figureย <ref> plots the trade intervals of each agent.
Based on Figureย <ref>, the stop-trade times for the agents appear to be ordered: agents with targets that are closest to the โaverageโ target stop trading sooner than agents with more extreme targets. Agent 10 with รฃ_10=0 chooses never to trade, while the agents with more extreme targets choose to trade for a longer duration. Of course, what is meant by โaverageโ must be determined. It is possible to describe the ordering of the stop-trade times based on an idea of relevant averages, and this construction is detailed in Sectionย <ref>. We also notice that the last two agents to stop trading are agents 1 and 20, who choose to stop trading at the same time in order to satisfy the market clearing condition.
A necessary property of the stop-trade time conjecture is that trading occurs in a monotone fashion: an optimizing agent that starts the trading period by buying shares of stock should not want to sell shares of stock later on. The equilibrium stock price drift appears linearly in the optimal trading strategy formulas and, thus, plays a large role in the monotonicity of the strategies. Figureย <ref> plots the equilibrium stock price drift.
We see from Figureย <ref> that the equilibrium stock price drift is not necessarily monotone, which creates a challenge for the monotonicity of the optimal trading strategies. Nevertheless, Figureย <ref> plots a subset of the optimal trading strategies, which are monotone.
ยง FORMAL DEFINITIONS AND THE RANK-BASED ORDERING
Up until now, the agents have been labeled by an alphabetical ordering, with the index iโ{1,โฆ, I} appearing in subscripts. However, it will be convenient to introduce an alternative rank-based ordering, which we denote by superscripts in parentheses. We seek to order the times at which agents cease trading by
0 โคฯ1โคโฆโคฯI-1=ฯI<T.
The time ฯj represents the _0-measurable time when a total of j out of the I agents have stopped trading. The superscript (j) denotes the j^th agent to have dropped out of the market, whereas the subscript i in ฯ_i denotes the _0-measurable time when agent i has chosen to stop trading. If agent i is the j^th agent to stop trading, then ฯ_i = ฯj. We will see that it is possible for ฯj=ฯj+1, meaning two agents choose to stop trade at the same time. Also, we always have ฯI-1=ฯI since the last two remaining active traders must cease trade at the same time for stock market clearing to hold.
We define the rank-based ordering by backward induction starting with (I-1) and (I).
In this construction, we will give meaning to the terms: aj, aโฅ j_ฮฃ, and Aj for trading targets and their aggregate target counterparts; ฯj for stop-trade times; and j for index sets to translate between alphabetic and rank-based ordering. After defining these quantities, we define the proposed equilibrium quantities: Yj for candidate optimal first-order conditions; ฮธj for the candidate optimal trading strategies; and ฮผ for the stock drift.
For convenience, we define F:[0,T] โ as
F(t):=โซ_t^Tฮบ(u)(ฮณ(u)-ฮณ(t))du.
For 1โค iโค I, we define the relative trading targets by a_i := รฃ_i - ฮธ_i,0-. The relative trading target a_i represents how many shares agent i must obtain (or sell), relative to her initial position. As we'll see below, the terms a_i are more relevant to the rank-based definitions and results than the targets รฃ_i.
Base case: jโ{I-1,I}. We define the initial index set by I-1:={1,โฆ, I}. For i,lโI-1, we put
ฮทI-1_i,l := inf{ tโ[0,T]: |(a_i-1/2(a_l+a_i))F(t)| โคฮป}.
We choose i^*, l^* to maximize ฮทI-1_i,l so that
(i^*,l^*) โ{ฮทI-1_i,l: i,lโI-1}.
We put
ฯI-1 :=ฯI:=ฮทI-1_i^*,l^*,
รฃI-1 :=รฃ_i^*, รฃI :=รฃ_l^*,
ฮธI-1_0- := ฮธ_i^*, 0-, ฮธI_0- := ฮธ_l^*, 0-,
aI-1 := a_i^* = รฃI-1 - ฮธI-1_0-, aI := a_l^* = รฃI - ฮธI_0-,
aโฅ I-1_ฮฃ := aI-1+aI,
AI-1 := aI-1-1/2 aโฅ I-1_ฮฃ,
AI :=aI-1/2aโฅ I-1_ฮฃ = -AI-1.
We call the values ฯI-1 and ฯI stop-trade times, which are the _0-measurable times that the rank-based agents (I-1) and (I) stop trading. We also denote agent (I-1)'s and (I)'s set of admissible strategies by I-1:=_i^* and I:=_l^*. Finally, we put I-2:=I-1\{i^*, l^*}, which will set us up for the next iteration of this recursive definition by removing alphabetic indices that have already been accounted for.
Backward induction: Let j be given with 1โค jโค I-2. We assume that the following have been previously defined: j,โฆ,I-1; ฯj+1,โฆ, ฯI; aj+1,โฆ, aI; aโฅ j+1_ฮฃ, โฆ, aโฅ I-1_ฮฃ; and Aj+1, โฆ, AI-1. For iโj, we define
ฮทj_i := inf {tโ[0,T]: |I-j+1/I-j(a_i-1/I-j+1(a_i+aโฅ j+1_ฮฃ))F(t) ..
.. + โ_k=j+1^I-2Ak/I-k F(tโจฯk)|โคฮป}.
In the definition of ฮทj_i and throughout this work, we allow for the possibility that summations are empty. When the upper summation index is strictly less than the lower index, our convention is that the summation is empty and zero-valued.
We choose i^* to maximize ฮทj_i by
i^*โ{ฮทj_i: iโj},
and define
ฯj :=ฮทj_i^*,
รฃj := รฃ_i^*,
ฮธj_0- := ฮธ_i^*, 0-,
aj := รฃj-ฮธj_0-,
aโฅ j_ฮฃ :=aj+aโฅ j+1_ฮฃ,
Aj := aj-1/I-j+1aโฅ j_ฮฃ,
j-1 :=j\{i^*}.
We call ฯj a stop-trade time since it is the _0-measurable time that the rank-based agent (j) stops trading. We denote agent (j)'s set of admissible trading strategies by j:=_i^*.
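To make the backward-induction construction concrete, the following Python sketch computes the rank-based targets a^(j), the quantities A^(j), and the stop-trade times tau^(j) for the inputs of the illustrative example above (kappa(t)=0.1, gamma(t)=t, T=1, lambda=0.2, zero initial positions, so the relative targets a_i coincide with the listed trading targets). The grid approximation of each infimum and all helper names are ours, intended only to illustrate the definitions, not to be part of the construction itself.

```python
import numpy as np

# Example inputs: kappa(t) = 0.1, gamma(t) = t, T = 1, transaction cost lam = 0.2.
T, lam = 1.0, 0.2
kappa = lambda t: 0.1 * np.ones_like(np.asarray(t, dtype=float))
gamma = lambda t: np.asarray(t, dtype=float)
a = np.array([-300, -202, -165, -102, -75, -60, -35, -20, -15, 0,
              6, 11, 23, 30, 63, 70, 115, 150, 220, 290], dtype=float)  # relative targets a_i
I = len(a)
tgrid = np.linspace(0.0, T, 20001)

def F(t):
    # F(t) = int_t^T kappa(u) (gamma(u) - gamma(t)) du, computed by quadrature
    u = np.linspace(t, T, 400)
    return np.trapz(kappa(u) * (gamma(u) - gamma(t)), u)

Fgrid = np.array([F(t) for t in tgrid])
Fmax = lambda s: np.interp(np.maximum(tgrid, s), tgrid, Fgrid)        # t -> F(t v s)

def first_time(vals):
    # inf{t in [0, T] : |vals(t)| <= lam}, approximated on the grid
    ok = np.abs(vals) <= lam
    return tgrid[np.argmax(ok)] if ok.any() else T

# Base case j = I-1, I: the pair (i*, l*) maximizing eta^{(I-1)}_{i,l}
alive = list(range(I))
eta_pair = lambda i, l: first_time((a[i] - 0.5 * (a[i] + a[l])) * Fgrid)
i_star, l_star = max(((i, l) for i in alive for l in alive if i != l),
                     key=lambda il: eta_pair(*il))
tau    = {I - 1: eta_pair(i_star, l_star), I: eta_pair(i_star, l_star)}
a_rank = {I - 1: a[i_star], I: a[l_star]}
a_sum  = {I - 1: a[i_star] + a[l_star]}                  # a^{>= I-1}_Sigma
A      = {I - 1: a_rank[I - 1] - 0.5 * a_sum[I - 1]}
A[I]   = -A[I - 1]
alive  = [i for i in alive if i not in (i_star, l_star)]

# Backward induction j = I-2, ..., 1
for j in range(I - 2, 0, -1):
    tail = sum(A[k] / (I - k) * Fmax(tau[k]) for k in range(j + 1, I - 1))   # empty sum = 0
    eta = lambda i: first_time((I - j + 1) / (I - j)
                               * (a[i] - (a[i] + a_sum[j + 1]) / (I - j + 1)) * Fgrid + tail)
    i_star = max(alive, key=eta)
    tau[j], a_rank[j] = eta(i_star), a[i_star]
    a_sum[j] = a_rank[j] + a_sum[j + 1]
    A[j] = a_rank[j] - a_sum[j] / (I - j + 1)
    alive.remove(i_star)

print({j: round(tau[j], 3) for j in sorted(tau)})   # stop-trade times; should be nondecreasing in j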
Other rank-based and equilibrium quantities:
The agents' penalty terms, wealth processes, and optimization problems were previously defined using the alphabetic notation, and now we rewrite these definitions using the rank-based notation. For 1โค j โค I and ฮธโj, agent (j)'s penalty term associated with strategy ฮธ is given by
L_T^(j),ฮธ:= 1/2โซ_0^T ฮบ(t)(ฮณ(t)(รฃj-ฮธj_0-) - (ฮธ_t-ฮธj_0-))^2dt,
and agent (j)'s wealth process for strategy ฮธ is
X^(j),ฮธ_t = ฮธj_0-S_0 + โซ_0^t ฮธ_u dS_u - ฮป(ฮธ^โ_t + ฮธ^โ_t),
tโ[0,T].
Finally, agent (j) seeks to optimize
[X^(j),ฮธ_T - L^(j),ฮธ_T | _0] โถ max
over ฮธโj. When rewriting the definitions of the penalty terms, wealth processes, and optimization problems, we have not changed any content beyond the agents' notational labels. The definition of equilibrium, Definitionย <ref>, is also readily adapted to the rank-based notation.
For notational purposes, we define ฯ0:=0. The candidate equilibrium stock price drift ฮผ is defined by
ฮผ_t :=
-ฮบ(t)(ฮณ(t)aโฅ j+1_ฮฃ/I-j+โ_k=1^jฮณ(ฯk)Ak/I-k), tโ[ฯj,ฯj+1), 0โค jโค I-2,
-ฮบ(t)(ฮณ(t)aโฅ I-1_ฮฃ/2+โ_k=1^I-2ฮณ(ฯk)Ak/I-k), tโ[ฯI-1,T].
For each 1โค jโค I, the candidate optimal trading strategy for agent (j) is given by
ฮธj_t :=
ฮธj_0-+ฮผ_t/ฮบ(t) + ฮณ(t)aj, tโ[0,ฯj],
ฮธj_ฯj, tโ(ฯj, T].
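Continuing the numerical sketch above (and reusing its variables), the candidate drift and strategies can be evaluated directly from these piecewise formulas. The market-clearing printout is only a sanity check of the discretized construction, not a substitute for the proof given below.

```python
def mu_over_kappa(t):
    # mu_t / kappa(t); j counts how many of the rank-based agents 1, ..., I-2 have stopped by time t
    j = sum(1 for k in range(1, I - 1) if tau[k] <= t)
    if t < tau[I - 1]:
        lead = gamma(t) * a_sum[j + 1] / (I - j)
    else:
        j, lead = I - 2, gamma(t) * a_sum[I - 1] / 2
    tail = sum(gamma(tau[k]) * A[k] / (I - k) for k in range(1, j + 1))
    return float(-(lead + tail))

def theta(j, t, theta0=0.0):
    # candidate optimal strategy of rank-based agent (j); frozen after tau^{(j)}
    s = min(t, tau[j])
    return theta0 + mu_over_kappa(s) + float(gamma(s)) * a_rank[j]

# sanity check of market clearing: sum_j theta^{(j)}_t should equal n = 0 (up to grid error)
for t in (0.0, 0.25, 0.5, 0.9, 1.0):
    print(t, round(sum(theta(j, t) for j in range(1, I + 1)), 4))
```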
Theoremย <ref> (below) restates Theoremย <ref> using the rank-based notation. The proof of Theoremย <ref> will show that ฮผ in (<ref>) is the equilibrium stock drift, and ฮธj is agent (j)'s optimal trading strategy. The proof of Theoremย <ref> is provided at the end of Sectionย <ref>.
There exists an equilibrium in which the stock price is given by
S_t := [D] + โซ_0^t ฯ_u dB_u - โซ_t^T ฮผ_u du, tโ[0,T],
and ฮธ1โ1, โฆ, ฮธIโI are the optimal trading strategies.
Finally, we work towards the definition of the processes Yj, which are related to the agents' candidate first-order conditions. To ease notation, we introduce ฮj for 1โค jโค I as
ฮj_t:= Ajโซ_tโจฯj^Tฮบ(u)(ฮณ(u)-ฮณ(ฯj))du, tโ[0,T].
Since ฮบ is strictly positive and ฮณ is strictly increasing, we observe that
tโฆ F(t) is strictly decreasing,
| ฮj_t | โค| Aj| F(ฯj) for tโ [0,T].
Using this notation, agent (j) for 1โค jโค I has the candidate first-order condition quantity
Yj_t := I-j+1/I-jฮj_t + โ_k=j+1^I-2ฮk_t/I-k, 1โค j โค I-2,
ฮj_t, j =I-1, I,
, tโ[0,T].
For all 1โค jโค I, we notice that if ฯjโคฯj+1โคโฆโคฯI, then Yj_t = Yj_ฯj for all tโ[0,ฯj]. The process Yj is similar to the expression appearing inside (<ref>) and (<ref>), and we will prove below in Theoremย <ref> that |Yj|โคฮป.
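As an informal numerical check of the bound |Yj|โคฮป established below, one can evaluate Yj on the grid of the earlier sketch (again reusing its variables); the discretization of the stop-trade times introduces a small error.

```python
def Lam(j, t):
    # Lambda^{(j)}_t = A^{(j)} int_{t v tau^{(j)}}^T kappa(u) (gamma(u) - gamma(tau^{(j)})) du
    u = np.linspace(max(t, tau[j]), T, 400)
    return A[j] * np.trapz(kappa(u) * (gamma(u) - gamma(tau[j])), u)

def Y(j, t):
    if j >= I - 1:
        return Lam(j, t)
    return (I - j + 1) / (I - j) * Lam(j, t) + sum(Lam(k, t) / (I - k) for k in range(j + 1, I - 1))

worst = max(abs(Y(j, t)) for j in range(1, I + 1) for t in tgrid[::500])
print(worst)   # should stay below lam = 0.2, up to discretization error
```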
The following lemma is a convenient representation of Yj in (<ref>). The result holds even without assuming (or proving) that |Yj|โคฮป or ฯjโคฯj+1โคโฆโคฯI. Indeed, once we prove that the Yj processes are bounded by ฮป, this lemma will be useful for showing that the times ฯj are ordered.
For 1โค jโค I-2, we have
โ_k=j+1^I-2ฮk/I-k
= 1/I-jโ_k=j+1^I-2Yk.
Consequently,
Yj = I-j+1/I-jฮj
+ 1/I-jโ_k=j+1^I-2 Yk.
We prove the claim by backward induction on j.
Base case: For j=I-2, the sum in (<ref>) is empty, and so (<ref>) and (<ref>) trivially hold. For j=I-3,
โ_k=j+1^I-2ฮk/I-k = ฮj+1/I-j-1 = Yj+1/I-j = 1/I-jโ_k=j+1^I-2 Yk,
as desired.
Induction Hypothesis: For some 1โค jโค I-3, suppose that
โ_k=j+2^I-2ฮk/I-k
= 1/I-j-1โ_k=j+2^I-2Yk.
Induction Step: Since
Yj+1 = I-j/I-j-1ฮj+1 + โ_k=j+2^I-2ฮk/I-k,
we deduce that
ฮj+1/I-j-1 = Yj+1/I-j
- 1/I-jโ_k=j+2^I-2ฮk/I-k.
Thus, we have
โ_k=j+1^I-2ฮk/I-k = Yj+1/I-j + I-j-1/I-jโ_k=j+2^I-2ฮk/I-k
= 1/I-jโ_k=j+1^I-2 Yk,
by the induction hypothesis.
By (<ref>), we see that (<ref>) holds too.
The rank-based definitions lead to an ordering of the stop-trade times and boundedness of the candidate first-order conditions.
We have that
0โคฯ1โคฯ2โคโฆโคฯI-1=ฯI<T,
and for all 1โค jโค I and tโ[0,T], we have that |Yj_t|โคฮป.
First, we notice that for all 1โค jโค I, we have 0โคฯj<T. Since ฯI-1=ฯI and YI-1=-YI, we prove the claim for indices j with 1โค jโค I-1. We proceed by backward induction on j.
Base Case: Beginning with j=I-2, we seek to prove that ฯI-2โคฯI-1, |YI-2|โคฮป, and |YI-1|โคฮป. The definitions of ฯI-1 and ฯI-2 imply that ฯI-1 and ฯI-2 can be rewritten as
ฯI-1 = inf {tโ[0,T]: | AI-1| F(t)โคฮป},
ฯI-2 = inf {tโ[0,T]: |3/2 AI-2|F(t)โคฮป},
which imply
| AI-1| F(ฯI-1) โคฮป and |3/2 AI-2| F(ฯI-2) โคฮป.
The definition of YI-1 and YI-2 in (<ref>) are expressed by
YI-1 = ฮI-1,
YI-2 = 3/2ฮI-2.
The above expression, (<ref>), and (<ref>) imply that |YI-1|โคฮป and |YI-2|โคฮป.
It remains to show that ฯI-2โคฯI-1. By (<ref>), it suffices to prove that |3/2 AI-2|โค|AI-1|.
By the definition of ฯI-1, if ฯI-1>0, then we have
|aI-2-aI-2+aI/2|
โค|aI-1-aI-1+aI/2|.
Then, we have
|3/2 AI-2|
=|aI-2-aI-2+aI/2| by algebra
โค|aI-1-aI-1+aI/2| by (<ref>)
= |AI-1|.
Therefore, 3/2|AI-2| โค|AI-1|, which implies that ฯI-2โคฯI-1.
Induction Hypothesis: For some 1โค jโค I-3, suppose that ฯj+1โคโฆโคฯI-1 and |Yj+1|, โฆ, |YI-1| โคฮป.
Induction Step: Let 1โค jโค I-3 be given, and suppose that the Induction Hypothesis holds for j. We seek to show that ฯjโคฯj+1 and |Yj|โคฮป.
First, we show ฯjโคฯj+1. The definition of ฯj implies that
ฯj = inf{tโ[0,T]: |I-j+1/I-jAjF(t) + โ_k=j+1^I-2Ak/I-k F(tโจฯk)|โคฮป}
Therefore, by the definition of ฯj and the induction hypothesis that ฯj+1โคโฆโคฯI-1, it is enough to show that
| I-j+1/I-jAj F(ฯj+1) + โ_k=j+1^I-2Ak/I-kF(ฯk) | โคฮป.
Simple algebra produces
I-j+1/I-jAj + 1/I-j-1Aj+1=I-j/I-j-1รj+1,
where รj+1:=aj-1/I-j(aj+aโฅ j+2_ฮฃ).
By the induction hypothesis that |Yj+1|, โฆ, |YI-2|โคฮป and identity (<ref>) in Lemmaย <ref>, we have
|โ_k=j+2^I-2Ak/I-kF(ฯk)|= | โ_k=j+2^I-2ฮk_ฯj+1/I-k| = 1/I-j-1|โ_k=j+2^I-2 Yk_ฯj+1| โคI-j-3/I-j-1ฮป< ฮป.
Using (<ref>), we check that (<ref>) is equivalent to
| I-j/I-j-1รj+1F(ฯj+1)
+โ_k=j+2^I-2Ak/I-kF(ฯk) | โคฮป.
Suppose that (<ref>) is false. Then, by (<ref>) and (<ref>), the following inequality should hold:
| I-j/I-j-1รj+1F(t)
+โ_k=j+2^I-2Ak/I-kF(ฯk) | > ฮป, for tโ [0,ฯj+1].
The above inequality implies ฮทj+1_i^*>ฯj+1 where a_i^*=aj, which contradicts the construction of ฯj+1. Therefore, we conclude that (<ref>) holds, so does (<ref>).
Second, we prove that |Yj|โคฮป on [0,T]. Since we have ฯjโคโฆโคฯI-1 from the first part, we know that Yj_t=Yj_ฯj for tโ[0,ฯj]. The definition of ฯj and ฯjโคโฆโคฯI-1 imply |Yj_ฯj|โคฮป. Therefore, we conclude that |Yj_t|โคฮป for tโ[0,ฯj]. We also have that Yj_T=0.
It remains to prove that |Yj_t|โคฮป for tโ (ฯj,T). Due to the strict positivity of ฮบ and the continuity of Yj, it is enough to show that
Yj_t/โซ_t^Tฮบ(u)duโคฮป/โซ_t^Tฮบ(u)du and -Yj_t/โซ_t^Tฮบ(u)duโคฮป/โซ_t^Tฮบ(u)du, tโ[ฯj,T).
We show this by noting that (<ref>) holds at t=ฯj and by checking that
| d/dt[Yj_t/โซ_t^Tฮบ(u)du] | โคd/dt[ฮป/โซ_t^Tฮบ(u)du], tโ[ฯj, T).
Since ฮบ is strictly positive and continuous on [0,T], the above expressions are well-defined.
For tโ[ฯj,T), either tโ[ฯj+m, ฯj+m+1) for some 0โค mโค I-j-3 or tโ[ฯj+m, T) for m=I-j-2. Then,
Yj_t
= I-j+1/I-jฮj_t + โ_k=j+1^j+mฮk_t/I-k + โ_l=j+m+1^I-2ฮl_ฯl/I-l,
where the first two terms depend on t and the last sum does not, and, moreover,
Yj_t/โซ_t^Tฮบ(u)du = (I-j+1/I-jAj + โ_k=j+1^j+mAk/I-k)โซ_t^Tฮบ(u)(ฮณ(u)-ฮณ(ฯj+m))du/โซ_t^Tฮบ(u)du + โ_l=j+m+1^I-2ฮl_ฯl/I-l/โซ_t^Tฮบ(u)du
+ I-j+1/I-jAj(ฮณ(ฯj+m)-ฮณ(ฯj)) + โ_k=j+1^j+m-1Ak/I-k(ฮณ(ฯj+m)-ฮณ(ฯk)),
where the terms on the first line of the display depend on t and those on the second line do not.
As usual, a summation is considered empty and zero-valued if its upper index is strictly less than its lower index.
d/dt[1/โซ^T_tฮบ(u)du] = ฮบ(t)/(โซ^T_tฮบ(u)du)^2,
and for any sโ[0,T],
d/dt [โซ^T_tฮบ(u)(ฮณ(u)-ฮณ(s))du/โซ^T_tฮบ(u)du]
= ฮบ(t)/(โซ^T_tฮบ(u)du)^2(-(ฮณ(t)-ฮณ(s))โซ^T_tฮบ(u)du + โซ^T_tฮบ(u)(ฮณ(u)-ฮณ(s))du)
= ฮบ(t)/(โซ^T_tฮบ(u)du)^2 F(t).
Therefore, we have
d/dt[Yj_t/โซ_t^Tฮบ(u)du]
= ฮบ(t)/(โซ_t^Tฮบ(u)du)^2( (I-j+1/I-jAj + โ_k=j+1^j+mAk/I-k)F(t) + โ_l=j+m+1^I-2ฮl_ฯl/I-l).
Care should be taken when considering derivatives calculated at the stop-trade times ฯj, ฯj+1, โฆ, ฯI-2, since these times are at boundary points of the calculation intervals. In fact, a closer inspection reveals that the left and right derivatives of Yj/โซ_ยท^T ฮบ are the same.
Now we are ready to prove (<ref>), and hence to conclude the induction step. By the above expression and (<ref>), it suffices to show that
| (I-j+1/I-jAj + โ_k=j+1^j+mAk/I-k)F(t) + โ_l=j+m+1^I-2ฮl_ฯl/I-l| โคฮป.
Since ฯjโคฯj+1โคโฆโคฯI-2 and ฮl_ฯl=AlF(tโจฯl) for lโฅ j+m+1, the above inequality is equivalent to
| I-j+1/I-jAj F(t) + โ_k=j+1^I-2Ak/I-k F(tโจฯk) | โคฮป.
Indeed, the definition of ฯj and the inequality tโฅฯj imply that (<ref>) should hold.
ยง VERIFICATION AND RELATED INEQUALITIES
In order to prove that the definitions of Sectionย <ref> satisfy the definition of equilibrium, we need to provide a verification argument.
The argument will rely on Theoremย <ref>, the properties of ฮธj in Propositionย <ref>, and the representation of Yj in Propositionย <ref>.
The following lemmas will help to prove needed properties of the candidate optimal trading strategies in Propositionย <ref>.
Let 1โค jโค I-1 and 1โค iโค j-1. If ฯj>0 and Ajโฅ 0 (Ajโค 0, resp.), then
Yj_t=ฮป for tโ [0,ฯj] and ajโฅ ai (Yj_t=-ฮป for tโ [0,ฯj] and ajโค ai, resp.).
Without loss of generality, we consider the case Ajโฅ 0, since the case Ajโค 0 is handled analogously. Representation (<ref>), the assumption that ฯj>0, and |Yj+1|, โฆ, |YI-1|โคฮป imply that the sign of Yj_ฯj equals the sign of Aj, and so Yj_ฯj=ฮป.
Since ฯjโคฯj+1โคโฆโคฯI by Theoremย <ref>, the definition of Yj in (<ref>) implies Yj_t = Yj_ฯj=ฮป for all tโ[0,ฯj].
Suppose that there exists 1โค i โค j-1 such that aj < ai. Then,
I-j+1/I-j(ai- ai+aโฅ j+1_ฮฃ/I-j+1) = ai- 1/I-jaโฅ j+1_ฮฃ
>aj- 1/I-jaโฅ j+1_ฮฃ = I-j+1/I-j Ajโฅ 0.
Then, the above inequality implies that for tโ [0,ฯj],
I-j+1/I-j(ai- ai+aโฅ j+1_ฮฃ/I-j+1) F(t) + โ_k=j+1^I-2Ak/I-k F(ฯk)> Yj_ฯj= ฮป.
The inequalityย (<ref>) contradicts the maximality of ฯj in the definition of ฯj.
For ฮผ defined in (<ref>), the map tโ[0,T]โฆฮผ_t/ฮบ(t) is continuous.
Since ฮณ is continuous, it is enough to check that for 1โค j โค I-1,
lim_tโฯjฮผ_t/ฮบ(t) = ฮผ_ฯj/ฮบ(ฯj).
By (<ref>), we have
lim_tโฯjฮผ_t/ฮบ(t) =
-ฮณ(ฯj)aโฅ j_ฮฃ/I-j+1 - โ_k=1^j-1ฮณ(ฯk)Ak/I-k
=
-ฮณ(ฯj)aโฅ j+1_ฮฃ/I-j - โ_k=1^jฮณ(ฯk)Ak/I-k, 1โค j โค I-2,
-ฮณ(ฯj)aโฅ j_ฮฃ/I-j+1 - โ_k=1^j-1ฮณ(ฯk)Ak/I-k, j =I-1.
,
where the second equality is due to aโฅ j_ฮฃ/I-j+1= aโฅ j+1_ฮฃ/I-j+ Aj/I-j. By (<ref>) and (<ref>), we conclude that (<ref>) holds.
For 1โค jโค I-1, ฮธj is continuous and monotone on [0,T]. If Ajโฅ 0 (Ajโค 0, resp.), then ฮธj is monotone increasing (decreasing, resp.).
Let 1โค jโค I-1 be given. The continuity of ฮธj is given by (<ref>) and Lemmaย <ref>.
For 0โค i โค j-1, by the definitions of ฮผ in (<ref>) and ฮธj in (<ref>), we have that
ฮธj = ฮธj_0-+ฮณ(aj-aโฅ i+1_ฮฃ/I-i) - โ_k=1^i 1/I-kฮณ(ฯk)Ak on [ฯi, ฯi+1).
Since ฮธj is constant on [ฯj, T], we prove that ฮธj is monotone on [0,ฯj].
By (<ref>) and ฮธj's continuity at the stop-trade times, it is enough to show that for any 1โค j' โค j-1, the signs of
aj- aโฅ j+1_ฮฃ/I-j and
aj- aโฅ j'+1_ฮฃ/I-j' agree, under the assumption that ฯj>0. Further assume that
Ajโฅ 0, which is equivalent to the assumption that aj- aโฅ j+1_ฮฃ/I-jโฅ 0. Then, by Lemmaย <ref>, we have ajโฅ ai for 1โค i โค j-1. Using this, we obtain
(I-j')(aj- aโฅ j'+1_ฮฃ/I-j') =
(aj-aj'+1) + โฏ (aj-aj-1) +
(I-j)( aj- aโฅ j+1_ฮฃ/I-j)
โฅ 0.
Thus, ฮธj is monotone increasing. The case when Ajโค 0 is handled analogously.
Let 1โค j โค I-2 and 0โค m โค I-j-2. Then,
I-j+1/I-jAj +โ_k=j+1^j+mAk/I-k=
aj - aโฅ j+m+1_ฮฃ/I-j-m.
Observe that
aj - aโฅ j+m+1_ฮฃ/I-j-m = aj-aj+m+1+Aj+m+1,
and
I-j+1/I-jAj = aj - aj+1 + Aj+1,
I-j/I-j-1Aj+1 = aj+1 - aj+2 + Aj+2,
โฏ
I-j-m+1/I-j-mAj+m = aj+m - aj+m+1 + Aj+m+1.
We sum the right and left sides above and obtain the desired result.
The following proposition establishes key properties that allow us to prove that the Yj processes satisfy the first-order conditions of the optimization problem (<ref>).
For each 1โค jโค I,
Yj_t = [โซ_t^T ฮบ(u)(ฮผ_u/ฮบ(u)
+ ฮณ(u)(รฃj-ฮธj_0-)
- (ฮธj_u - ฮธj_0-))du
| _t], tโ[0,T],
and
โซ_0^T (ฮป-Yj_t)d_t
= โซ_0^T (ฮป+Yj_t)d_t
= 0.
Let 1โค jโค I-2 be given. For jโ{I-1, I}, the calculations are similar, though simpler, and we do not include them here.
For tโ[ฯj, T], either tโ[ฯj+m, ฯj+m+1) for some 0โค m โค I-j-2 or tโ[ฯj+m, T] for m=I-j-1. In either case, we have
ฮธj_t = ฮธj_ฯj
= ฮธj_0- + I-j+1/I-jฮณ(ฯj)Aj
- โ_k=1^j ฮณ(ฯk)Ak/I-k,
and
ฮผ_t/ฮบ(t) = -ฮณ(t)aโฅ j+m+1_ฮฃ/I-j-m
- โ_k=1^j+mฮณ(ฯk)Ak/I-k.
Hence,
ฮผ_t/ฮบ(t) + ฮณ(t)aj+ฮธj_0- - ฮธj_t
= ฮณ(t) (aj-aโฅ j+m+1_ฮฃ/I-j-m)
- โ_k=j+1^j+mฮณ(ฯk)Ak/I-k
- I-j+1/I-jฮณ(ฯj)Aj
= ฮณ(t) (I-j+1/I-jAj+โ_k=j+1^j+mAk/I-k)
- โ_k=j+1^j+mฮณ(ฯk)Ak/I-k
- I-j+1/I-jฮณ(ฯj)Aj
= I-j+1/I-jAj(ฮณ(t)-ฮณ(ฯj))
+ โ_k=j+1^I-2Ak/I-k(ฮณ(tโจฯk)-ฮณ(ฯk)),
where the second equality is due to Lemmaย <ref> and the third equality is due to t<ฯk for kโฅ j+m+1.
Therefore, for any tโ[ฯj, T],
โซ_t^T ฮบ(u) (ฮผ_u/ฮบ(u)+ฮณ(u)aj + ฮธj_0--ฮธj_u)du
= โซ_t^Tฮบ(u)(I-j+1/I-jAj(ฮณ(u)-ฮณ(ฯj))
+ โ_k=j+1^I-2Ak/I-k(ฮณ(uโจฯk)-ฮณ(ฯk)))du
= I-j+1/I-jฮj_t + โ_k=j+1^I-2ฮk_t/I-k
= Yj_t,
Since Yj and all terms in (ฮผ/ฮบ + ajฮณ + ฮธj_0--ฮธj) are _0-measurable, we conclude that (<ref>) holds.
Finally, we prove that the reflection conditionย (<ref>) holds. Suppose that ฯj=0. Then, Theoremย <ref> implies ฯk=0 for 1โค k โค j. Therefore, (<ref>)-(<ref>) and ฮณ(0)=0 produce ฮธj_t=ฮธj_0=ฮธj_0- for tโ [0,T]. In this case, (<ref>) trivially holds.
Suppose now that ฯj>0 and Ajโฅ 0. By Lemmaย <ref> and Propositionย <ref>, we have
Yj_t = ฮป for tโ [0,ฯj],
ฮธj_t = ฮธj_T for tโ [ฯj,T],
_t =0 for tโ [0,T].
The above equations produce (<ref>). If ฯj>0 and Ajโค 0, then (<ref>) still holds by the same argument.
Next, we work towards our proof of equilibrium existence by studying the individual agents' optimization problems. We recall that for 1โค jโค I and ฮธโj, agent (j)'s loss term is given by
L^(j),ฮธ_T := 1/2โซ_0^T ฮบ(t)(ฮณ(t)(รฃj-ฮธj_0-) - (ฮธ_t-ฮธj_0-))^2dt,
while her wealth at t is given by
X^(j),ฮธ_t = ฮธj_0-S_0 + โซ_0^t ฮธ_u dS_u - ฮป(ฮธ^โ_t + ฮธ^โ_t),
tโ[0,T].
Agent (j) seeks to maximize
Vj(ฮธ):= [X^(j),ฮธ_T - L^(j),ฮธ_T | _0]
over ฮธโj.
We also recall that ฯ=(ฯ_t)_tโ[0,T] is given by the martingale representation of the terminal dividend D by
D = [D] + โซ_0^T ฯ_u dB_u.
Assume that the stock has dynamics
dS_t = ฮผ_t dt + ฯ_t dB_t, S_T = D.
For 1โค jโค I, the trading strategy ฮธjโj is optimal for agent (j) in (<ref>).
Let 1โค jโค I be given. First, we show that ฮธjโj. Since ฮธj is _0-measurable, it is also -adapted. Propositionย <ref> proves that ฮธj is continuous and monotone, so it is càdlàg and of finite variation. Finally, ฮธj is bounded by a constant times sup_1โค iโค I|a_i|, where we have [(sup_1โค iโค I|a_i|)^2]<โ by the assumptions that [(a_i+ฮธ_i,0-)^2]<โ and ฮธ_i,0- is constant for all 1โค iโค I. We recall that by assumption, รฃ_i = a_i+ฮธ_i,0- is independent of the Brownian motion and D is measurable with respect to the Brownian filtration with [D^2]<โ. Thus, ฮธj is independent of ฯ, โซ_0^T(ฯ_tฮธj_t)^2dt <โ, โซ_0^T(ฮธj_t)^2dt<โ, and so, ฮธjโj.
Next, we proceed to show that ฮธj is optimal for agent (j). Since ฮธ, ฮธjโj, and by the definition of ฮผ in (<ref>), the integrals and conditional expectations in the calculations below are well-defined. For any ฮธโj, we have
Vj(ฮธ)-Vj(ฮธj)
= ฮป [_T+_T-_T-_T | _0]
+[โซ_0^T ฮบ(u)
( 1/2((ฮธj_u)^2-ฮธ_u^2)
+ (ฮธ_u-ฮธj_u)(ฮผ_u/ฮบ(u) +ฮณ(u)aj+ฮธj_0-))du | _0]
= ฮป [_T+_T-_T-_T | _0]
+[โซ_0^T ฮบ(u)(
(ฮธ_u-ฮธj_u)(ฮผ_u/ฮบ(u)+ฮณ(u)aj +ฮธj_0--ฮธj_u)-1/2(ฮธ_u-ฮธj_u)^2)du
| _0],
โคฮป [_T+_T-_T-_T
+1/ฮปโซ_0^T ฮบ(u)
(ฮธ_u-ฮธj_u)(ฮผ_u/ฮบ(u)+ฮณ(u)aj +ฮธj_0--ฮธj_u)du | _0]
= ฮป [_T+_T-_T-_T - 1/ฮปโซ_0^T (ฮธ_u-ฮธj_u)dYj_u | _0],
where the last equality is due to (<ref>). By integration by parts, the above is rewritten as
[ฮป_T+ฮป_T-ฮป_T-ฮป_T -Yj_T(ฮธ_T-ฮธj_T) + Yj_0(ฮธj_0--ฮธj_0-) + โซ_0-^T Yj_ud(ฮธ_u-ฮธj_u) | _0]
= [โซ_0-^T(ฮป-Yj_u)d_u
+โซ_0-^T(ฮป+Yj_u)d_u | _0]
+[
โซ_0-^T(Yj_u-ฮป)d_u
+โซ_0-^T(-ฮป-Yj_u)d_u | _0],
where we use Yj_T=0 and (ฮธj_0--ฮธj_0-)=0. The terms in (<ref>) are zero due to (<ref>), and the terms in (<ref>) are nonpositive due to |Yj|โคฮป in Theoremย <ref>. All in all, we conclude
Vj(ฮธ)-Vj(ฮธj) โค 0.
Therefore, ฮธjโj is the optimal trading strategy for agent (j).
We now work towards the proof of our equilibrium existence result, Theoremย <ref>. Lemmaย <ref> (below) proves a formula that will be used to prove market clearing.
The following holds:
โ_k=1^j ฮธk_ฯk
=
โ_k=1^jฮธk_0- + (I-j) โ_k=1^jฮณ(ฯk)Ak/I-k, 1โค j โค I-1,
n, j=I.
For 1โค kโค I-1, by the definition of ฮธk in (<ref>), we have
ฮธk_ฯk = ฮธk_0-+ฮผ_ฯk/ฮบ(ฯk) + ฮณ(ฯk)ak
=
ฮธk_0- + ฮณ(ฯk) Ak - โ_i=1^k-1ฮณ(ฯi)Ai/I-i,
where we use Lemmaย <ref> to obtain the second equality. We can easily see that (<ref>) holds for j=1 due to (<ref>) with k=1. Now, as an induction hypothesis, suppose that (<ref>) holds for 1โค j โค I-2. Then, by using (<ref>), we obtain
โ_k=1^j+1ฮธk_ฯk = ฮธj+1_ฯj+1 + โ_k=1^jฮธk_0- + (I-j) โ_k=1^jฮณ(ฯk)Ak/I-k
= โ_k=1^j+1ฮธk_0- + (I-j-1) โ_k=1^j+1ฮณ(ฯk)Ak/I-k.
Therefore, by induction, we conclude that (<ref>) holds for 1โค j โค I-1.
It only remains to prove (<ref>) for j=I. By (<ref>), we have
ฮธI_ฯI = ฮธI_0-+ฮผ_ฯI/ฮบ(ฯI) + ฮณ(ฯI)aI
=
ฮธI_0- + ฮณ(ฯI) AI - โ_k=1^I-2ฮณ(ฯk)Ak/I-k,
Finally, (<ref>) with j=I-1 and (<ref>) complete the proof:
โ_k=1^I ฮธk_ฯk = (โ_k=1^I-1ฮธk_0- + โ_k=1^I-1ฮณ(ฯk)Ak/I-k) +
(
ฮธI_0- + ฮณ(ฯI) AI - โ_k=1^I-2ฮณ(ฯk)Ak/I-k)
= โ_k=1^Iฮธk_0- + ฮณ(ฯI-1) AI-1 + ฮณ(ฯI) AI
=n,
where the last equality is due to ฯI-1=ฯI and AI-1 =-AI.
Finally, we have all of the pieces in place to prove the equilibrium existence result, Theoremย <ref>.
First, we notice from the definition of S in (<ref>) that S_T = [D]+โซ_0^T ฯ_u dB_u = D. Since the optimality of ฮธj is already checked in Propositionย <ref>, to complete the proof, it only remains to check the market clearing condition:
โ_k=1^I ฮธk_t =n, tโ [0,T].
Suppose that 0โค j โค I-1 and tโ[ฯj,ฯj+1). By (<ref>) and (<ref>), we have
ฮธk_t = ฮธk_0- - ฮณ(t) aโฅ j+1_ฮฃ/I-j - โ_i=1^j ฮณ(ฯi)Ai/I-i + ฮณ(t) ak, j+1โค k โค I.
Then, we obtain
โ_k=1^I ฮธk_t = โ_k=1^j ฮธk_ฯk + โ_k=j+1^I ฮธk_t
= โ_k=1^Iฮธk_0- + (I-j) โ_k=1^jฮณ(ฯk)Ak/I-k
+ ฮณ(t) โ_k=j+1^I ak - ฮณ(t) aโฅ j+1_ฮฃ - (I-j)โ_i=1^j ฮณ(ฯi)Ai/I-i
= โ_k=1^Iฮธk_0- = n,
where the first equality is due to (<ref>) and Theoremย <ref>, and the second equality is due to Lemmaย <ref> and (<ref>).
Finally, for tโ [ฯI-1,T], we have
โ_k=1^I ฮธk_t = โ_k=1^I ฮธk_ฯk= n,
where the first equality is due to (<ref>) and Theoremย <ref>, and the second equality is due to Lemmaย <ref>.
ยง ANALYZING EQUILIBRIUM OUTCOMES
How does the equilibrium stock drift ฮผ depend on the transaction cost ฮป? In Noh and Westonย <cit.>, the two-agent economy means that ฮผ does not vary with ฮป. In the ergodic-style equilibrium of Gonon et al.ย <cit.>, the equilibrium drift is impacted by ฮป even with a two-agent economy, because the two agents have different quadratic penalty coefficients. In our work, the presence of multiple agents gives rise to a dependence of ฮผ on ฮป, which is intricate and is investigated below.
First, we prove that the rank-based ordering aj only depends on the relative trading targets, (a_i)_1โค iโค I, not on ฮณ, ฮบ, ฮป.
Let 1โค kโค I and ฯk>0. Then,
* AkF(ฯk)=c_k ฮป, where c_k is a constant that only depends on (a_i)_1โค iโค I,
not on ฮณ, ฮบ, ฮป.
* ak only depends on (a_i)_1โค iโค I, not on ฮณ, ฮบ, or ฮป.
We prove this by backward induction. The cases k=I, I-1 are simple, so we start with k=I-2.
Recall that if ฯI-2>0, then ฯI-1>0 and aโฅ I-1_ฮฃ=max_1โค i โค I{ a_i } + min_1โค i โค I{ a_i }. Also,
aI-2={a_i: | a_i - aโฅ I-1_ฮฃ/2|, iโI-2},
which implies that aI-2, aโฅ I-1_ฮฃ, AI-2 only depend on (a_i)_1โค iโค I. The first statement also holds by
AI-2F(ฯI-2)=sgn(AI-2)(2/3)ฮป.
As the induction hypothesis, suppose that the statements hold for j+1โค k โค I and ฯj>0. Recall that (<ref>) and Theoremย <ref> imply | โ_k=j+1^I-2c_k/I-k|<1. Therefore, if ฮทj_i>0, then
( a_i- aโฅ j+1_ฮฃ/I-j)F(ฮทj_i) + ฮปโ_k=j+1^I-2c_k/I-k =
ฮป, if ( a_i- aโฅ j+1_ฮฃ/I-j)>0
-ฮป, if ( a_i- aโฅ j+1_ฮฃ/I-j)<0
Considering ฯj=max_iโj{ฮทj_i}, we can see that aj is chosen as the value of a_i for iโj that maximizes the expression:
1/1-โ_k=j+1^I-2c_k/I-k( a_i- aโฅ j+1_ฮฃ/I-j)^+ +1/1+โ_k=j+1^I-2c_k/I-k( a_i- aโฅ j+1_ฮฃ/I-j)^-,
which depends only on (a_i)_1โค iโค I,
and not on ฮณ, ฮบ, or ฮป. Finally,
I-j+1/I-jAj F(ฯj) + ฮปโ_k=j+1^I-2c_k/I-k =
ฮป, if Aj>0
-ฮป, if Aj<0
produces
c_j= I-j/I-j+1(sgn(Aj)-โ_k=j+1^I-2c_k/I-k),
and c_j also only depends on (a_i)_1โค iโค I.
In Section 4, we order (a_i)_1โค iโค I by the backward induction (<ref>)-(<ref>) and obtain (aj)_1โค jโค I. The resulting (aj)_1โค jโค I may not be unique: if {i_1, i_2 }โ{ฮทj_i: iโj} in (<ref>), then one can choose a_i_1 or a_i_2 as aj. However, the following proposition ensures that the equilibrium stock price is uniquely determined by our construction in Section 4, regardless of the possible different choices of the ordering (aj)_1โค jโค I.
By the backward construction (<ref>)-(<ref>), ฮผ defined in (<ref>) is uniquely determined.
Suppose that 2โค j โค I-1 and {i_1, i_2,...,i_m }= {ฮทj_i: iโj} for 1โค m โคj in the j-step of the backward induction in (<ref>)-(<ref>).
Without loss of generality, let aj=a_i_1 and ฯj>0. Then,
ฯj
=ฮทj_i, i โ{i_1, i_2,...,i_m }
>ฮทj_i, i โjโ{i_1, i_2,...,i_m }.
The above observation and Theoremย <ref> produce
| (a_i-aโฅ j+1_ฮฃ/I-j) F(ฯj) + โ_k=j+1^I-2Ak/I-k F(ฯk) |
= ฮป, i โ{i_1, i_2,...,i_m }
<ฮป, i โjโ{i_1, i_2,...,i_m }.
Using the identity aโฅ j_ฮฃ/I-j+1= aโฅ j+1_ฮฃ/I-j+ Aj/I-j, we obtain that for any a_i,
| (a_i-aโฅ j+1_ฮฃ/I-j) F(ฯj) + โ_k=j+1^I-2Ak/I-k F(ฯk) |
= | I-j+2/I-j+1(a_i-a_i + aโฅ j_ฮฃ/I-j+2) F(ฯj) + โ_k=j^I-2Ak/I-k F(ฯk) | .
We combine (<ref>), (<ref>), and Theoremย <ref> to conclude that
ฯj=ฯj-1
=ฮทj-1_i, i โ{i_2,...,i_m }
>ฮทj-1_i, i โj-1โ{i_2,...,i_m }.
By above, without loss of generality, we set aj-1=a_i_2. As an induction hypothesis, suppose that for 1โค lโคm-1,
aj-k+1 =a_i_k, 1โค k โค l,
ฯj =ฯj-1=โฏ = ฯj-l+1
=ฮทj-l+1_i, i โ{i_l,...,i_m }
>ฮทj-l+1_i, i โj-l+1โ{i_l,...,i_m }.
Suitable rearrangement of (<ref>) produces
โ_k=j-l+1^j Ak/I-k =aโฅ j-l+1_ฮฃ/I-j+l - aโฅ j+1_ฮฃ/I-j.
Using (<ref>) and (<ref>), we obtain that for any a_i,
| (a_i-aโฅ j+1_ฮฃ/I-j) F(ฯj) + โ_k=j+1^I-2Ak/I-k F(ฯk) |
= | I-j+l+1/I-j+l(a_i-a_i + aโฅ j-l+1_ฮฃ/I-j+l+1) F(ฯj) + โ_k=j-l+1^I-2Ak/I-k F(ฯk) | .
We combine (<ref>), (<ref>), and Theoremย <ref> to conclude that
ฯj =ฯj-1=โฏ = ฯj-l
=ฮทj-l_i, i โ{i_l+1,...,i_m }
>ฮทj-l_i, i โj-lโ{i_l+1,...,i_m },
and without loss of generality, we set aj-l=a_i_l+1.
Therefore, by induction, we conclude that the agents i_1,...,i_m stop trading at the same time ฯj, regardless of the rearrangement choice
{aj,aj-1,...,aj-m+1} of {a_i_1,a_i_2,...,a_i_m}.
To complete the proof of the uniqueness of ฮผ in (<ref>), it only remains to show that the rearrangement choice
{aj,aj-1,...,aj-m+1} of {a_i_1,a_i_2,...,a_i_m}
does not affect the value of ฯj-m. Indeed, this can be checked by the following observation:
โ_k=j-m+1^I-2Ak/I-k F(ฯk) = โ_k=j+1^I-2Ak/I-k F(ฯk) + ( โ_k=j-m+1^j Ak/I-k) F(ฯj)
= โ_k=j+1^I-2Ak/I-k F(ฯk) +
( โ_l=1^m a_i_l/I-j+m - m aโฅ j+1_ฮฃ/(I-j+m)(I-j)) F(ฯj),
where the first equality is due to ฯj=...=ฯj-m+1, and the second equality is due to (<ref>) with l=m. Obviously, the above expression does not depend on the rearrangement choice.
We are primarily interested in how ฮผ varies with the choice of ฮป. In our model, ฮผ depends on ฮป in a complex way that does not admit a succinct description. The dependence of ฮผ on ฮป is best captured through the derivative of ฮผ, as described in Corollaryย <ref> below. Corollaryย <ref> shows that the formula for the derivative of ฮผ/ฮบ does not change with ฮป, but the time intervals on which the formulas apply do change with ฮป.
All equilibrium outputs depend on ฮป, but ฮป has been fixed up until now. In what follows, we append (ฮป) to our notation when we need to indicate that the market in question has a transaction cost proportion ฮป; e.g., ฮผ(ฮป), ฯj(ฮป), etc.
Let ฮป>0 and suppose that ฮณ is differentiable on [0,T]. Then,
d/dt[ฮผ_t(ฮป)/ฮบ(t)]
=
-ฮณ'(t)aโฅ j+1_ฮฃ/I-j, tโ(ฯj(ฮป),ฯj+1(ฮป)), 0โค jโค I-2,
-ฮณ'(t)aโฅ I-1_ฮฃ/2, tโ(ฯI-1(ฮป),T].
In particular, suppose that the traders are TWAP traders with ฮณ^TWAP(t) := t/T for tโ[0,T]. Then for ฮป, ฮป' >0 and 0โค j โค I-1, we have that d/dt[ฮผ_t(ฮป)/ฮบ(t)] for tโ(ฯj(ฮป), ฯj+1(ฮป)) is constant and agrees with d/dt[ฮผ_t(ฮป')/ฮบ(t)] for tโ(ฯj(ฮป'), ฯj+1(ฮป')).
By Propositionย <ref>, the rank-based ordering is determined independently of the choice of the transaction cost proportion. Thus, taking derivatives using the definition of ฮผ in (<ref>) yields the desired result.
For TWAP traders, Corollary <ref> shows that the two plots of ฮผ have the same slope on the time intervals corresponding to when the same number of agents are trading. However, the stop-trade times, and thus the number of agents trading, depend on ฮป. Figure <ref> (below) illustrates Corollary <ref> by plotting two equilibrium drifts with different ฮป parameters. We consider the TWAP trading case with ฮณ(t)=ฮณ^TWAP(t)= t/T, tโ[0,T] and have equilibrium inputs that are the same as in Sectionย <ref> with I=20, T=1, ฮบ(t)=0.1, tโ[0,T], and trading targets given by (<ref>). The two graphs of equilibrium drift ฮผ correspond to models with ฮป=0.1 and ฮป=0.2.
By inspecting the graphs plotted in Figureย <ref>, we notice that the drifts have the same slope at time T. (Note: the time scale of Figureย <ref> changes at time 0.9, and so the graph of ฮผ with ฮป=0.2 appears to change slope at time 0.9, though the slope is actually constant.) Tracing back further in time, ฮผ with ฮป=0.1 and ฮป=0.2 have the same slopes on the intervals corresponding to the same number of agents trading.
Higher levels of transaction costs lead to earlier (lower) stop-trade times, which lead to a wider range of possible ฮผ values for higher ฮป. A similar phenomenon appeared in Gonon et al.ย <cit.>. In both this work and Gonon et al.ย <cit.>, higher ฮป leads to less trading, which results in changes in the equilibrium drift. In Gonon et al.ย <cit.>, the equilibrium drift also has a wider range of values and activity as ฮป increases, due to an increase in the range of a doubly reflected Brownian motion state process that drives the drift.
The impact of transaction costs can also be seen through the equilibrium stock price. Interestingly, the impact is not monotone but instead is piecewise linear. Figureย <ref> (below) plots the impact of ฮป on S_0. The equilibrium inputs are the same as in Sectionย <ref> with I=20, T=1, ฮบ(t)=0.1 and ฮณ(t) = ฮณ^TWAP(t)=t/T, tโ[0,1], and trading targets given by (<ref>). We vary ฮป in the range (0, 2].
Why does the plot of S_0 in Figureย <ref> have a piecewise linear shape?
Corollaryย <ref> (below) provides a formula for S_0 with insights into Figureย <ref>'s shape.
We have
S_0 = [D] +โซ_0^T ฮบ(t)ฮณ(t)aโฅ 1_ฮฃ/I dt
- โ_j=1^I-2AjF(ฯj)/I-j,
and the map ฮปโฆ S_0 (ฮป) is piecewise linear.
We observe that for 0โค jโค I-2 and tโ[0,T],
ฮณ(t) aโฅ j+1_ฮฃ/I-j + โ_k=1^j ฮณ(ฯk)Ak/I-k =ฮณ(t) ( aโฅ j+1_ฮฃ/I-j + โ_k=1^j Ak/I-k) -โ_k=1^jAk/I-k(ฮณ(t)-ฮณ(ฯk))
=ฮณ(t)aโฅ 1_ฮฃ/I
-โ_k=1^jAk/I-k(ฮณ(t)-ฮณ(ฯk)),
where the second equality above is due to (<ref>) with l=j.
Using the definition of ฮผ in (<ref>) and the identity (<ref>), we have
ฮผ_t = ฮบ(t)(-ฮณ(t)aโฅ 1_ฮฃ/I
+โ_k=1^jโง (I-2)Ak/I-k(ฮณ(t)-ฮณ(ฯk))),
for tโ[0,T], where either tโ[ฯj,ฯj+1), for some 0โค jโค I-2, or tโ[ฯI-1,T] for j=I-1.
Therefore, the definition of S in (<ref>) and the expression of ฮผ in (<ref>) produce
S_0 - [D] = -โซ_0^T ฮผ_t dt
= - โ_j=0^I-2โซ_ฯj^ฯj+1ฮผ_t dt - โซ_ฯI-1^Tฮผ_t dt
= -โ_j=0^I-2โซ_ฯj^ฯj+1ฮบ(t)(-ฮณ(t)aโฅ 1_ฮฃ/I
+โ_k=1^jAk/I-k(ฮณ(t)-ฮณ(ฯk)))dt
- โซ_ฯI-1^T ฮบ(t)(-ฮณ(t)aโฅ 1_ฮฃ/I
+โ_k=1^I-2Ak/I-k(ฮณ(t)-ฮณ(ฯk)))dt
= โซ_0^T ฮบ(t)ฮณ(t)aโฅ 1_ฮฃ/Idt
- โ_k=1^I-2Ak/I-kโซ_ฯk^T ฮบ(t)(ฮณ(t)-ฮณ(ฯk))dt,
where we interchange the order of the summations to obtain the last equality.
Finally, the expression above and Propositionย <ref> imply that the map ฮปโฆ S_0 (ฮป) is piecewise linear.
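As a numerical illustration of the corollary, the following sketch evaluates S_0-[D] on a grid of lambda values for the example inputs of the illustrative-example section. It assumes the backward-induction code above has been wrapped into a hypothetical function build_equilibrium(lam_value) that reruns the construction with the given transaction cost and returns the dictionaries tau, a_rank, a_sum, and A; the resulting curve should reproduce the piecewise-linear shape of the corresponding figure.

```python
def S0_minus_ED(lam_value):
    """S_0 - E[D] for a given transaction cost proportion, using the corollary's formula.
    build_equilibrium is an assumed wrapper around the backward-induction sketch above."""
    tau, a_rank, a_sum, A = build_equilibrium(lam_value)
    u = np.linspace(0.0, T, 2000)
    base = np.trapz(kappa(u) * gamma(u), u) * a.sum() / I        # a^{>=1}_Sigma = sum_i a_i
    corr = sum(A[j] * F(tau[j]) / (I - j) for j in range(1, I - 1))
    return base - corr

lams = np.linspace(0.01, 2.0, 60)
curve = [S0_minus_ED(l) for l in lams]   # piecewise linear in lambda, as stated in the corollary
```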
|
http://arxiv.org/abs/2306.05311v1
|
20230608160004
|
Predictive Modeling of Equine Activity Budgets Using a 3D Skeleton Reconstructed from Surveillance Recordings
|
[
"Ernest Pokropek",
"Sofia Broomรฉ",
"Pia Haubro Andersen",
"Hedvig Kjellstrรถm"
] |
cs.CV
|
[
"cs.CV"
] |
Predictive Modeling of Equine Activity Budgets Using a 3D Skeleton Reconstructed from Surveillance Recordings
Ernest Pokropek^1
Sofia Broomé^1,2    Pia Haubro Andersen^3    Hedvig Kjellström^1,3
^1 KTH, Sweden pokropek,sbroome,[email protected]
^2 Therapanacea, France
^3 SLU, Sweden [email protected]
July 31, 2023
==========================================================================================================================================================================================================================
In this work, we present a pipeline to reconstruct the 3D pose of a horse from 4 simultaneous surveillance camera recordings. Our environment poses interesting challenges to tackle, such as the limited field of view of the cameras and a relatively closed and small environment. The pipeline consists of training a 2D markerless pose estimation model to work on every viewpoint, then applying it to the videos and performing triangulation. We present a numerical evaluation of the results (error analysis), as well as show the utility of the obtained poses in downstream tasks of selected behavior prediction. Our analysis of the predictive model for equine behavior showed a bias towards pain-induced horses, which aligns with our understanding of how behavior varies across painful and healthy subjects.
ยง INTRODUCTION
Animal welfare science is increasingly focusing on not only assessing the quality of the environment in which animals are kept but also evaluating their physical and psychological well-being. In this context, behavior is a valuable indicator of welfare and disease <cit.>, providing insight into the animal's subjective state. One way to assess behavior is through the evaluation of the amount of time spent engaging in different activities, also known as a time budget. Changes in a time budget can help identify painful conditions and evaluate the effectiveness of management interventions to improve equine welfare. However, measuring time budgets requires detailed and lengthy surveillance, which is usually done through direct observation by human observers. This method is prone to bias, fatigue, and limited accuracy and temporal resolution. Additionally, direct observation and video analysis are resource-intensive, which restricts the observation periods and the number of individuals studied. Thus, for time budget analysis to become a reliable and useful tool for welfare assessment, automated recognition of behavior is necessary <cit.>.
Pose estimation employs machine learning models to estimate the positions of points/areas of interest of a given agent (or multiple agents) in video data, for them to be later connected creating a pose. This can be performed to produce either a 2-dimensional (2D) or 3-dimensional (3D) result, and is most commonly used with human agents <cit.>. When it comes to the latter, the methodology usually consists of first performing 2D pose estimation from various angles and then combining the results in a process called lifting, which commonly involves various triangulation techniques. Pose reconstruction is especially useful in areas of augmented reality <cit.>, healthcare <cit.>, motion analysis <cit.> and many more, vastly reducing the necessity of computational and financial resources and outperforming previous detection methods <cit.>. Furthermore, it showed a distinct improvement over commonly used methodologies for tasks such as object detection <cit.> or image classification <cit.>.
In this work, we present a pipeline for reconstructing the 3D pose of a whole horse, represented by 28 keypoints, in a challenging environment. We show the utilization of the obtained skeleton in a downstream task of behavior/activity prediction. Our pipeline addresses the 3D pose estimation task by solving two major challenges: (1) 2D keypoint estimation and (2) lifting the 2D keypoints into 3 dimensions. The first part was implemented using the DeepLabCut <cit.> interface, where we used pretrained ResNet50 weights and hand-labelled data to fit a single keypoint estimation model for all viewpoints. For the latter, we used functions provided by the library <cit.> and performed RANSAC triangulation on keypoints estimated on videos from 4 viewpoints.
ยง RELATED WORK
3D pose reconstruction has been widely used for humans and it also gained popularity with animal data. Nath et al. provide an exhaustive description on marker-less pose estimation for various animals and they present an example of 3D reconstruction for cheetahs in the wild from 6 camera recordings using the DeepLabCut interface <cit.>.
One potential use case of such reconstruction is a pipeline combining object detection, 2D pose estimation, and 3D reconstruction to generate an animal pose that is later used for wildlife animation <cit.>.
Unsurprisingly, the scientific community has noticed many problems regarding the unpredictability of animal data and the resources needed for triangulation.
To address these concerns,
a framework called LiftPose3D was introduced, which estimates the 3D pose of an animal from a single camera view via deep neural networks <cit.>.
animal pose estimation have been introduced, where a domain adaptation approach is used to reconstruct the skeleton, achieving results as good as for the more explored
human pose estimation <cit.>.
DeepFly3D <cit.> uses deep neural networks to estimate the 3D pose of tethered Drosophila melanogaster. It eliminates the need for manual calibration, incorporates error detection and correction, and improves performance through active learning. DeepFly3D enables detailed automated behavioral measurements for various biological applications.
Action segmentation has also been employed in animal behavior analysis. SIPEC <cit.>, a novel deep learning architecture, enables the classification of individual and social animal behavior in complex environments directly from raw video frames. It outperforms existing methods and successfully recognizes behaviors of freely moving mice and socially interacting non-human primates using simple mono-vision cameras in home-cage setups.
ยง DATA AND METHODS
The workflow of this study is illustrated in Figure <ref>. Data for veterinary research has been used; it consists of approximately 45 minute recordings (approximately 20 frames per second each) of horses in rectangular enclosures, filmed simultaneously from 4 viewpoints placed in the corners, above the horse <cit.>. Due to noise in the data set, we used 13 subjects with reliable videos, resulting in 52 videos being analyzed in this study. Some of the subjects (9) were exposed to pain, which affected their behavior. Visualization, together with examples from the real dataset, can be seen in Figure <ref>. This data set is quite challenging - each camera has a limited field of view, with parts of the animal not being visible from certain angles.
Videos have been undistorted (camera distortions have been removed) as well as downsized to a resolution of 336x190. The horses participated in an orthopedic pain study, and videos were recorded both before the horses were in pain and during pain. All videos were manually annotated for all observable state and point events, according to an ethogram <cit.>.
Activity budgets were available for basic behaviors for horses in no-pain and pain states.
ยง.ยง Automated keypoint estimation
In this study, 28 keypoints were selected to represent the pose of the horse in a simplistic but informative manner. First, one has to manually annotate a portion of the frames; in the case of this study, this was performed using the open-source marker-less pose estimation package DeepLabCut, designed for animal data <cit.>, in its Python implementation <cit.>. In total, we extracted 1550 frames from the videos: some sampled randomly, some sampled via k-means to maximize the generalization capabilities of the selected frames, and lastly some sampled as 'outlier' frames. Once the
initial model was trained, we selected the frames on which the model performed poorly and labelled them, later merging them into the training set, as proposed by the DeepLabCut method. The model with ResNet50 weights was trained for 500,000 iterations, until convergence.
It was then used to predict the keypoints for the whole length of the videos, and the predictions were passed through an ARIMA filter <cit.>. Finally, we dropped the points with probability smaller than 0.6, in order to remove unsure predictions.
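A minimal sketch of this confidence-filtering step is given below, assuming the 2D predictions have already been exported to a DeepLabCut-style table with (x, y, likelihood) columns per keypoint; the file name and the exact column layout are illustrative and may differ between DeepLabCut versions.

```python
import numpy as np
import pandas as pd

# Load per-frame predictions for one video/viewpoint (path and layout are illustrative).
# DeepLabCut-style output: column MultiIndex (scorer, bodypart, coord), coord in {x, y, likelihood}.
preds = pd.read_hdf("horse01_cam1_filtered.h5")
scorer = preds.columns.get_level_values(0)[0]
bodyparts = preds[scorer].columns.get_level_values(0).unique()

MIN_LIKELIHOOD = 0.6   # predictions below this confidence are treated as missing

for bp in bodyparts:
    bad = preds[(scorer, bp, "likelihood")] < MIN_LIKELIHOOD
    preds.loc[bad, (scorer, bp, "x")] = np.nan
    preds.loc[bad, (scorer, bp, "y")] = np.nan   # NaNs are ignored by the downstream triangulation

print("rough fraction of entries kept:", float((~preds.isna()).mean().mean()))
```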
ยง.ยง 3D pose reconstruction
With access to multiple cameras placed around the agent and knowledge of their calibration parameters, it is possible to lift
the set of 2D coordinates into 3D.
In this study, this was done via the RANSAC triangulation algorithm using the Anipose package for Python <cit.>, which provides a triangulation interface for many viewpoints and is directly compatible with DeepLabCut. After receiving the 2D keypoint estimates for all videos from the previous step, we resize these coordinates to correspond to the original video size (since the calibration parameters correspond to that resolution). Then, we perform the triangulation while minimizing the reprojection error (we project the 3D coordinates to 2D, calculate the projection error, and repeat that for all viewpoints). Additional filtering (a median filter) of the final pose is added to smooth out the noise, and for the final poses we drop the individual keypoints for which the 3D reprojection error is larger than 200 pixels, which we found to be a reasonable threshold for our data that still allows for some level of uncertainty. Not removing these keypoints would in the best case introduce unnecessary noise for the downstream task of behavior prediction, and in the worst case potentially bias the downstream prediction model to identify subjects based on this noise. Usually, a high reprojection error was caused by the keypoint not being properly visible from some of the viewpoints, which could indicate the animal's placement within the environment, which varied across the subjects.
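For illustration, the following self-contained sketch conveys the idea behind the lifting step: a direct linear transform (DLT) triangulation over the available views followed by the 200-pixel reprojection-error check. In the study itself this is handled by the Anipose RANSAC triangulation, so the code below is a simplification; the 3x4 projection matrices are assumed to come from the camera calibration.

```python
import numpy as np

def triangulate_point(P_list, xy_list):
    """DLT triangulation of one keypoint from at least two views.
    P_list: list of 3x4 projection matrices; xy_list: matching list of (x, y) pixels or None if missing."""
    rows = []
    for P, xy in zip(P_list, xy_list):
        if xy is None or np.any(np.isnan(xy)):
            continue
        x, y = xy
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    if len(rows) < 4:                      # need at least two valid views
        return None
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]

def reprojection_error(P_list, xy_list, X):
    errs = []
    for P, xy in zip(P_list, xy_list):
        if xy is None or np.any(np.isnan(xy)):
            continue
        proj = P @ np.append(X, 1.0)
        errs.append(np.linalg.norm(proj[:2] / proj[2] - np.asarray(xy)))
    return float(np.mean(errs))

MAX_REPROJ_PX = 200.0   # threshold used in the study (on the original 2688x1520 frames)

def lift_keypoint(P_list, xy_list):
    X = triangulate_point(P_list, xy_list)
    if X is None or reprojection_error(P_list, xy_list, X) > MAX_REPROJ_PX:
        return None       # dropped, later treated as a missing value
    return X
```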
ยง.ยง Downstream prediction tasks
To further evaluate the quality of our 3D pose estimation, we have used the available annotations of horses' activity budgets and performed a binary classification for them, using only skeleton data. Because movement, eating, and standing are essential behaviors in an activity time budget and have been found to exhibit varying durations in horses experiencing pain versus those that are not, these were chosen for analysis based on an ethogram <cit.>. Data from 13 horses has been used, with videos of
approximately 45 minutes each. Although, given the dense nature of the behavior annotations, action segmentation could have been used, we opted for action recognition frameworks given their popularity and relatively lower computational cost, as the presented pipeline should be able to operate in real time.
ยง.ยง.ยง Data preparation
For each annotated interval in which the behavior occurred, we randomly selected a 200-frame segment of it. Since we perform this experiment in a binary fashion, for each such segment we also picked another one (of the same length, from the same subject) in which the behavior was not occurring. For each horse, if there were more than 5 such pairs, we randomly selected 1 segment pair for the validation set and 3 pairs for the test set (without replacement). The remaining segments were used for training. This resulted in 986 annotated segments (200 frames each) used for training, 58 for validation, and 174 for testing. Each segment corresponds to one label to predict.
ยง.ยง.ยง Training and evaluation
We performed two main experiments: (1) training a Random Forest Classifier (RFC) with plain 3D coordinates (using 100 tree estimators), and (2) training a Spatial Temporal Graph Convolutional Network (ST-GCNN), which leverages temporal information and has been successfully applied to human skeleton data for activity recognition <cit.>. In order to produce one prediction per each 200 frame segment, validation for the RFC was conducted in a voting manner. Prediction for each frame was produced, and then they were averaged across all frames. The resulting value was then rounded.
Each ST-GCNN was trained for 50 epochs, using the parameters from its original implementation. Both models were trained to perform binary classification for the selected 3 activities (one model each per behavior). For both experiments, the missing values (due to high reprojection error) were replaced with zeros.
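A sketch of the frame-level random-forest baseline and its segment-level voting evaluation follows. The arrays below are random stand-ins with the shapes used in this study (986/174 segments of 200 frames, 28 keypoints x 3 coordinates); in practice they would hold the triangulated poses (missing values set to zero) and the behavior labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in data with the study's shapes; replace with real poses and labels in practice.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(986, 200, 84)), rng.integers(0, 2, 986)
X_test,  y_test  = rng.normal(size=(174, 200, 84)), rng.integers(0, 2, 174)

def to_frames(X, y):
    # one training example per frame, each inheriting its segment's label
    n, t, d = X.shape
    return X.reshape(n * t, d), np.repeat(y, t)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(*to_frames(X_train, y_train))

# voting: average the per-frame predictions over each 200-frame segment, then round
frame_preds = clf.predict(X_test.reshape(-1, X_test.shape[-1]))
segment_preds = np.rint(frame_preds.reshape(len(y_test), 200).mean(axis=1)).astype(int)
print("segment accuracy:", (segment_preds == y_test).mean())
```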
ยง RESULTS
In total, we estimated over 700,000 poses with mean reprojection error of 89.90ยฑ45.98px (on an original video size of 2688x1520), with median reprojection error being slightly lower (84.60px). On average, each pose consists of 18ยฑ3 3D keypoints (out of total 28). Example poses are shown in Figure <ref> and more detailed results (per
keypoint group) are presented in Table <ref>. The errors per
keypoint and the missing rate per keypoint tend to fluctuate around similar magnitudes.
Based only on the 3D coordinates, the RFC was able to correctly classify eating behavior, while struggling with movement and standing, as there was no information about the dynamics of the skeleton.
When the temporal information is utilized, the predictive capabilities of the model using the animal's pose are considerably stronger.
This can be seen
in Figure <ref>, with both precision and recall above 70% for ST-GCNN for all behaviors. As expected, eating is the easiest to predict from static data, as it is usually represented by the horse lowering its head.
We conducted an experiment to test for predictive bias towards pain behavior in horses. The original training data had imbalances in terms of pain subjects, with some segments having nearly triple the instances, so we balanced it by downsampling the majority class. We trained ST-GCNN on this data and tested against the unchanged test set. Achieving good metrics was difficult due to the significant reduction in the training dataset size.
Table <ref> presents the results of this experiment. Effectively, we have investigated the prediction distribution of models that were trained on
an equal number of segments from healthy and pain-induced subjects.
The smallest difference in those distributions is attributed to eating; however, given the small amount of data, we were not able to train this classifier to achieve satisfactory performance. Nevertheless, when performing better than chance, it exhibits the behaviors expected for these subjects. In total, there is a bias for false positives to be predicted for the pain-induced group. For more details about how this
compares to the distribution of actual behaviors, see Section <ref>.
ยง DISCUSSION
In this work, we presented our pipeline for reconstructing the 3D pose of a horse in challenging environments. Apart from the investigation of the errors, we presented a downstream task as an example of how these coordinates may become useful for predictive challenges. However, there are some improvements that could have been made to this work: in order for the triangulation to work as intended, the videos from different viewpoints need to be well synchronized. Unfortunately, for our data this was not always the case, and the misalignment occurred in a non-systematic manner, hence fixing it became an endeavour on its own. Also, given the many missing 3D coordinates (due to various reasons), it may make sense to interpolate them based on preceding and following values for a more continuous and stable pose reconstruction.
The behavior prediction could also be improved. The amount of data used is small compared to human pose action recognition datasets; with more data, the performance metrics would probably improve significantly, given the good performance on the current dataset.
It should further be noted that this environment is incredibly challenging: most of the time, some
keypoints on the horse are occluded. Usually, the best-case scenario is having a point visible from 3 cameras, and even that rarely happens; the horse would need to place itself in the middle of the box, ideally with its head up.
The proposed method for automated determination of an activity budget has demonstrated promising potential, as evidenced by the results shown in Figure <ref> and Table <ref>. All selected behaviors exhibited the same variation as earlier detected by manual observation and labeling, as reported by Pålsson. Interestingly, the behavior "eating" revealed that the painful horses had more eating time than the sound horses, which is controversial, since painful horses are expected not to eat <cit.>. However, the Pålsson study also indicated that painful horses had more eating behavior, which was interpreted as a comforting behavior. Further research is needed to differentiate between "eating" and the head position "down". The changes in "movement" indicate that painful horses tend to be more restless, which aligns with previous findings by Ashley and Hausberger. Moreover, the study revealed that painful horses are less inclined to lie down, as shown in Table <ref>.
It has to be noted that the prediction of behavior is not perfect and shows notable errors, which could undermine the statistical significance of the results presented in Table <ref>. A direction for future studies would be to analyse the bias of the predictive model towards pain assessment when the training data are balanced and yield good validation results.
Despite the study's limitations, our method shows great potential for the automated recognition of
horse behaviors and their time budgets.
ieee_fullname
|
http://arxiv.org/abs/2306.04093v2
|
20230607013006
|
Subnetwork Estimation for Spatial Autoregressive Models in Large-scale Networks
|
[
"Xuetong Li",
"Feifei Wang",
"Wei Lan",
"Hansheng Wang"
] |
stat.CO
|
[
"stat.CO"
] |
|
http://arxiv.org/abs/2306.09126v1
|
20230615133714
|
STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events
|
[
"Kazuki Shimada",
"Archontis Politis",
"Parthasaarathy Sudarsanam",
"Daniel Krause",
"Kengo Uchida",
"Sharath Adavanne",
"Aapo Hakala",
"Yuichiro Koyama",
"Naoya Takahashi",
"Shusuke Takahashi",
"Tuomas Virtanen",
"Yuki Mitsufuji"
] |
cs.SD
|
[
"cs.SD",
"cs.CV",
"cs.MM",
"eess.AS",
"eess.IV"
] |
STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events
Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji
July 31, 2023
=============================================
While direction of arrivalย (DOA) of sound events is generally estimated from multichannel audio data recorded in a microphone array, sound events usually derive from visually perceptible source objects, e.g., sounds of footsteps come from the feet of a walker.
This paper proposes an audio-visual sound event localization and detectionย (SELD) task, which uses multichannel audio and video information to estimate the temporal activation and DOA of target sound events.
Audio-visual SELD systems can detect and localize sound events using signals from a microphone array and audio-visual correspondence.
We also introduce an audio-visual dataset, Sony-TAu Realistic Spatial Soundscapes 2023ย (STARSS23), which consists of multichannel audio data recorded with a microphone array, video data, and spatiotemporal annotation of sound events.
Sound scenes in STARSS23 are recorded with instructions, which guide recording participants to ensure adequate activity and occurrences of sound events.
STARSS23 also serves human-annotated temporal activation labels and human-confirmed DOA labels, which are based on tracking results of a motion capture system.
Our benchmark results show that the audio-visual SELD system achieves lower localization error than the audio-only system.
The data is available at <https://zenodo.org/record/7880637>.
ยง INTRODUCTION
Given multichannel audio input from a microphone array, a sound event localization and detectionย (SELD) systemย <cit.> outputs a temporal activation track for each of the target sound classes along with one or more corresponding spatial trajectories, e.g., the direction of arrivalย (DOA) around the microphone array, when the track indicates activity.
Such a spatiotemporal characterization of sound scenes can be used in a wide range of machine cognition tasks, such as inference on the type of environment, tracking of specific types of sound sources, acoustic monitoring, scene visualization systems, and smart-home applications.
Recently neural networkย (NN)-based SELD systemsย <cit.> show high localization and detection performance.
These systems need data with activity and DOA labels of target sound events for training and evaluation.
Because annotation of DOA labels is challenging in real sound scene recordings, most SELD datasetsย <cit.> consist of synthetic audio data, which are made by convolution of monaural sound event signals and multichannel impulse response signals with DOA labels.
The Sony-TAu Realistic Spatial Soundscapes 2022 dataset (STARSS22)ย <cit.> tackles real sound scene recordings with DOA labels, which is based on tracking results of a motion captureย (mocap) system.
Currently, STARSS22 is the only SELD dataset with real sound scenes, including overlapping sound events, moving source events, and natural distribution of temporal activation and DOA, e.g., sounds of footsteps are relatively short and heard from a lower elevation.
While this dataset is suitable for evaluating audio-only SELD systems in natural sound scenes, the dataset does not include other modality input, e.g., video data.
Sound events in real sound scenes originate from their sound source objects, e.g., speech comes from a person's mouth, sounds of footsteps are produced from the feet of a walker, and a knocking sound originates from a door.
Such sound source object information is usually apparent in the visual modality.
Video data aligned with audio recordings in SELD tasks have the potential to mitigate difficulties and ambiguities of the spatiotemporal characterization of the sound scene as audio-visual data improves source separationย <cit.> and speech recognitionย <cit.>.
Visible people in the video data can provide candidate positions of human body-related sounds.
When a person walks in the video, tapping sounds are easily recognized as footsteps.
In this context, we propose audio-visual SELD task, which uses audio and video data to estimate spatiotemporal characterization of sound events.
The left side of Figureย <ref> shows an audio-visual SELD system, which takes multichannel audio recordings and video aligned with the audio recordings and outputs activity and DOA of target sound events in each frame.
To tackle audio-visual SELD tasks, we need an audio-visual dataset consisting of multichannel audio, video, and activity and DOA labels of sound events per frame.
There is another interest in audio-visual sound source localization taskย <cit.>, which takes monaural audio and images and estimates where the audio is coming from in the images.
They focus on learning audio-visual semantic correspondence, not estimating physical DOA around a microphone array.
While the audio-visual sound source localization datasetsย <cit.> are adequate to train NNs with audio-visual correspondence and evaluate localization performance in images, they are typically monophonic without spatial labels for the target sound events.
Several datasets serve multichannel audio and video data with real sound scenesย <cit.>, whereas only a few audio-visual datasets have multichannel audio data with DOA labels of speakers around a microphone arrayย <cit.>.
While these datasets help to evaluate audio-visual speaker DOA estimationย (DOAE) tasks, the evaluation focuses only on speech, not sound events such as musical instruments or footsteps.
To tackle audio-visual SELD tasks, we introduce an audio-visual dataset, the Sony-TAu Realistic Spatial Soundscapes 2023 (STARSS23), consisting of multichannel audio, video, and spatiotemporal annotations of sound events, i.e., activity and DOA labels per each frame.
The right part of Figure <ref> shows a still frame of a STARSS23 video, made from the 360° video, a spatial acoustic power map generated from the microphone array, and sound event labels.
As a development set, the dataset contains over 7 hours of recordings with 57 participants in 16 rooms, together with the spatiotemporal annotations.
The participants are guided by generic instructions and suggested activities in the recording to induce adequate occurrences of the sound events and diversity of content.
There are 13 classes of target sound events, such as speech, musical instruments, and footsteps.
To reveal whether the dataset has a natural distribution of sound events, we analyze the dataset in terms of frame coverage, overlap, and DOA distributions per each sound event class.
We develop and test an audio-visual SELD system with STARSS23.
To investigate the effects of audio-visual input, we present overall localization and detection performance and per-class results.
ยง RELATED WORK
Sound event localization and detection
SELDnetย <cit.> is the first SELD method, which uses convolutional recurrent neural networkย (CRNN) to output activity and DOA separately.
An activity-coupled Cartesian DOAย (ACCDOA)ย <cit.> vector, which embeds sound event activity information to the length of a Cartesian DOA vector, enables us to solve SELD tasks with a single output.
While these two methods handle overlaps of different classes, they cannot resolve overlaps within the same class.
Multi-ACCDOA <cit.> is an extension of ACCDOA that allows models to output overlaps from the same class.
To handle that case effectively, Multi-ACCDOA incorporates auxiliary duplicating permutation invariant trainingย (ADPIT)ย <cit.>.
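As a rough illustration of the ACCDOA representation (a sketch for intuition only, not the implementation of the cited works; the threshold value and function names are our own assumptions), the per-class activity and DOA can be decoded from an output vector by using its norm as activity and its direction as DOA:

import numpy as np

def decode_accdoa(accdoa, threshold=0.5):
    # accdoa: (num_classes, 3) array of Cartesian activity-coupled DOA vectors
    detections = []
    for cls, vec in enumerate(accdoa):
        activity = np.linalg.norm(vec)      # vector length encodes activity
        if activity > threshold:            # class considered active
            x, y, z = vec / activity        # unit vector encodes DOA
            azimuth = np.degrees(np.arctan2(y, x))
            elevation = np.degrees(np.arcsin(np.clip(z, -1.0, 1.0)))
            detections.append((cls, azimuth, elevation))
    return detections

# Example: class 0 active towards azimuth 90 deg, elevation 0 deg
accdoa = np.zeros((13, 3))
accdoa[0] = [0.0, 0.9, 0.0]
print(decode_accdoa(accdoa))  # [(0, 90.0, 0.0)]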
There are other SELD works on frameworks <cit.>, network architectures <cit.>, audio features <cit.>, and data augmentation <cit.>.
To train or evaluate SELD methods, we need multichannel audio data with temporal activation and DOA labels.
In synthetic multichannel audio datasetsย <cit.>, DOA labels can be easily annotated because the data are made from multichannel impulse response signals with DOA labels.
While the SECL-UMons and AVECL-UMons datasets <cit.> tackled spatial recording with DOA labels, they are limited to isolated single-event recordings or combinations of two simultaneous events, ignoring the spatiotemporal relations between events in a natural scene.
STARSS22ย <cit.> tackled real spatial recording with temporal activation and DOA labels of each target class in natural scenes.
Participants improvise natural scenes with a mocap system, whose tracking results are used for DOA labels.
However, the dataset does not release video data.
Therefore, it cannot be used to evaluate audio-visual SELD systems.
Audio-visual sound source localization
There is broad interest in audio-visual sound source localization tasksย <cit.>.
Chen et al. have tackled unsupervised learning to localize sound sources in video and evaluated the method on the VGG-SS datasetย <cit.>, which annotates bounding boxes of sound sources for sound and video pairs.
The AVSBench datasetย <cit.> serves pixel-level audio-visual segmentation maps for videos over 23 class categories.
Because the datasets do not have multichannel audio recordings, they cannot be applied to evaluating SELD tasks.
Audio-visual dataset with multichannel audio
Several audio-visual datasets include multichannel audio dataย <cit.>.
As many datasets are used for self-supervised learningย <cit.> or non-localization tasksย <cit.>, there are no DOA labels.
The YouTube-360 dataset <cit.> provides first-order ambisonics (FOA) signals and 360° video data without any labels for self-supervised learning.
A few audio-visual datasets are collected for audio-visual DOAE tasksย <cit.>.
Qian et al. proposed an audio-visual DOAE system, which takes spectrograms and phase features
from the audio input and face-bounding boxes from video input to estimate the DOA of each speakerย <cit.>.
The system was evaluated with the Audio-Visual Robotic Interfaceย (AVRI) dataset, recorded using Kinect and a four-channel Respeaker array, along with activity and DOA labels.
The audio-visual features are helpful for DOAE, and the dataset supports the evaluation of audio-visual speaker DOAE.
However, the dataset is only for speech, not various sound events such as clapping and knocks.
We summarize the comparison of STARSS23 with other real sound scene datasets in Table <ref>.
ยง STARSS23 DATASET
ยง.ยง Overview
STARSS23 contains multichannel audio and video recordings of sound scenes in various rooms and environments, together with temporal and spatial annotations of sound events belonging to a set of target classes.
The dataset enables us to train and evaluate audio-visual SELD systems, which localize and detect sound events from multichannel audio and visual information.
STARSS23 is available in a public research data repository[<https://zenodo.org/record/7880637>] under the MIT license.
There is also a demo video[<https://www.youtube.com/watch?v=ZtL-8wBYPow>].
The contents are recorded with short instructions, guiding participants in improvising sound scenes.
The recordings contain a total of 13 target sound event classes.
The multichannel audio data are delivered as two 4-channel spatial formats: FOA and tetrahedral microphone array (MIC).
The video data are blurred 1920×960 equirectangular video data recorded by a 360° camera.
The annotations of STARSS23 consist of temporal activation, DOA, and source distance of the target classes.
STARSS23 is split into a development set and an evaluation set.
The development set totals about 7 hours and 22 minutes, of which 168 clips were recorded with 57 participants in 16 rooms.
The development set is further split into a training part (dev-set-train, 90 clips) and a testing part (dev-set-test, 78 clips) to support the development process.
In the evaluation set, no publicly available annotations exist because the evaluation set is prepared for a competition, which is described in Appendixย <ref>.
STARSS23 improves a multichannel audio dataset, i.e., STARSS22ย <cit.>.
One of the critical differences is releasing video data aligned with multichannel audio data.
While we maintain all the sessions of STARSS22, we add about 2.5 hours of material to the development set.
STARSS23 also serves source distance labels of sound events as additional annotations.
We follow the recording procedure in STARSS22, where video data are used only to check labels internally.
Adding descriptions of the video data and the distance annotation, we describe the data construction below from the perspective of an audio-visual dataset.
ยง.ยง Data construction
As shown in Figureย <ref>, STARSS23 is constructed in three steps: sound scene recording, data conversion, and annotation.
We explain each step as follows.
Sound scene recording
STARSS23 was created in Tampere, Finland, and Tokyo, Japan.
Recordings at both sites shared the same process, organized in sessions corresponding to different rooms, sound-making props, and participants.
In each session, various clips were recorded with combinations of that session's participants acting simple scenes and interacting among themselves and with the sound props.
The scenes were based on generic instructions on the desired sound events.
The instructions were a rough guide to ensure adequate event activity and occurrences of the target sound classes in a clip.
The left photo of Figureย <ref> shows that participants improvise following the instructions.
A set of 13 target sound event classes are selected to be annotated, based on the sound events captured adequately in the recorded scenes.
The class labels are chosen to conform to the AudioSet ontologyย <cit.>. They are: female speech, male speech, clapping, telephone, laughter, domestic sounds, footsteps, door, music, musical instrument, water tap, bell, knock.
Music, e.g., background music or pop music, is played by a loudspeaker in the room.
On the other hand, musical instruments are played by participants, including acoustic guitar, piano, and others.
Domestic sounds consist of vacuum cleaners, mechanical fans, and boiling, which produce strongly directional and loud sounds.
They are distinguishable from the natural background noise in the sound scenes.
The scenes also contain directional interference sounds such as computer keyboard or shuffling cards that are not labeled.
As shown in the left photos of Figure <ref>, each scene was captured with audio-visual sensors, i.e., a high-resolution 32-channel spherical microphone array (Eigenmike em32[<https://mhacoustics.com/products#eigenmike1>]) with a height set at 1.5 m, and a 360° camera (Ricoh Theta V[<https://theta360.com/en/about/theta/v.html>]) mounted 10 cm above the microphone array.
For each recording session, a suitable position of the Eigenmike and Ricoh Theta V was determined to cover the scene from a central place.
We also capture the scenes with two additional sensors for annotation: a mocap system of infrared cameras surrounding the scene, tracking reflective markers mounted on the participants and sound sources of interest (Optitrack Flex 13[<https://optitrack.com/cameras/flex-13/>]), and wireless microphones mounted on the participants and sound sources, providing close-mic recordings of the main sound events (Røde Wireless Go II[<https://rode.com/en/microphones/wireless/wirelessgoii>]).
The origin of the mocap system was set at ground level at the exact position of the Eigenmike, while the mocap cameras were positioned at the corners of the room.
Recording starts on all devices before the beginning of a scene and stops right after.
A clapper sound initiates the acting, and it serves as a reference signal for synchronization between the different types of recordings, including the mocap system that can record a monophonic audio side signal for synchronization.
All types of recordings were manually synchronized based on the clapper sound and subsequently cropped and stored at the end of each recording session.
The details of sound scene recording, e.g., generic instructions, sound events, and sensors, are summarized in Appendixย <ref>.
Data conversion
The original 32-channel recordings are converted to two 4-channel spatial formats: FOA and MIC.
Conversion of the Eigenmike recordings to FOA following the SN3D normalization scheme (or ambiX) was performed with measurement-based filtersย <cit.>.
Regarding the MIC format, channels 6, 10, 26, and 22 of the Eigenmike were selected, corresponding to a nearly tetrahedral arrangement.
Analytical expressions of the directional responses of each format can be found inย <cit.>.
Finally, the converted recordings were downsampled to 24kHz.
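A minimal sketch of the MIC-format extraction and downsampling described above is given below (the FOA conversion with measurement-based filters is not reproduced; the file names are placeholders, and the soundfile/scipy calls are our assumption about one possible implementation):

import soundfile as sf
from scipy.signal import resample_poly

# 1-based Eigenmike channels 6, 10, 26, 22 -> 0-based indices
MIC_CHANNELS = [5, 9, 25, 21]

audio, fs = sf.read("eigenmike_32ch_48k.wav")        # shape: (samples, 32), fs = 48000
mic = audio[:, MIC_CHANNELS]                         # nearly tetrahedral 4-channel subset
mic_24k = resample_poly(mic, up=1, down=2, axis=0)   # 48 kHz -> 24 kHz
sf.write("mic_4ch_24k.wav", mic_24k, fs // 2)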
The raw 360° video data were converted to an equirectangular format with 3840×1920 resolution at 29.97 frames per second, which is convenient to handle as planar video data.
Based on the participants' consent, the visible faces in all recordings were blurred.
Finally, the face-blurred video was converted to a 1920×960 resolution.
Annotation
Spatiotemporal annotations of the sound events were conducted manually by the authors and research assistants.
As shown in the lower right part of Figureย <ref>, there are four steps:
a) annotate the subset of the target classes that were active in each scene,
b) annotate the temporal activity of such class instances,
c) annotate the position of each such instance when active,
moreover, d) confirm the annotations.
Class annotations (a) were observed and logged during each scene recording.
Activity labels (b) were manually annotated by listening to the wireless microphone recordings.
Because each wireless microphone would capture prominent sounds produced by the participant or source it was assigned to, onset, offsets, source, and class information of each event could be conveniently extracted.
In scenes or instances where associating an event to a source was ambiguous purely by listening, annotators would consult the video recordings to establish the correct association.
The temporal annotation resolution was set to 100 msec.
After onset, offset, and class information of events was established for each source and participant in the scene, the positional annotations (c) were extracted for each such event by attaching tracking results to the temporal activity window of the event.
Positional information was logged in Cartesian coordinates with respect to the mocap system's origin.
The event positions were converted to spherical coordinates, i.e., azimuth, elevation, and distance, which are more convenient for SELD tasks.
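For illustration, the Cartesian-to-spherical conversion can be written as below; the axis convention (azimuth zero at the front and increasing counter-clockwise, as in the released label format) and the metre-to-centimetre scaling are assumptions of this sketch:

import numpy as np

def cartesian_to_spherical(x, y, z):
    # x, y, z: mocap coordinates in meters relative to the array position
    distance = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(y, x))           # 0 deg at the front, counter-clockwise
    elevation = np.degrees(np.arcsin(z / distance))  # assumes distance > 0
    return azimuth, elevation, distance * 100.0      # distance in centimeters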
Then, the class, temporal, and spatial annotations were combined and converted to the text format used in the previous datasetย <cit.>.
The details of the annotation are summarized in Appendixย <ref>.
Confirmation of the annotations (d) was performed by listening to the Eigenmike recording while watching a synthetic video.
The video is the equirectangular video overlapped with the event activities, which is visualized as labeled markers positioned at their respective azimuth and elevation on the video plane.
If a clip does not pass the confirmation, it is annotated again.
ยง.ยง Data analysis
Having a natural distribution of sound events is beneficial to evaluate audio-visual SELD systems.
We analyze the frame coverage, polyphony, and DOA per sound event classes on dev-set-train of STARSS23.
Tableย <ref> shows frame coverage, and max, mean, and distribution of polyphony globally and of each class separately.
The classes with the largest frame coverage are female and male speech, music, and domestic sounds.
These classes are also frequent in our daily lives.
The musical instrument and laughter classes show a high mean polyphony within the same class, which is natural in jam sessions and conversations.
Figureย <ref> shows the distribution of DOA with the axis of the azimuth and elevation.
Regarding female speech in Figure <ref>, the elevation distribution has a strong peak around -10 degrees, while the azimuth appears uniformly distributed.
Compared to the speech classes, footsteps appear at elevations lower than -10 degrees.
See Appendixย <ref> for further data analysis, e.g., duration.
ยง BENCHMARK
In this section, we examine an audio-visual SELD task with STARSS23.
For evaluation, we set the dev-set-train for training and hold out the dev-set-test for validation.
ยง.ยง Audio-visual SELD system
To build an audio-visual SELD system, we start with an audio-only system based on SELDnetย <cit.> and multi-ACCDOAย <cit.>, which is widely used in audio-only SELD tasksย <cit.>.
To extend the audio-only systems to audio-visual, we merge visual and audio information in the middle of the network, following the audio-visual speaker DOAE workย <cit.>.
First, we summarize the audio-only SELD system.
Audio features, e.g., amplitude spectrograms, are extracted from the multichannel audio data.
Convolution layers embed the audio features, then a gated recurrent unitย (GRU) layer and a fully connected layer (FC) decode the audio embedding to multi-ACCDOA output.
As shown in the bottom of Figureย <ref>, each class in the multi-ACCDOA output is represented by a three-dimensional vector with Cartesian coordinates x, y, and z.
The length of the vector shows activity, and the direction of the vector indicates the DOA around the microphone array.
To train the SELD system, we use mean squared errorย (MSE) between the estimated and target multi-ACCDOA outputs under the ADPIT schemeย <cit.>.
In inference, when the length of the vector is greater than a threshold, the class is considered active.
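A compact PyTorch sketch of such a CRNN with a multi-ACCDOA output is shown below; the layer sizes, number of frequency bins, and other details are assumptions rather than the exact baseline configuration:

import torch
import torch.nn as nn

class CRNNMultiACCDOA(nn.Module):
    def __init__(self, in_channels=7, num_classes=13, num_tracks=3, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 4)),
            nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        # assumes 256 input frequency bins -> 16 bins after the two pooling layers
        self.gru = nn.GRU(input_size=64 * 16, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        # one 3-dimensional ACCDOA vector per track and class
        self.fc = nn.Linear(2 * hidden, num_tracks * num_classes * 3)

    def forward(self, spec):                 # spec: (batch, channels, time, freq)
        h = self.conv(spec)
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)
        h, _ = self.gru(h)
        return self.fc(h)                    # (batch, time, tracks * classes * 3)

# Training uses MSE between estimated and target multi-ACCDOA outputs (ADPIT omitted here):
# loss = nn.functional.mse_loss(model(features), target)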
Next, we extend the audio-only system to handle audio and visual input.
We concatenate audio and visual embedding in the middle of the network.
As visual input, we use the corresponding video frame at the start of the audio features.
With the corresponding image, an object detection module, e.g., YOLOXย <cit.>, outputs bounding boxes of potential objects on target classes, e.g., person class.
As shown in the right part of Figureย <ref>, each bounding box is encoded to two vectors along the image's horizontal and vertical axis, based on Gaussian distributionsย <cit.>.
The center is the same as the bounding box, and the standard deviation is proportional to the width and height.
These vectors are combined into two vectors along the azimuth and elevation.
The visual encoded vectors are embedded by FCs.
Then the visual embedding and the audio embedding from the convolution layers are concatenated.
The concatenated embeddings are fed into the decoder to output multi-ACCDOA.
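A sketch of the Gaussian bounding-box encoding along the azimuth axis is given below (the elevation axis is analogous); the bin count follows the 37-dimensional vector used in the experiments, while the image width and the max-pooling of overlapping boxes are our assumptions:

import numpy as np

def encode_boxes_azimuth(boxes, num_bins=37, image_width=360):
    # boxes: list of (x_center, width) in pixels on the equirectangular frame
    bins = np.linspace(0.0, image_width, num_bins)
    vec = np.zeros(num_bins)
    for x_center, width in boxes:
        sigma = max(width / 2.0, 1e-3)                      # std proportional to box width
        gauss = np.exp(-0.5 * ((bins - x_center) / sigma) ** 2)
        vec = np.maximum(vec, gauss)                        # combine boxes into one vector
    return vec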
ยง.ยง Evaluation metric
We used four joint localization and detection metricsย <cit.> with extensions from a previous studyย <cit.>, which supports multi-instance scoring of the same class.
Two metrics are referred to as location-aware detection: the error rate (ER_20°) and the F-score (F_20°) in one-sec non-overlapping segments.
We consider a prediction a true positive if the predicted and reference classes are the same and the angular difference is below 20°.
F_20° is calculated from location-aware precision and recall, whereas ER_20° is the sum of insertion, deletion, and substitution errors, divided by the total number of references.
The other two metrics are referred to as class-aware localization and are localization error (LE_CD) in degrees and localization recall (LR_CD) in one-sec non-overlapping segments, where the subscript refers to classification-dependent.
Unlike location-aware detection, we do not use any threshold but estimate the difference between the correct prediction and reference.
LE_CD expresses the average angular difference between the same class's predictions and references.
LR_CD tells the true positive rate of how many of these localization estimates were detected in a class out of the total number of class instances.
We used the macro mode of computation for all metrics except ER_20°, to which it does not apply because it includes substitution errors between two classes.
For the other three metrics, we first computed the scores for each class and then averaged them to obtain the final system performance.
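The 20° criterion amounts to an angular-distance test between the predicted and reference DOA vectors, e.g. as in the simplified sketch below (the full metrics additionally handle multi-instance matching within a class):

import numpy as np

def angular_error_deg(pred_doa, ref_doa):
    # angle between two Cartesian DOA vectors, in degrees
    pred = pred_doa / np.linalg.norm(pred_doa)
    ref = ref_doa / np.linalg.norm(ref_doa)
    cos = np.clip(np.dot(pred, ref), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def is_true_positive(pred_class, ref_class, pred_doa, ref_doa, threshold_deg=20.0):
    return pred_class == ref_class and angular_error_deg(pred_doa, ref_doa) <= threshold_deg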
ยง.ยง Experimental setting
As audio features, multichannel amplitude spectrograms and inter-channel phase differencesย (IPDs) are usedย <cit.>.
Input features are segmented to have a fixed length of 1.27 sec.
To reduce the computation cost of the video, we use 360×180 videos converted from the released 1920×960 videos.
As visual input, we extract the corresponding video frame at the start of the audio features.
We use a pretrained YOLOX object detection model[<https://github.com/open-mmlab/mmdetection/blob/master/configs/yolox/yolox_tiny_8x8_300e_coco.py>] to get bounding boxes of person class. Other classes, e.g., cell phone and sink, are not stably detected in our preliminary experiments with STARSS23 videos.
The bounding box results are encoded into two vectors along azimuth and elevation, as described in Section <ref>.
The vector size is 37 (= 36 + 1) to cover 360 degrees of azimuth per 10 degrees.
To get audio embedding, we stack three convolutional layers with kernel size 3×3.
We embed the visual encoded vectors with two FCs.
The concatenated embeddings are processed with a bidirectional GRU layer with a hidden state size of 256.
The number of tracks in the multi-ACCDOA format was fixed at N = 3 maximum simultaneous sources.
The threshold for activity was 0.3 to binarize predictions during inference.
Details on the experimental setting are in Appendixย <ref>.
We compare the audio-visual SELD system with an audio-only system based on the same data split and implementation.
The difference is the presence or absence of video input.
The experiments are for the two formats; FOA and MIC.
The code is available in a GitHub repository[<https://github.com/sony/audio-visual-seld-dcase2023>] under the MIT license.
ยง.ยง Experimental results
Tableย <ref> summarizes the performance of the audio-visual and audio-only SELD systems in both audio formats.
Comparing the two formats, SELD systems with the FOA format show better SELD performance than those with the MIC format.
In the FOA format, while the audio-visual SELD system shows a slightly worse location-aware F-score, it exhibits a better localization error with comparable localization recall.
There is a similar trend of lower localization error in MIC format.
We further investigate the location-aware F-score over classes, considering both localization and detection aspects.
Figureย <ref> shows the F-score per class in FOA format.
We focus on five classes related to the human body, i.e., female and male speech, clapping, laughing, and footsteps, because the audio-visual SELD system uses bounding boxes of a person as visual input.
We show the average score of the five classes as body-related on the left of the figure.
The audio-visual system demonstrates a higher location-aware F-score in the body-related.
On the other hand, the audio-visual system performs worse in the average of the other classes, i.e., non-body-related.
The results suggest that the visual input, i.e., bounding boxes of a person, contributes to localization and detection of body-related classes, whereas the visual input may limit the performance of non-body-related classes.
Further experimental results are in Appendixย <ref>.
ยง CONCLUSION
This paper attempts to broaden sound event localization and detection (SELD) to an audio-visual area by introducing an audio-visual SELD task.
We present an audio-visual dataset, Sony-TAu Realistic Spatial Soundscapes 2023 (STARSS23), which consists of multichannel audio data, video data, and spatiotemporal annotation of sound events in natural sound scenes.
Furthermore, we present quantitative evaluations for an audio-visual SELD system compared with an audio-only system and demonstrate the benefits of visual object positions.
We still need to improve SELD performance of various sound events using audio-visual data.
Also, we hope that STARSS23 opens a wide range of future research on spatial audio-visual tasks, taking advantage of the well-organized audio-visual recording and detailed labels about spatial sound events.
We thank Akira Takahashi for his helpful code review and thank Atsuo Hiroe, Kazuya Tateishi, Masato Hirano, Takashi Shibuya, Yuji Maeda, and Zhi Zhong for valuable discussions about the data construction process.
The data collection and annotation at Tampere University have been funded by Google.
This work was carried out with the support of the Centre for Immersive Visual Technologies (CIVIT) research infrastructure at Tampere University, Finland.
ยง CHECKLIST
* For all authors...
* Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
See Sectionย <ref>.
* Did you describe the limitations of your work?
See Sectionย <ref>.
* Did you discuss any potential negative societal impacts of your work?
See Appendixย <ref>.
* Have you read the ethics review guidelines and ensured that your paper conforms to them?
We have read them and confirmed them.
* If you are including theoretical results...
* Did you state the full set of assumptions of all theoretical results?
* Did you include complete proofs of all theoretical results?
* If you ran experiments (e.g. for benchmarks)...
* Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
<https://github.com/sony/audio-visual-seld-dcase2023>
* Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
See Sectionย <ref> and Appendixย <ref>.
* Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
See Tableย <ref> and Figureย <ref>.
* Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
See Appendixย <ref>
* If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
* If your work uses existing assets, did you cite the creators?
See Sectionย <ref>.
* Did you mention the license of the assets?
YOLOXย <cit.> code from MMDetection is licensed under the Apache-2.0 license, free for research and commercial use.
* Did you include any new assets either in the supplemental material or as a URL?
<https://zenodo.org/record/7880637>
* Did you discuss whether and how consent was obtained from people whose data you're using/curating?
See Appendixย <ref>.
* Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
See Appendixย <ref>.
* If you used crowdsourcing or conducted research with human subjects...
* Did you include the full text of instructions given to participants and screenshots, if applicable?
See Appendixย <ref>.
* Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
See Appendixย <ref>.
* Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
See Appendixย <ref>.
ยง APPENDIX
We show appendixes for data construction, data analysis, experiment, competition, social impact, and personal data handling.
Finally, we answer the questions from Datasheets for Datasetsย <cit.>.
ยง DATA CONSTRUCTION
ยง.ยง Sound scene recording
The generic instructions for recording participants include information about duration, participants, props, active classes, and description of sound scenes.
The recording duration can be shorter or longer as they are rough instructions.
We set the basic structure of sound scenes, e.g., people gathering on a sofa and discussing the weekend.
Following the rough instructions, we leave the other details of sound scenes to the participants, e.g., they can walk wherever they want and talk about whatever they want without a fixed dialogue text.
We show an example of the instructions:
Duration 3 min
Participants 3 people
Props Playing cards, mobile phone
Active classes Speech, laugh, mobile phone
Description
* 3 people gather on the sofa and talk about the weekend.
* Propose to play a game of cards, shuffle and distribute the cards.
* Mobile phone rings while playing, attend the call, walk around and talk for a few seconds and laugh during the call.
* Come back, sit and continue playing.
Sound scenes consist of target sound events, directional interference sounds, and background noise.
Each target class contains diverse sounds, e.g., the speech classes include a few different languages, and the phone class has sounds from different mobile phones.
In addition, several target classes correspond to super classes with some subclasses in the AudioSet ontologyย <cit.>, e.g., the domestic sounds class contains the vacuum cleaner, mechanical fan, and boiling subclasses.
We provide the subset of sounds encountered in the recordings for the target classes in the form of more specific AudioSet-related labels.
The subset information is summarized in Tableย <ref>, which is a re-post of a table in the STARSS22 paperย <cit.>.
Directional interference sounds are derived from computer keyboards, shuffling cards, dishes, pots, and pans; however, they are not annotated.
Natural background noise is mainly related to HVACย (heating, ventilation, and air-conditioning), ranging from low to high levels.
Spatial and temporal annotations relied on careful mounting of tracking markers and wireless microphonesย <cit.>.
The tracking markers were mounted on independent sound sources such as on a mobile phone or on a hoover.
Head markers were also provided to the participants as headbands or hats.
The head tracking results served as reference points for all human body-related sound classes.
Mouth position for speech and laughter classes, feet stepping position for footstep class, and hand position for clapping class were each approximated with a fixed translation from the head tracking result.
Regarding clapping, participants were instructed to clap about 20 cm in front of their faces, while footsteps were projected from the head coordinates to floor level.
Hence, the mounting positions were considered in the annotation process to translate the tracking data of each class into each sound source position.
The wireless microphones were mounted to the lapel of each participant and to additional independent sound sources being far from the participants.
ยง.ยง Annotation
At certain times, the motion capture (mocap) system could not track the attached markers, e.g., when markers were out of the mocap coverage, moving too fast, or occluded by obstacles.
Whenever such misses were short, the tracking results were interpolated with Motive[<https://optitrack.com/software/motive/>].
If such misses were long and interpolating the results was not possible, the azimuth and elevation of the sound sources were annotated based on the 360° video data.
For example, the DOA of the door class was often annotated from the video data because many doors were outside the view of the mocap.
To calculate the source distance in such cases, we used information on room dimensions and installation positions from our recording log in addition to the video data.
Any interpolated spatial annotations were visually checked at the confirmation videos.
ยง.ยง Data format
We use WAV file format for the audio data and MP4 file format for the video data.
The metadata are tabulated and served in CSV file format.
The sound event classes, DOAs, and distances are provided in the following format:
* frame number, active class index, source number index, azimuth, elevation, distance
with all labels served as integers.
Frame, class, and source enumeration begins at 0.
Frames correspond to a temporal resolution of 100 msec.
Azimuth and elevation angles are given in degrees, rounded to the closest integer value, with azimuth and elevation being zero at the front, azimuth φ ∈ [-180°, 180°], and elevation θ ∈ [-90°, 90°].
The azimuth angle increases counter-clockwise (φ = 90° at the left).
Distances are provided in centimeters, also rounded to the closest integer value.
The source index is a unique integer for each source in the scene.
Note that each unique participant gets assigned one identifier, but not individual events produced by the same participant; e.g., a clapping event and a laughter event produced by the same person have the same identifier.
Independent sources that are not participants (e.g., a loudspeaker playing music in the room) get a 0 identifier.
Note that the source index and the source distance are only included as additional information that can be exploited during training.
An example line could be as follows:
* 10, 1, 1, -50, 30, 181
which describes that in frame 10, an event of class male speech (class 1) belonging to one participant (source 1) is active at location (-50°, 30°, 181 cm).
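For illustration, the metadata can be parsed with a few lines of Python following the field order above (the dictionary keys are our own naming):

import csv

def load_labels(path):
    events = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            frame, cls, source, azi, ele, dist = map(int, row)
            events.append({
                "time_sec": frame * 0.1,   # 100-ms frame resolution
                "class": cls,
                "source": source,
                "azimuth_deg": azi,
                "elevation_deg": ele,
                "distance_cm": dist,
            })
    return events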
ยง DATA ANALYSIS
ยง.ยง Duration
Figure <ref> illustrates the box plots of duration, i.e., how long a sound event lasts.
The figure shows that there are classes with similar duration trends.
Speech classes have a wide range of plots, whereas laughing has a shorter duration than speech.
While phone and bell have similar medians, the box for phone is longer because the phone repeats its ringing sound until the call is answered.
There are several classes with longer duration, e.g., domestic sounds, music, musical instruments, and faucet, whereas collision or tapping sounds such as door and knock are relatively short.
ยง.ยง DOA
Figureย <ref> shows the distribution of DOAs of the rest of the classes not depicted in Figureย <ref>.
While DOAs of human-produced classes such as speech, laugh, clap, or footsteps are dispersed across the 360° plane, classes such as door, knock, or faucet result in specific discrete points in the plot due to their fixed position in the room.
Music and bell classes also show similar trends to door class as their sound sources are rarely moved.
Domestic sounds, phone, and musical instrument classes show similar trends to the speech classes as the respective sources are sometimes still and other times moving.
ยง EXPERIMENT
ยง.ยง Experimental setting
We add a few details on experimental settings.
We apply the short-time Fourier transform (STFT) to obtain audio features, with a 20-ms frame length and a 10-ms frame hop.
We keep the input feature length to 1.27 sec during inference and set the shift length to 1.2 sec.
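A sketch of the feature extraction under these settings (24 kHz input, 20-ms frames, 10-ms hop) is shown below; taking the first channel as the IPD reference is our assumption:

import numpy as np
from scipy.signal import stft

def amplitude_and_ipd(audio, fs=24000, frame_ms=20, hop_ms=10):
    # audio: (samples, channels) multichannel waveform
    x = audio.T                                    # (channels, samples)
    nperseg = int(fs * frame_ms / 1000)            # 480 samples
    hop = int(fs * hop_ms / 1000)                  # 240 samples
    _, _, spec = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg - hop)
    amplitude = np.abs(spec)                       # (channels, freq, time)
    ipd = np.angle(spec[1:] * np.conj(spec[:1]))   # phase differences w.r.t. channel 0
    return amplitude, ipd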
For visual features, we use the pre-trained YOLOX object detection model[<https://github.com/open-mmlab/mmdetection/blob/master/configs/yolox/yolox_tiny_8x8_300e_coco.py>], which is trained with the COCO datasetย <cit.>.
While COCO has 80 object classes, we focus on person, cell phone, and sink classes because they are strongly related to the 13 sound events in STARSS23.
Only the person class was stably detected in our preliminary experiments on STARSS23 videos.
Therefore, we use the model to obtain bounding boxes for the person class.
We use a batch size of 16 and the Adam optimizer with a weight decay ofย 10^-6 to train the audio-visual sound event localization and detectionย (SELD) system.
The learning rate is set to 0.001.
We validate and save model weights at every 1,000 iterations up to 20,000 iterations.
We select a model that demonstrated the best aggregated SELD error, ℰ_SELD, calculated as
ℰ_SELD = ( ER_20° + ( 1 - F_20° ) + LE_CD / 180° + ( 1 - LR_CD ) ) / 4.
When there are no true positive outputs in a class, to compute the macro average, we set 180° as the localization error for that class.
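Written as code, the aggregated error is simply the average of the four terms:

def seld_error(er, f, le_deg, lr):
    # lower is better; le_deg is the localization error in degrees
    return (er + (1.0 - f) + le_deg / 180.0 + (1.0 - lr)) / 4.0

print(seld_error(er=1.0, f=0.3, le_deg=36.0, lr=0.5))  # illustrative values only -> 0.6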
We report the average scores and error bars of five experiments.
The model parameter size is 0.8 M.
The model size is kept intentionally small for easier trials.
A single GPU (e.g., GeForce GTX 1080 Ti) is used for training.
The training takes around six hours.
ยง.ยง Experimental results
In addition to the F-score per class in first-order ambisonicsย (FOA) format in Figureย <ref>, we show the other SELD metrics per class in both FOA and tetrahedral microphone arrayย (MIC) formats.
As shown in Figureย <ref>, the F-score in MIC format shows a similar trend as in FOA format.
While the audio-visual SELD system performs worse in the F-score of non-body-related classes, the audio-visual system demonstrates a higher F-score in body-related classes.
Figures <ref> and <ref> show that the audio-visual system demonstrates a lower localization error for body-related classes in both formats.
There is no significant trend of the localization recall between the two formats.
ยง.ยง Additional experiment
We add a few experiments to support the above experimental results, i.e., demonstrating that the audio-visual system contributes to the performance of body-related classes.
In the additional experiments, we use only body-related classes as the target classes of the SELD systems.
The systems are trained and evaluated with the activities and DOAs of the speech, clapping, laughing, and footsteps classes.
We follow the previous experiments for the other settings.
Table <ref> shows the SELD performance on body-related classes only, for the audio-visual and audio-only systems evaluated on dev-set-test of STARSS23.
The audio-visual SELD system scores better in all metrics than the audio-only system.
The visual input, i.e., bounding boxes of the person class, enables the audio-visual system to localize and detect body-related classes more accurately.
The audio-only system in MIC format shows a high standard deviation in localization error because a few classes are sometimes assigned 180° when they have no true positive outputs.
Even if we omit such cases, the audio-visual system still shows a lower localization error.
ยง COMPETITION
STARSS23 has served as the development set and evaluation set for the SELD Task of the DCASE 2023 Challenge[<https://dcase.community/challenge2023/task-sound-event-localization-and-detection-evaluated-in-real-spatial-sound-scenes>], which aims to accelerate audio-only and audio-visual SELD research.
The task participants use the development audio/video recordings and labels to train and validate their SELD systems in the development process.
The evaluation recordings without labels are used to produce system outputs for the challenge evaluation phase.
If researchers wish to compare their system against the submissions of the challenge, they will have directly comparable results if they use the evaluation data as their testing set.
Also, the implementation of the audio-visual SELD system described herein, trained and evaluated with STARSS23, has served as the baseline method for the audio-visual track of the challenge.
ยง SOCIAL IMPACT
STARSS23 enables research on audio-only and audio-visual SELD tasks, which form the backbone of various real-world applications in acoustic and audio-visual monitoring, intelligent home applications, and audio-visual machine perception.
The dataset, together with the associated challenge, accelerates such research for research institutes and industry since it is the first of its kind based on real annotated recordings.
Multiple research institutes, university laboratories, and industrial research and development groups have already shown interest in the dataset and its use, either as part of the DCASE challenge, or outside of it in independent published studies.
We expect the dataset to set the standard in the upcoming years in audio-visual SELD-related studies, due to its unique spatial annotations from real tracked people and sound sources, and its spatial audio content which is becoming more and more relevant with multiple monitoring or smart home devices employing microphone arrays.
The dataset also offers opportunities for cross-regional evaluation, with its recordings coming from two different sites geographically far apart.
Of course as with many strongly annotated datasets, the diversity of sound events is limited and cannot capture the conditions of many real-world application specific scenes.
However, we believe that it is a useful contribution to the development and maturation of such systems, at which point we expect more application-specific SELD datasets to appear.
ยง PERSONAL DATA HANDLING
STARSS23 data was recorded with over 50 voluntary participants.
Before the recording, we explain our research purpose, how we record the sound scenes, and how we treat and release the recording data.
Regarding the recording process, an example of the generic instructions is in Appendixย <ref>.
Our explanation is based on text format and verbal description.
Participants can ask us questions related to recording and public release.
We also explain potential risks, i.e., recording data containing personally identifiable information, and our Institutional Review Boardย (IRB) approvals.
The personally identifiable information is raw speech and blurred faces. The participants are also instructed not to reveal personal information during the recordings, and limit themselves to conversations of generic topics.
After the explanation and Q&A, when participants understand the purpose and risk of recording and release, each participant signs the consent form.
ยง DATA SHEET
For dataset documentation, we take the questions from Datasheets for Datasetsย <cit.> and answer them.
ยง.ยง Motivation
* Q: For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.
A: This dataset is created to tackle audio-visual sound event localization and detectionย (SELD) tasks. While visual data about sound source objects can support SELD tasks, e.g., feet are a potential source of footsteps, the existing datasets did not contain the complete multichannel audio, video, and annotation set. STARSS23 serves real sound scene recording with multichannel audio, video data aligned with the audio, and spatiotemporal annotation of sound events. STARSS23 allows the incorporation of audio-visual correspondence into multichannel audio signal processing methods.
* Q: Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?
A: Creative AI Laboratoryย (CAL) at Sony Group and Audio Research Groupย (ARG) at Tampere University.
* Q: Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.
A: The data collection and annotation at Tampere University has received funding by Google.
* Q: Any other comments?
A: N/A.
ยง.ยง Composition
* Q: What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.
A: Real sound scene recordings with multichannel audio data, video data aligned with the audio data, and annotations of temporal activation, the direction of arrivalย (DOA), and distance of sound events.
* Q: How many instances are there in total (of each type, if appropriate)?
A: The development set totals about 7 hours and 22 minutes, of which 168 clips were recorded with 57 participants in 16 rooms with annotation. The evaluation set totals about 3.5 hours and 79 clips without annotation.
* Q: Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).
A: It contains all possible instances.
* Q: What data does each instance consist of? โRawโ data (e.g., unprocessed text or images) or features? In either case, please provide a description.
A: Raw audio and video data, but we convert the 32-channel 48 kHz audio recordings to 4-channel 24 kHz, and the 3840×1920 360° video recordings to 1920×960 equirectangular format. We also conduct face-blurring on the video data to anonymize identifiable information.
* Q: Is there a label or target associated with each instance? If so, please provide a description.
A: The dataset contains temporal activation, DOA, and source distance labels of 13 target sound event classes. The classes are female speech, male speech, clapping, telephone, laughter, domestic sounds, footsteps, door, music, musical instrument, water tap, bell, and knock.
* Q: Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.
A: The evaluation set has no annotation because the set is used in a testing phase of the SELD task of the DCASE 2023 challenge. While almost all audio recordings in the development and evaluation set are accompanied by synchronized video recordings, only 12 audio recordings in the development set are missing videos (from fold3_room21_mix001.wav to fold3_room21_mix012.wav).
* Q: Are relationships between individual instances made explicit (e.g., usersโ movie ratings, social network links)? If so, please describe how these relationships are made explicit.
A: In each clip, we use the same tag, e.g., foldX_roomY_mixZ. We use the tag for audio (.wav), video (.mp4), and labels (.csv).
* Q: Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.
A: Yes. STARSS23 has the dev-set-train part and the dev-set-test part for the development process.
* Q: Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.
A: In addition to target sound events, the sound scene recordings contain directional interference sounds and background noise. That makes the dataset a more realistic situation. We confirm all the labels by listening to the audio and watching the video. If there are any errors, they would be negligible.
* Q: Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.
A: Self-contained.
* Q: Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctorโpatient confidentiality, data that includes the content of individualsโ non-public communications)? If so, please provide a description.
A: No.
* Q: Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why.
A: No.
* Q: Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.
A: STARSS23 set female and male speech as two target sound event classes. The frame coverages of both classes are almost equal; 28.4 % and 31.4 %, respectively.
* Q: Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.
A: As an audio-visual sound scene recording dataset, STARSS23 contains raw speech data. However, faces in the video data are blurred, and the conversations contain no personal topics, so it is hard to identify individuals. We also explain to participants the potential risk of personally identifiable information before recording. After the explanation, participants sign a consent form for recording when they understand the potential risk.
* Q: Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description.
A: No.
* Q: Any other comments?
A: N/A.
ยง.ยง Collection process
* Q: How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If the data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how.
A: Each clip of a sound scene is recorded in a room with participants and sound props. Participants improvise a scene following a generic instruction about the sound scene and event.
* Q: What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software programs, software APIs)? How were these mechanisms or procedures validated?
A: Multichannel audio and video data are recorded with a 32-channel microphone array and a 360° camera, respectively. A motion capture system and wireless microphones are also recorded for annotation. The specific recording equipment is Eigenmike em32[<https://mhacoustics.com/products#eigenmike1>], Ricoh Theta V[<https://theta360.com/en/about/theta/v.html>], Optitrack Flex 13[<https://optitrack.com/cameras/flex-13/>], and Røde Wireless Go II[<https://rode.com/en/microphones/wireless/wirelessgoii>], respectively.
* Q: If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?
A: N/A.
* Q: Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?
A: Voluntary participants act in a sound scene, and authors record the scene.
* Q: Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created.
A: The first round of recordings was collected between September 2021 and April 2022. A second round of recordings was collected between November 2022 and February 2023.
* Q: Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.
A: Yes. We get approvals from our Institutional Review Boardย (IRB). Following the discussion about personally identifiable information, we conduct face-blurring on video data and explain to participants about potential risks, i.e., recording data containing identifiable information.
* Q: Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?
A: From the individuals.
* Q: Were the individuals in question notified about the data collection? If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself.
A: Yes. Before the recording, we explain our research purpose, how we record sound scenes, and how we treat and release the recording data. Our explanation is based on text format and verbal description. Participants can ask us questions related to recording and release.
* Q: Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented.
A: Yes. After our explanation and Q&A, when participants understand the purpose and risk of recording and release, each participant signs the consent form.
* Q: If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate).
A: Yes. In such a case, they can contact authors.
* Q: Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation.
A: N/A.
* Q: Any other comments?
A: N/A.
ยง.ยง Preprocessing/cleaning/labeling
* Q: Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remaining questions in this section.
A: We convert audio and video data to the appropriate size. We also annotate temporal activation and DOA of sound events. After annotating, we confirm the labels by listening to the audio and watching the video.
* Q: Was the โrawโ data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the โrawโ data.
A: We saved โrawโ (large size) audio and video data for future use. If one is interested in them, one can contact the authors.
* Q: Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point.
A: We use python code for audio data conversion, and RICOH THETA[<https://theta360.com/en/about/application/pc.html>] and FFmpeg[<https://ffmpeg.org/>] for video data conversion. To annotate temporal activation, Audacity[<https://www.audacityteam.org/>] and REAPER[<https://www.reaper.fm/>] are used. Tracking results are used in Motive[<https://optitrack.com/software/motive/>].
* Q: Any other comments?
A: N/A.
ยง.ยง Uses
* Q: Has the dataset been used for any tasks already? If so, please provide a description.
A: No, the dataset has not yet been used for any scientific papers. This paper is the first to use the dataset.
* Q: Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point.
A: Yes. We have the code repository of the SELD system: <https://github.com/sony/audio-visual-seld-dcase2023>.
* Q: What (other) tasks could the dataset be used for?
A: Apart from audio-visual SELD tasks, one could use the dataset for audio-only SELD tasks, audio-visual speaker DOA estimation tasks, and audio-visual sound source localization tasks. Audio-visual sound source distance estimation could be another task.
* Q: Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a dataset consumer might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other risks or harms (e.g., legal risks, financial harms)? If so, please provide a description. Is there anything a dataset consumer could do to mitigate these risks or harms?
A: No.
* Q: Are there tasks for which the dataset should not be used? If so, please provide a description.
A: No.
* Q: Any other comments?
A: N/A.
ยง.ยง Distribution
* Q: Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description.
A: Yes. STARSS23 is publicly available at <https://zenodo.org/record/7880637>.
* Q: How will the dataset will be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)?
A: STARSS23 is distributed via <https://zenodo.org/record/7880637>. The DOI is "10.5281/zenodo.7709051".
* Q: When will the dataset be distributed?
A: STARSS23 is already distributed.
* Q: Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions.
A: STARSS23 is licensed under MIT License.
* Q: Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions.
A: No.
* Q: Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation.
A: No.
* Q: Any other comments?
A: N/A.
ยง.ยง Maintenance
* Q: Who will be supporting/hosting/maintaining the dataset?
A: Sony Group and Tampere University.
* Q: How can the owner/curator/manager of the dataset be contacted (e.g., email address)?
A: Please contact and .
* Q: Is there an erratum? If so, please provide a link or other access point.
A: All changes to the dataset will be announced on <https://zenodo.org/record/7880637>.
* Q: Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to dataset consumers (e.g., mailing list, GitHub)?
A: Yes, all the updates will be synced on the website.
* Q: If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced.
A: No.
* Q: Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.
A: The older dataset versions remain in Zenodo if any changes are made.
* Q: If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to dataset consumers? If so, please provide a description.
A: Yes. Others can contact the authors of this paper to describe their proposed extension or contribution. We would discuss their proposed contribution to confirm its validity, and if confirmed, we will release a new version of the dataset on Zenodo and announce it accordingly.
* Q: Any other comments?
A: N/A.
|
http://arxiv.org/abs/2306.17792v1
|
20230630164822
|
Towards Improving the Performance of Pre-Trained Speech Models for Low-Resource Languages Through Lateral Inhibition
|
[
"Andrei-Marius Avram",
"Rฤzvan-Alexandru Smฤdu",
"Vasile Pฤiล",
"Dumitru-Clementin Cercel",
"Radu Ion",
"Dan Tufiล"
] |
cs.CL
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] |
Towards Improving the Performance of Pre-Trained Speech Models for Low-Resource Languages Through Lateral Inhibition
Andrei-Marius Avram^*, Răzvan-Alexandru Smădu^*, Vasile Păiș^†,
Dumitru-Clementin Cercel^*, Radu Ion^†, and Dan Tufiș^†
^*Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Romania
^†Research Institute for Artificial Intelligence "Mihai Drăgănescu", Romanian Academy
{andrei_marius.avram, razvan.smadu}@stud.acs.upb.ro, [email protected], {vasile,radu,tufis}@racai.ro
================================================================================================================================================================================================================================================================================================================================================================================================================================
With the rise of bidirectional encoder representations from Transformer models in natural language processing, the speech community has adopted some of their development methodologies. Therefore, the Wav2Vec models were introduced to reduce the data required to obtain state-of-the-art results. This work leverages this knowledge and improves the performance of the pre-trained speech models by simply replacing the fine-tuning dense layer with a lateral inhibition layer inspired by the biological process. Our experiments on Romanian, a low-resource language, show an average relative improvement of 12.5% in word error rate (WER) using the lateral inhibition layer. In addition, we obtain state-of-the-art results on both the Romanian Speech Corpus and the Robin Technical Acquisition Corpus with 1.78% WER and 29.64% WER, respectively.
Lateral Inhibition; Romanian Language; Speech Recognition; Wav2Vec 2.0
ยง INTRODUCTION
Deep neural networks benefit from large amounts of annotated training data. However, annotated data is challenging to obtain in many settings. Except for English, generating thousands of hours of transcribed audio necessary to train a state-of-the-art speech recognition system is infeasible for most languages worldwide. Self-supervised learningย <cit.> has become the de facto technique for addressing this issue by first teaching a general data representation from unlabeled samples and then transferring the accumulated knowledge to a downstream task via fine-tuningย <cit.>.
Working with self-supervision on unlabeled speech signals involves similar challenges as in computer vision. However, the research community continued to build pre-trained models on audio that have pushed further the state of the art in speech recognition. Schneider et al. <cit.> introduced the Wav2Vec model, which encodes the input audio data into a latent space to create a contextualized representation employing a Transformer encoder <cit.>. Baevski et al.ย <cit.> built Wav2Vec 2.0 on top of the previous work, mainly using the same model architecture while changing the pre-training objective to a discretized contrastive loss similar to the masked language model strategy from natural language processing.
Introduced by Păiș <cit.>, the lateral inhibition layer helps the model to learn when the annotated data is scarce. This paper investigates its application in transcribing human voice from audio files by integrating the lateral inhibition mechanism into a pre-trained automatic speech recognition (ASR) system. We choose the Wav2Vec 2.0 Base model pre-trained on 100k hours of unlabeled audio data extracted from VoxPopuli (i.e., Wav2Vec2.0-VP-100k) <cit.>. We run our experiments on a low-resource language, namely the Romanian language.
Our results for the experimental setup with the lateral inhibition layer show an average relative improvement of 12.5% in word error rate (WER) across various dataset settings compared with the standard feed-forward layer.
In addition, we obtain state-of-the-art results on the Romanian Speech Corpus (RSC) <cit.> with 1.78% WER, using fewer training data than the previous model, and on the Robin Technical Acquisition Speech Corpus (RTASC) <cit.> with 29.64% WER, using the same training data.
We can summarize our main contributions as follows:
(i) applying the technique of neural lateral inhibition to ASR;
(ii) performing an analysis of the improvements brought by the lateral inhibition layer;
(iii) to the best of our knowledge, creating the first publicly available Romanian Wav2Vec 2.0 model[https://huggingface.co/racai] (called RoWav2Vec2.0-VP-100k-LI) that was thoroughly evaluated on several benchmarks; and
(iv) obtaining state-of-the-art performance on two Romanian ASR datasets.
ยง LATERAL INHIBITION
Inspired by the human brain's biological process of lateral inhibition, the neural lateral inhibition layer has been successfully applied in named entity recognition <cit.>. In the human brain, this process causes excited neurons to reduce the activity of neighboring neurons <cit.>. It also sharpens perception in the visual cortex under challenging scenarios, such as low-lighting conditions <cit.>. Intuitively, we envisage that the new layer should be able to better focus on the actual voice data while possibly removing unwanted noise.
Following the original formulationย <cit.>, the lateral inhibition layer is described as follows:
F(x) = x · Diag(Θ(x · ZeroDiag(W) + b))
where x is the input vector of the layer (i.e., the embedding representation produced by the RoWav2Vec2.0-VP-100k-LI model), Diag(·) denotes a diagonal matrix having the diagonal set to the vector presented as a parameter, ZeroDiag(·) generates a matrix with the zero value on the diagonal, W is the weight matrix, b corresponds to the bias values, and Θ(·) is the Heaviside function (see Equation <ref>).
Θ(x) = { 1, x > 0; 0, x ≤ 0 }
Following the analogy with the biological process, the Heaviside function determines which values can pass to the next layer. The decision is based on the adjacent values in the supplied embedding representation.
Equation <ref> is used for the forward pass, with the Heaviside function included, thereby providing a strict pass or reject functionality for the input values. However, in the backward pass, the non-differentiable Heaviside function is replaced with the parameterized sigmoid function <cit.> (see Equation <ref>, where k is the scaling parameter). This technique, known as surrogate gradient learning <cit.>, allows using a known derivative (see Equation <ref>) in the backward pass.
σ(x) = 1 / (1 + e^(-kx))
σ'(x) = k σ(x) σ(-x)
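To make the forward/backward behavior concrete, the following is a minimal PyTorch sketch of such a layer; it is an illustration rather than the authors' implementation, and the class names, the weight initialization, the k=10 default, and the (batch, time, dim) input layout are our assumptions.

import torch
import torch.nn as nn

class HeavisideSurrogate(torch.autograd.Function):
    # Forward: hard Heaviside step. Backward: derivative of the scaled sigmoid (surrogate gradient).
    @staticmethod
    def forward(ctx, x, k):
        ctx.save_for_backward(x)
        ctx.k = k
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        sig = torch.sigmoid(ctx.k * x)
        return grad_output * ctx.k * sig * (1 - sig), None

class LateralInhibition(nn.Module):
    # Implements F(x) = x * Theta(x @ ZeroDiag(W) + b); k is the sigmoid scaling parameter.
    def __init__(self, dim, k=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(dim, dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        nn.init.xavier_uniform_(self.weight)
        self.k = k

    def forward(self, x):  # x: (batch, time, dim) embeddings from the speech encoder
        w = self.weight - torch.diag(torch.diagonal(self.weight))  # ZeroDiag(W)
        gate = HeavisideSurrogate.apply(x @ w + self.bias, self.k)
        return x * gate  # element-wise gating is equivalent to multiplying by Diag(Theta(...))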
ยง EXPERIMENTAL SETTINGS
ยง.ยง Dataset
The RoWav2Vec2.0-VP-100k-LI model was fine-tuned on a speech dataset composed of ten Romanian corpora with transcribed audio files. The corpora contain recordings from several domains, including Wikipedia, News, Internet, and Legal. The resulting dataset has approximately 300 hours of transcribed speech from 222.7k utterances. It is composed of both read and spontaneous speech, distributed in an imbalanced manner, with 229 hours of read speech and 71 hours of spontaneous speech, respectively.
We further split our Romanian speech dataset into five subsets based on the total recording time by random sampling without replacement audio files until the desired size was reached: Small (S) - 10 minutes, Medium (M) - 1 hour, Large (L) - 10 hours, Extra Large (XL) - 100 hours, and Extra Extra Large (XXL) - the whole dataset. The split was necessary to evaluate the lateral inhibition performance in more extreme settings, i.e., with fewer labeled audio files.
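The subset construction can be summarized by a simple sampling loop; the sketch below uses hypothetical variable names and is only an illustration of sampling files without replacement until the target duration is reached.

import random

def sample_subset(files, durations_sec, target_hours, seed=0):
    # files: list of audio file paths; durations_sec: matching list of durations in seconds
    rng = random.Random(seed)
    order = list(range(len(files)))
    rng.shuffle(order)
    subset, total = [], 0.0
    for i in order:
        if total >= target_hours * 3600:
            break
        subset.append(files[i])
        total += durations_sec[i]
    return subset

# e.g., medium = sample_subset(files, durations_sec, target_hours=1)
#       large  = sample_subset(files, durations_sec, target_hours=10)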
ยง.ยง Fine-tuning
We used the primary fine-tuning mechanism for the Wav2Vec 2.0 model as introduced in the original paper <cit.>. Therefore, using the raw audio input, we project the contextualized embeddings c_i obtained by the model for each time step i into a tensor y_i whose dimensions match the number of letters in the Romanian alphabet, plus the space character and the blank token. We project the data using either the standard fully-connected layer or the lateral inhibition layer followed by a dense layer. Using the connectionist temporal classification algorithm <cit.>, we computed the loss between the predicted logits and target labels. We set k=10 for the lateral inhibition layer, which we believe is a good enough approximation of the surrogate gradient of the Heaviside function.
We employed the Adam method <cit.> to optimize the loss with a learning rate set to 3e-5 and a weight decay to 5e-3. We fine-tuned each model on two NVIDIA 1080 TI GPUs. Due to GPU memory limitations, we set the batch size to 4 with a gradient accumulation of 8.
In addition, we clipped the gradients from the back-propagation algorithm to 2 to improve training stability <cit.>.
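A sketch of this fine-tuning setup with the Hugging Face transformers library is given below, reusing the LateralInhibition module sketched earlier. The checkpoint name and the processor/dataset/collator objects are assumptions made for illustration; only the hyperparameters (learning rate 3e-5, weight decay 5e-3, batch size 4 with gradient accumulation of 8, gradient clipping at 2) follow the text, and Trainer's default AdamW stands in for the Adam optimizer mentioned above.

import torch.nn as nn
from transformers import Wav2Vec2ForCTC, TrainingArguments, Trainer

# Assumed VoxPopuli-100k pre-trained checkpoint; vocabulary = Romanian letters + space + CTC blank.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100k-voxpopuli")
hidden = model.config.hidden_size
vocab_size = len(processor.tokenizer)  # processor: an assumed Wav2Vec2Processor with a Romanian tokenizer
model.config.vocab_size = vocab_size

# Replace the fine-tuning dense layer with lateral inhibition followed by a dense layer.
model.lm_head = nn.Sequential(LateralInhibition(hidden, k=10.0),
                              nn.Linear(hidden, vocab_size))

args = TrainingArguments(
    output_dir="rowav2vec2-vp-100k-li",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    weight_decay=5e-3,
    max_grad_norm=2.0,  # gradient clipping for training stability
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, data_collator=collator)
trainer.train()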
ยง RESULTS
ยง.ยง Romanian ASR
We evaluate our models, namely RoWav2Vec2.0-VP-100k (i.e., without lateral inhibition) and RoWav2Vec2.0-VP-100k-LI (i.e., with lateral inhibition), on the test set of three corpora: Spontaneous Speech Corpus (SSC) <cit.>, RSC, and RTASC. Compared with previous works on Romanian ASR, the results of the evaluation regarding WER and character error rate (CER) are listed in Table <ref>. In all our experiments, the decoding phase employs a 4-gram KenLM language model <cit.> trained on the textual part of the corpus for contemporary Romanian language <cit.>.
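One way to script an evaluation of this kind is with pyctcdecode (beam search decoding with a KenLM language model) and jiwer (WER/CER); the language-model file name and the variables below are placeholders rather than the authors' actual setup.

import jiwer
from pyctcdecode import build_ctcdecoder

# labels: output characters in the same order as the model's vocabulary
decoder = build_ctcdecoder(labels, kenlm_model_path="ro_4gram.arpa")  # assumed 4-gram KenLM file

hypotheses = [decoder.decode(logits) for logits in all_logits]  # each logits: (frames, vocab) numpy array
print("WER:", jiwer.wer(references, hypotheses))
print("CER:", jiwer.cer(references, hypotheses))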
Our model with lateral inhibition, trained on the full dataset (i.e., RoWav2Vec2.0-VP-100k-LI-XXL), obtains state-of-the-art performance on the RSC and RTASC corpora, achieving 1.78% WER and 29.64% WER, respectively[The high difference in WER between the two corpora comes from the type of utterances found in them: RSC contains common Romanian words and phonemes, while RTASC has more specific utterances from technology, with many words and phonemes borrowed from the English language.]. It improves the performance of the best Kaldi <cit.>-based ASR system, the Time Delay Neural Network - Recurrent Neural Network (TDNN-RNN) <cit.>, by 1.01% WER on RSC and also the performance of the Romanian DeepSpeech2 model <cit.> on RTASC by 7.57% WER.
However, our proposed models do not improve the performance on the SSC evaluation set, with our best variant (i.e., RoWav2Vec2.0-VP-100k-LI-XXL) falling behind by 2.24% WER compared to the TDNN-RNN architecture. The main driver behind this difference is the shortage of spontaneous speech data in our training corpus compared to the dataset used for training the state of the art. Specifically, the TDNN - Long Short-Term Memory (TDNN-LSTM), the Convolutional Neural Network - TDNN (CNN-TDNN), the TDNN, and the TDNN-RNN were all trained on a dataset with 235 hours of speech, namely 95 hours of read speech data from RSC and 140 hours of dedicated internal spontaneous speech data, similar to the one used in the SSC evaluation set. Meanwhile, we used only 71 hours of spontaneous speech data, approximately half the amount used to train the TDNN-based models.
On the other hand, our training corpus contains a larger amount of read speech data at the expense of spontaneous speech data. Hence, the performance of our best variant on the RSC evaluation set may have benefited from this fact. However, RoWav2Vec2.0-VP-100k-LI-XL still achieves almost state-of-the-art performance with 1.80% WER on RSC, indicating that our methodology has not benefited too much from the increased amount of read speech data on this test set.
Apart from our best model, the rest of the variants performed reasonably well on each evaluation task, given the low amount of available training data. The RoWav2Vec2.0-VP-100k model obtained good results when fine-tuned on the L, XL, and XXL subsets, but the word error rate rapidly increased when the training dataset was switched to the more extreme cases (i.e., the M and S subsets). For instance, on the RSC dataset, the variants fine-tuned on the L, XL, and XXL subsets maintained a fairly good performance, achieving 4.80%, 2.31%, and 2.01% WER, respectively (or 3.95%, 1.80%, and 1.78% WER, respectively, with the lateral inhibition layer). However, the WER increased by more than three times on the RSC M subset and more than eight times on the RSC S subset, with our model obtaining 16.55% and 44.78% WER, respectively (or 13.92% and 35.00% WER with the lateral inhibition layer).
ยง.ยง Lateral Inhibition Layer Improvements
We further analyze the improvements brought by the lateral inhibition in the RoWav2Vec2.0-VP-100k-LI models on all five evaluation subsets. An illustration of the difference in performance obtained by our model fine-tuned on all subsets is depicted in Figure <ref>. We observe that the lateral inhibition layer decreases the error rates of the RoWav2Vec2.0-VP-100k-LI models in all our experiments. We also notice that, on average, the improvements become more significant for the smaller subsets. We believe this results from the increased regularization when the lateral inhibition layer is employed, mainly because it allows the model to focus better on the features of the actual human voice, thereby learning to distinguish the speech from the noise better even when the data is scarce.
We also compute the average relative improvement of the lateral inhibition mechanism to all the RoWav2Vec2.0-VP-100k-LI variants on each evaluated corpus. We depict the results in Figure <ref>. The greatest improvements are achieved on the RSC evaluation subsets, the lateral inhibition layer reducing the WER on average by 17.8% and the CER by 16.1%. The lowest average WER improvement (i.e., 9.0%) is obtained on the RTASC evaluation subsets. Also, the lowest CER improvement (i.e., 11.4%) is obtained on the SSC evaluation subsets. The average improvement over all evaluation subsets is 12.5% for WER and 13.1% for CER.
ยง CONCLUSIONS
Automatic speech recognition for low-resource languages remains an important research direction. In this work, we applied the recently introduced mechanism, namely the lateral inhibition layer, which helps the speech recognition neural networks to better distinguish between the human voice and the surrounding noise. We performed experiments on the Romanian language using the RoWav2Vec2.0-VP-100k-LI models and a custom dataset composed of 300 hours of speech. The results showed that the lateral inhibition layer reduces, on average, the WER by 12.5% over all the evaluated test sets. Furthermore, we achieved state-of-the-art performance on the RSC and RTASC datasets using this mechanism, obtaining 1.78% WER and 29.64% WER, respectively.
Future work considers experimenting with the lateral inhibition layer on languages other than Romanian and an evaluation of a speech dataset containing more than 300 hours. In addition, we intend to fine-tune other variants of the Wav2Vec 2.0 model, pre-trained on various datasets and with different methodologies, to validate that our results generalize beyond the pre-trained variant employed in this work.
ยง ACKNOWLEDGEMENTS
The research has been funded by the University Politehnica of Bucharest through the PubArt program.
|
http://arxiv.org/abs/2306.06033v3
|
20230609165531
|
SoK: Analysis of User-Centered Studies Focusing on Healthcare Privacy & Security
|
[
"Faiza Tazi",
"Archana Nandakumar",
"Josiah Dykstra",
"Prashanth Rajivan",
"Sanchari Das"
] |
cs.HC
|
[
"cs.HC"
] |
SoK: Analysis of User-Centered Studies Focusing on Healthcare Privacy & Security
Faiza Tazi^1, Archana Nandakumar^2, Josiah Dykstra^3, Prashanth Rajivan^2, Sanchari Das^1
University of Denver^1, University of Washington^2, Designer Security^3
=====================================================================================================================================================================================
Sensitive information is intrinsically tied to interactions in healthcare, and its protection is of paramount importance for achieving high-quality patient outcomes. Research in healthcare privacy and security is predominantly focused on understanding the factors that increase the susceptibility of users to privacy and security breaches. To understand further, we systematically review 26 research papers in this domain to explore the existing user studies in healthcare privacy and security. Following the review, we conducted a card-sorting exercise, allowing us to identify 12 themes integral to this subject such as "Data Sharing," "Risk Awareness," and "Privacy." Further to the identification of these themes, we performed an in-depth analysis of the 26 research papers and report insights into the discourse within the research community about healthcare privacy and security, particularly from the user perspective.
ยง MOTIVATION
Security and privacy integration in the healthcare domain is essential to protect patients' data <cit.>, considering medical records include sensitive health and personal information. The healthcare industry is often a prime target for cybercriminals considering that these data sets could contain a plethora of sensitive information such as social security numbers, birth dates, employment information, emergency contacts, and insurance and billing data; these data are also notoriously difficult to monitor or safeguard after a breach <cit.>. Furthermore, healthcare data are lucrative on the black market. Sahi et al. noted that sensitive medical data are sold for an average of $40-50 per record <cit.>. In light of this, and to understand what the research literature studies about healthcare data privacy and security from the user perspective, we conducted a systematic literature review.
ยง METHOD
We conducted a systematic literature review over a corpus of 129 papers, published up to December 10, 2021, reporting user studies focused on the privacy and security of healthcare patients' data. Papers were excluded if they were presented as a work-in-progress (posters, extended abstracts, less than 4 pages long, etc.). We collected papers from seven digital databases: ACM Digital Library (DL), Google Scholar, SSRN, ScienceDirect, IEEE Xplore, PubMed, and MEDLINE. After the initial search to obtain the keywords, we collected the papers using keywords like Healthcare Data Security, Healthcare Data Breach, Healthcare Data Theft, Medical Data Theft, Medical Data Security, Medical Data Breach, Patient Data Security, Patient Data Theft, and Patient Data Breach through the Publish or Perish [https://harzing.com/resources/publish-or-perish] software for retrieving articles from Google Scholar. After removing any duplicate articles we were left with 129 papers. We adapted the study design from prior systematic reviews <cit.>.
After analyzing the full text of the 129 papers, we excluded 49 papers from the set because, although these works mentioned healthcare and data concerns through the privacy and security lenses as a motivational factor, they were not directly focused on the privacy and security of healthcare data. From the remaining n=80 papers, we consolidated the papers which consistently addressed healthcare data privacy and security from various stakeholders' perspectives. We were left with 26 papers on which we conducted a card-sorting exercise involving all authors.
ยง RESULTS
Risk perception: It is challenging to circumscribe the perception of risk, as risks do not have the same meaning for everyone. That is why user studies focusing on risk perception are critical, especially for this subject. Papers were assigned the risk perception label when part or all of the study explored participants' attitudes and opinions on risks related to healthcare data. Risk perception was the most frequent label in our corpus; 61.54% of the papers fell within this category.
Data sharing: 14 papers aimed to understand the perspective of participants on data sharing practices that would be acceptable to patients and beneficial to research communities. Results from these papers indicate that patients support data sharing if it benefits the public, or if the data is shared for personal health purposes. Nonetheless, people still have reservations about the privacy of sensitive data, data breaches, and medical bias.
Electronic Health Records (EHR): We found eight papers in our corpus pertaining to user interactions with EHR. These papers confirm through their results that participants have concerns over privacy and security, and are prudent about using EHR technologies. It was also determined that providers' reassurance positively impact patients' continuous and systematic usage of patient portal software in general and lowers their security concerns.
Risk Awareness: Despite the abundant potential for cyber risk in the healthcare sector <cit.>, there is a startling level of naiveté among some healthcare providers. The results from the 8 papers relevant to risk awareness show that the knowledge levels of providers regarding patient privacy, confidentiality, and data sharing practices are average or lower.
Technology Adoption:
Technology adoption in the healthcare domain is crucial to its development. In this regard, eight papers in our corpus examined factors and inspected participants' requirements to improve user acceptance and adoption of some healthcare technologies. The results reported by these papers reveal that security and privacy aspects bolster the acceptance and adoption of healthcare technologies.
Regulatory Compliance: Seven papers studied the ethical and legal aspects of healthcare data management. These papers mainly assess the HIPAA compliance of participants, as well as the cybersecurity conditions and behavior of healthcare practitioners and organizations. According to the CDC, โThe Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law that required the creation of national standards to protect sensitive patient health information from being disclosed without the patientโs consent or knowledgeโย <cit.>. Notably all the studies here determined that there needs to be more policies and reinforcement of behavior which can impede security.
Individual Differences: Comparisons can be based on experience level, hospital size, marriage status, country of origin, health status, or gender. We found seven papers in our corpus that performed this type of analysis. In particular, Wilkowska and Ziefle show that females and healthy adults demand the highest security and privacy standards compared to males and the ailing elderly <cit.>. A different study
investigated the extent to which security policies impact health information interoperability at different levels within the same hospitalsย <cit.>.
Secure Communications: In the case of healthcare, secure communications is not just a matter of security and privacy, but it can also be a medical concern.
We categorized five papers within this label. Most of these papers have results that show that patients still do not fully trust the existing communications technologies, except for Elger's studyย <cit.> where 85% of the participants had no privacy concerns regarding using a secure SMS system for private medical communications.
Mobile Applications: Three papers were related to mobile applications. These papers evaluate users' perceptions of mobile health apps regarding privacy, security, and quality of care. The results of these papers were somewhat different, where Schnall et al.ย <cit.> found that the majority of their participants were concerned over privacy of their sensitive healthcare data and people having access to their healthcare data. On the other hand, both Giguere et al.ย <cit.> and Richardson and Ancker'sย <cit.> studies found that the majority of participants are unconcerned about privacy when using such apps.
Social Influence:
Three papers in our corpus were categorized as social influence. These papers proved that participants were influenceable. Namely, Moqbel et al.ย <cit.> demonstrated that health professionals' reassurance and encouragement positively impact patients' continuous and systematic usage of patient portal software; not only that but participants were also influenced to lower their security concerns through the same encouragement.
Privacy:
Most of the papers in our corpus touch upon privacy, but three of these papers were directed exclusively towards the privacy of healthcare data. Accordingly, in their study Elgerย <cit.> assesses the knowledge and perceptions of physicians on healthcare data violations of privacy; results show that 11% of the participants recognized all the confidentiality violations in the test cases they were presented with.
Contact Tracing: Only two papers were categorized as contact tracing.
With the emergence of digital contact tracing applications, users have expressed privacy and security concernsย <cit.>. These concerns stem from apprehension of data breaches or having their data collected by government entitiesย <cit.>. However, this did not deter participants from approving COVID-19 contact tracing apps.
We have provided the details of the papers and the themes including the snapshot in the card sorting exercise in the Appendixย <ref>.
ยง OVERVIEW OF SECURITY-FOCUSED USER STUDIES
Study | Goal | Methods | Principal Findings | Labels
<cit.> Understand reasons why hospital employees click on phishing emails Quantitative: partial least squared structural equation modeling
Workload has a significant negative effect on secure behavior
<cit.> Assess participants' attitudes towards privacy and security while using system developed for a medical study Mixed Methods: descriptive statistics + analysis of variance for quantitative data, thematic analysis for qualitative data The majority of participants are unconcerned about privacy and confidentiality when using SMS despite the fact that some participants expressed their concern about possible data leaks
<cit.> Assess HIPAA compliance, cybersecurity conditions and behavior of healthcare practitioners in private practices Quantitative: descriptive statistics 9.9% of the participants confirm they experienced at least one data breach in 2019 24.4% participants claim they have cyber insurance
<cit.> Assess security practices of healthcare organizations Quantitative: Ward's cluster analysis using minimum variance
Participating hospitals were clustered into three clusters: leaders, followers, and laggers. Hospitals prioritize technical security solutions and data privacy over security management processes and performing regular audits
<cit.> Assess nurses' health information security (HIS) practices Quantitative: exploratory + confirmatory factor analysis The participant nurses' HIS intentions are affected by the amount of HIS losses they are able to handle ("coping appraisal") (estimate=-1.477, p<0.01). HIS intentions have a considerable impact on coping appraisal (estimate=0.515, p<0.001)
<cit.> Evaluate the extent to which access to patients' physiological parameters (PPP) in hospitals can infringe on the patients' privacy Quantitative: bivariate analysis Patients need to have control over their own PPPs. Specialists are more trusted than family doctors, nurses, and medical assistants
<cit.> Evaluate physicians' perceptions and understanding of confidentiality and medical data sharing Quantitative: Pearson's correlation + Multiple regression Physicians' mean score for knowledge regarding patient confidentiality and data sharing is 7.34 out of 14 and is positively correlated with their attitudes towards the subject matter which leads to privacy breaches
<cit.> Evaluate users' attitudes towards privacy and security of medical technology Mixed Methods: One-way ANOVA + F-Tests + Spearman's rank correlations for quantitative data and thematic analysis for qualitative data Participants with better health value privacy and security of medical technologies and control over data access more than participant with poor health
<cit.> Evaluate the perceptions of users of mobile health applications regarding privacy, security and quality of care Quantitative: multivariable logistic models + bivariate analysis In 2014 participants were more likely to think that mhealth improves the quality of healthcare, however they were just as concerned about privacy in 2013 (74%) as in 2014 (75%)
<cit.> Evaluate the perceptions of experts on using ML based privacy enhancing technologies (PETs) that enable automated analysis of encrypted healthcare data stored in the cloud Qualitative: thematic analysis Technical experts admonish prudence in trusting ML based PETs Medical experts call for patient safety assurances regarding these tools
<cit.> Investigate the extent at which security policies impact health information interoperability at different levels within the same hospitals Quantitative: logistic models Hospitals with access control implemented in workstations are 44% less likely to encounter technical interoperability (TI) issues. Hospitals using one EMR are 53% less like to encounter TI issues compared with hospitals using numerous EMR systems
<cit.> Assess the influence of healthcare providers' encouragement and patient security concerns in patient portal software continued usage Quantitative: partial least squares structural equation modeling Providers' reassurance and encouragement has a positive impact on patients' continuous use and systematic usage of patient portal software and lowers their security concerns
<cit.> Evaluate users' perceptions and trust factors in patient portal software Quantitative: logistic models Participants who value their portals for managing their healthcare are more likely to trust their portals.
<cit.> Evaluate patients' perceptions of the risks and advantages of linking existing research data sources Quantitative: descriptive statistics 19.7% of the participants are wary about researchers having access to their deidentified data. 90% of the participants are more assured when their unique identifiers were removed from the dataset used for research and linkage
<cit.> Investigate admitting and registration protocols in hospitals in order to establish best practices to curtail medical identity theft Mixed Methods: descriptive statistics for quantitative data, thematic analysis for qualitative data 78.5% of the participants confirmed that patient identities are verified at admission or registration, 91.9% of which use a driver's license. If the patient shows up without proof of identity, 59.5% of the participants affirmed that they provide the service without confirming the identity of the patient
<cit.> Understand the insecure practices within healthcare Qualitative: thematic analysis Three main impediments for security: security viewed as a barrier to patient care and productivity, Ignorance of consequences, dearth of policies and reinforcement of secure behaviour
<cit.> Understand security and privacy practices of physicians' offices' staff Qualitative: phenomenological approach Several insecure behaviours were observed such as password sharing, data left in insecure areas and absence of password use
<cit.> Evaluate the public's perceptions and acceptance of contact tracing technologies Quantitative: descriptive statistics + logistic models + chi-squared tests In March 2020, 68% of participants declared that it was acceptable to grant the government access to citizens' medical records vs only 35% of participants in November of the same year. Acceptance of privacy-intrusive technologies diminished over time during the pandemic.
<cit.> Investigate the public's perceptions about the important concerns in the design of medical information commons (MIC) Qualitative: thematic analysis There needs to be a balance between the benefits of an MIC and the safeguards it implements to keep patients' data private
<cit.> Analyse the outlook of the mental health service users on satisfactory data sharing practices Qualitative: thematic analysis Participants expressed concern over the security and the high risk of large datasets. Participants conveyed the necessity to preserve the privacy and confidentiality of patients while taking into consideration the people who have access to privileged data.
<cit.> Investigate the participants' perceptions on healthcare data sharing process and establishing ways to gain their trust of the process Participants expressed concerns over being identified and security limitations of data sharing systems Participants declared that their primary care providers as well as hospital doctors and nurses should have access to their medical records participants approve and advocate for sharing healthcare data for direct care, but not for social care. Participants expressed concerns over privacy, security limitations and potentially having providers make biased decisions based on information found in their records
<cit.> Examine the factors that contribute to patients' intention of using an HIV mobile healthcare application including security, privacy, trust, risk and usability Qualitative: thematic analysis Participants expressed concerns over privacy and trust of their sensitive healthcare data and the people who would have access to their healthcare data Participants worried about the perceived risks including disclosure, tracking and data leaks
<cit.> Investigate how promises of confidentiality contribute to the participants' willingness to accept health clouds as an infrastructure for healthcare data sharing Quantitative: descriptive statistics + Comparison of means The promise of privacy increases the participants acceptance of health clouds in the case of sensitive and confidential healthcare data on the other hand, no statistical significance was found in the case of non-sensitive medical data
<cit.> Assess the understanding and healthcare data security awareness levels of participants Quantitative: descriptive statistics Participants' knowledge is lacking: (mean=2.6 where the average should be less than 2). Hospital management has the highest security awareness levels (mean=2.0667) while physicians have the lowest (mean=2.9202)
<cit.> Assess the knowledge and perceptions of physicians on healthcare data violations of privacy and confidentiality Quantitative: descriptive statistics + Comparison of means Barely 11% of the participants recognized all the confidentiality violations in the test cases they were presented with
<cit.> Analyze the privacy posture of patients who use secure electronic communication systems (ECS) compared to their perception on usability of these systems Qualitative: thematic analysis Patients use the ECS for subjects they view as unsubstantial and avoid it for intimate or personal details
An Overview of the Security-Focused User Studies Including the Goal of Each Study, Methods, and Principal Findings. The symbols in the "Labels" column refer to the labels derived during the card sorting exercise: Regulatory Compliance, Secure Communication, Data Sharing, EHR, Individual Differences, Risk Awareness, Tech Adoption, Social Influence, Risk Perception, Mobile Healthcare, Privacy, and Contact Tracing
|
http://arxiv.org/abs/2306.02109v1
|
20230603132526
|
Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
|
[
"Owen Queen",
"Thomas Hartvigsen",
"Teddy Koker",
"Huan He",
"Theodoros Tsiligkaridis",
"Marinka Zitnik"
] |
cs.LG
|
[
"cs.LG"
] |
Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
July 31, 2023
============================================================================
Interpreting time series models is uniquely challenging because it requires identifying both the location of time series signals that drive model predictions and their matching to an interpretable temporal pattern. While explainers from other modalities can be applied to time series, their inductive biases do not transfer well to the inherently uninterpretable nature of time series.
We present TimeX, a time series consistency model for training explainers. TimeX trains an interpretable surrogate to mimic the behavior of a pretrained time series model. It addresses the issue of model faithfulness by introducing model behavior consistency, a novel formulation that aligns relations in the latent space induced by the pretrained model with relations in the latent space induced by TimeX. TimeX provides discrete attribution maps and, unlike existing interpretability methods, it learns a latent space of explanations that can be used in various ways, such as to provide landmarks to visually aggregate similar explanations and easily recognize temporal patterns.
We evaluate TimeX on 8 synthetic and real-world datasets and compare its performance against state-of-the-art interpretability methods. We also conduct case studies using physiological time series. Quantitative evaluations demonstrate that TimeX achieves the highest or second-highest performance in every metric compared to baselines across all datasets. Through case studies, we show that the novel components of TimeX show potential for training faithful, interpretable models that capture the behavior of pretrained time series models.
ยง INTRODUCTION
State-of-the-art time series models are high-capacity pre-trained neural networksย <cit.> often seen as black boxes due to their internal complexity and lack of interpretabilityย <cit.>. However, practical use requires techniques for auditing and interrogating these models to rationalize their predictions. Interpreting time series models poses a distinct set of challenges due to the need to achieve two goals: pinpointing the specific location of time series signals that influence the model's predictions, and aligning those signals with interpretable temporal patterns <cit.>. Applying explainers designed for other types of data is difficult, as their inductive biases struggle to adapt to the inherently uninterpretable nature of time series. The dynamic nature and multi-scale dependencies within time series data require temporal interpretability techniques.
Research in model understanding and interpretability developed post-hoc explainers that treat pretrained models as black boxes and do not need access to internal model parameters, activations, and gradients. Recent research, however, shows that such post hoc methods suffer from a lack of faithfulness and stability, among other issues <cit.>. A model can also be understood by investigating what parts of the input it attends to through attention mappingย <cit.> and measuring the impact of modifying individual computational steps within a modelย <cit.>. Another major line of inquiry investigates internal mechanisms by asking what information the model containsย <cit.>. For example, it has been found that even when a language model is conditioned to output falsehoods, it may include a hidden state that represents the true answer internallyย <cit.>. Such a gap between external failure modes and internal states can only be identified by probing model internals. Such representation probing has been used to characterize the behaviors of language models, but leveraging these strategies to understand time series models has yet to be attempted. These lines of inquiry drive the development of in-hoc explainers <cit.> that build inherent interpretability into the model through architectural modifications<cit.> or regularization<cit.>. However, no in-hoc explainers have been developed for time series data. While explainers designed for other modalities can be adapted to time series, their inherent biases do not translate effectively to the uninterpretable nature of time series data and can miss important structures in time series.
Figure: TimeX learns a latent space of explanations along with landmarks to summarize groups of informative temporal patterns in time series.
Explaining time series models is challenging for many reasons. First, large time series data are not visually interpretable, as opposed to imaging or text datasets. Next, time series often exhibit dense informative features, in contrast to more explored modalities such as imaging, where informative features are often sparse. In time series datasets, timestep-to-timestep transitions can be negligible and temporal patterns only show up when looking at time segments and long-term trends. In contrast, in text datasets, word-to-word transitions are informative for language modeling and understanding. Further, time series interpretability involves understanding dynamics of the model and identifying trends or patterns. Another key issue with applying prior methods is that they treat all time steps as separate features, ignoring potential time dependencies and contextual information; we need explanations that are temporally connected and visually digestible. While understanding predictions of individual samples is valuable, the ability to establish connections between explanations of various samples (for example, in an appropriate latent space) could help alleviate these challenges.
Present work
We present TimeX, a novel time series in-hoc explainer that produces interpretable attribution masks as explanations over time series inputs (Figure <ref>). Our codebase is at <https://github.com/mims-harvard/TimeX>.
1 A key contribution of TimeX is the introduction of model behavior consistency, a novel formulation that ensures the preservation of relationships in the latent space induced by the pretrained model, as well as in the latent space induced by TimeX.
2 In addition to achieving model behavior consistency, TimeX offers interpretable attribution maps, which are valuable tools for interpreting the model's predictions, generated using discrete straight-through estimators (STEs), a type of gradient estimator that enables end-to-end training of models.
3 Unlike existing interpretability methods, TimeX goes a step further by learning a latent space of explanations. By incorporating model behavior consistency and leveraging a latent space of explanations, TimeX not only provides discrete attribution maps but also enables visual aggregation of similar explanations and the recognition of temporal patterns.
4 We test our approach on 8 synthetic and real-world time series datasets, including datasets with carefully processed ground-truth explanations to quantitatively benchmark it and compare it to general explainers, state-of-the-art time series explainers, and in-hoc explainers.
ยง RELATED WORK
Model understanding and interpretability
As neural networks have grown, so has the need to help users interpret a network's behavior.
The vast majority of explainable AI research <cit.> has focused on natural language processing (NLP)ย <cit.> and computer vision (CV)ย <cit.>.
Commonly used techniques, such as Integrated Gradients <cit.> and Shapley Additive Explanations (SHAP) <cit.>, and their variants have originated from these domains and gained popularity. XAI has gained significant interest in NLP and CV due to the inherent interpretability of their data. However, this familiarity can introduce confirmation bias <cit.>. Recent works have expanded to other data modalities, including graphs <cit.> and time series <cit.>, as outlined below.
The literature primarily focuses on post-hoc explainability, where explanations are provided for a trained and frozen model's behavior <cit.>. However, saliency maps, a popular approach <cit.>, have pitfalls when generated post-hoc: they are surprisingly fragile <cit.>, and lack sensitivity to their explained models <cit.>. Surrogate-based approaches have also been proposed <cit.>, but these simplified surrogate models fall short compared to the original predictor they aim to explain.
Unlike post-hoc explainability, in-hoc methods aim for inherently interpretable models. This can be accomplished by modifying the model's architecture <cit.>, training procedure using jointly-trained explainers <cit.>, adversarial training <cit.>, regularization techniques <cit.>, or refactoring the latent space <cit.>. However, such models often struggle to achieve state-of-the-art predictive performance, and to date, these methods have seen limited use for time series.
Beyond instance-based explanations Several methods have been proposed to provide users with information beyond a single, instance-based saliency map. Prototype models strive to offer a representative sample or region in the latent space <cit.>. Such methods are inherently interpretable, as predictions are directly tied to patterns in the feature space. Further, explainability through human-interpretable exemplars has been gaining popularity.
Concept-based methods decompose model predictions into human-interpretable concepts.
Many works rely on annotated datasets with hand-picked concepts (e.g., โstripesโ in an image of a zebra).
Relying on access to a-priori defined concepts, concept bottleneck models learn a layer that attributes each neuron to one concept <cit.>.
This limitation has spurred research in concept discovery by composing existing concepts <cit.> or grounding detected objects to natural language <cit.>.
However, the computer vision focus of these works poses limited transfer to other domains like time series.
Time series explainability
In contrast to other modalities, time series often have multiple variables, and their discriminative information is spread over many timesteps.
Building on these challenges, recent works have begun exploring XAI for time series <cit.>.
Many methods modify saliency maps <cit.> or surrogate methods <cit.> to work with time series data.
Two representative methods are WinIT <cit.> and Dynamask <cit.>.
WinIT learns saliency maps with temporal feature importances, while Dynamask regularizes saliency maps to include temporal smoothing.
However, these methods rely on perturbing timesteps <cit.>, causing them to suffer from a lack of faithfulness.
Common perturbation choices in CV, like masking with zeros, make less sense for time series <cit.>.
Perturbed time series may be out-of-distribution for the model due to shifts in shape <cit.>, resulting in unfaithful explanations akin to adversarial perturbation <cit.>.
ยง PROBLEM FORMULATION
Notation
Given is a time series dataset D = (X, Y) = { (x_i, y_i) | i = 1, ..., N } where x_i are input samples and y_i are labels associated to each sample. Each sample x_i ∈ ℝ^(T × d) is said to have T time steps and d sensors. A feature is defined as a time-sensor pair, where the time t and sensor k for input x_i is x_i[t,k]. Without loss of generality, our model is defined for univariate (d=1) and multivariate (d>1) settings. Each y_i ∈ { 1, 2, ..., C } belongs to one of C classes. A classifier model consists of an encoder G and predictor F. The encoder G produces an embedding of input x_i, i.e., G(x_i) = z_i ∈ ℝ^(d_z), while the predictor produces some prediction from the embedding in the form of a logit, i.e., F(G(x_i)) = ŷ_i ∈ [0,1]^C where argmax_j ŷ_i[j] ∈ {1,...,C} is the predicted label. The latent space induced by G is defined as Z, e.g., G: X → Z. We will refer to F(G(·)) as the reference model while G is the reference encoder and F is the reference predictor. An explanation is defined as a continuous map of the features that conveys the relative importance of each feature for the prediction. The explanation for sample x_i is given as E(x_i) ∈ ℝ^(T × d) where for any times t_1, t_2 and sensors k_1, k_2, E(x_i[t_1,k_1]) > E(x_i[t_2,k_2]) implies that x_i[t_1,k_1] is a more important feature for the task than x_i[t_2,k_2].
ยง.ยง problem formulation
TimeX creates an inherently-interpretable surrogate model for pretrained time series models. The surrogate model produces explanations by optimizing two main objectives: interpretability and faithfulness to model behavior. First, we generate interpretable explanations via an attribution map E(x_i) that identifies succinct, connected regions of input that are important for the prediction. To ensure faithfulness to the reference model, we introduce a novel objective for training TimeX: model behavior consistency (MBC). With MBC, a model learns to mimic internal layers and predictions of the reference model, yielding a high-fidelity time series explainer. MBC is defined as:
The explanations E and explanation encoder G^E are consistent with the pretrained model G and predictor F on dataset D if the following two requirements are satisfied:
* [Consistent reference encoder]: Relationship between z_i = G(x_i) and z_j = G(x_j) in the space of the reference encoder is preserved by the explainer, z^E_i = G^E(E(x_i)) and z^E_j = G^E(E(x_j)), such that: D_Z(z_i, z_j) ≈ D_Z^E(z^E_i, z^E_j) for samples x_i, x_j ∈ D.
* [Consistent reference predictor]: Relationship between reference predictor ŷ_i = F(z_i) and latent explanation predictor ŷ^E_i = F(z^E_i) is preserved, ŷ_i ≈ ŷ^E_i for every sample x_i ∈ D.
Our central problem formulation is defined as realizing the MBC between a reference model and an interpretable model:
Given are pretrained time series encoder G and predictor F that are trained on a time series dataset D. TimeX provides explanations E(x_i) for every sample x_i ∈ D in the form of interpretable attribution maps. These explanations satisfy model behavior consistency through the latent representation space of explanations Z^E generated by the explanation encoder G^E.
TimeX is designed to counter several challenges in interpreting time series models. First, TimeX avoids the pitfall known as the occlusion problem <cit.>. Occlusion occurs when some features in an input x_i are perturbed in an effort to make the predictor forget those features. Since it is well-known that occlusion can produce out-of-distribution samples <cit.>, this can cause unpredictable shifts in the behavior of a fixed, pretrained model <cit.>. In contrast, TimeX avoids directly masking input samples to G. First, TimeX trains an interpretable surrogate G^E to match the behavior of G. Second, MBC is designed to improve the faithfulness of TimeX to G. By learning to mimic multiple states of F(G(·)) using the MBC objective, TimeX learns highly-faithful explanations, unlike many post-hoc explainers that provide no explicit optimization of faithfulness. Finally, TimeX's explanations are driven by learning a latent explanation space, offering richer interpretability data.
ยง METHOD
We now present TimeX, an approach to train an interpretable surrogate model to provide explanations for a pretrained time series model. TimeX learns explanations through a consistency learning objective where an explanation generator H^E and explanation encoder G^E are trained to match intermediate feature spaces and the predicted label space. We will break down TimeX in the following sections by components: H^E, the explanation generator, G^E, the explanation encoder, and the training objective of G^E(H^E(·)), followed by a discussion of practical considerations of TimeX. An overview of TimeX is depicted in Figure <ref>.
ยง.ยง Explanation generation
Generating an explanation involves producing a mask M_X where if M_X[t_1,k_1] > M_X[t_2,k_2], then M_X[t_1,k_1] is considered as more important for the prediction than M_X[t_2,k_2]. Explanation generation is performed through an explanation generator H^E: X → p ∈ [0,1]^(T × d). We learn p based on a procedure proposed by <cit.>, but we adapt their procedure for time series. Intuitively, p parameterizes a Bernoulli at each time-sensor pair, and the mask M_X is sampled from this Bernoulli distribution during training, i.e., M_X ∼ ℙ_p(M_X | X) = ∏_{t,k} Bern(p_{t,k}). This parameterization is directly interpretable as attribution scores: a low p_{t,k} means that time-sensor pair (t,k) has a low probability of being masked-in. Thus, p is also the explanation for x_i, i.e., E(x) = p.
The generation of p is regularized through a divergence with Bernoulli distributions Bern(r), where r is a user-chosen hyperparameter. Denote the desired distribution of p as ℚ(M_X) = ∏_{(t,k)} Bern(r). Then the objective becomes:
ℒ_m(p) = 𝔼[D_KL(ℙ_p(M_X | X) || ℚ(M_X))] = ∑_{t,k} p_{t,k} log(p_{t,k}/r) + (1 - p_{t,k}) log((1 - p_{t,k})/(1 - r))
The sampling of M_X ∼ ℙ_p(M_X | X) is performed via the Gumbel-Softmax trick <cit.>, which is a differentiable approximation of categorical sampling. Importantly, M_X is stochastically generated, which, as discussed in <cit.>, regularizes the model to learn robust explanations.
To generate interpretable attribution masks, TimeX optimizes for the connectedness of predicted distributions:
ℒ_con(p) = 1/(T × d) ∑_{k=1}^{d} ∑_{t=1}^{T-1} √((p_{t,k} - p_{t+1,k})^2).
The generator of explanations H^E learns directly on input time series samples X to return p. We build a transformer encoder-decoder structure for H^E, using an autoregressive transformer decoder and a sigmoid activation to output probabilities for each time-sensor pair.
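As an illustration of these two regularizers, a minimal PyTorch sketch is shown below; the (batch, T, d) layout, the eps constant, the default r=0.5 (r is user-chosen in the text), and the batch averaging are our assumptions rather than the exact implementation.

import torch

def mask_regularizers(p, r=0.5, eps=1e-8):
    # p: Bernoulli mask probabilities of shape (batch, T, d); r: prior keep probability
    # L_m: KL divergence between Bern(p_{t,k}) and Bern(r), summed over time-sensor pairs
    l_m = (p * torch.log((p + eps) / r)
           + (1 - p) * torch.log((1 - p + eps) / (1 - r))).sum(dim=(1, 2)).mean()
    # L_con: connectedness penalty on adjacent time steps, averaged over time and sensors
    l_con = torch.sqrt((p[:, :-1, :] - p[:, 1:, :]) ** 2 + eps).mean()
    return l_m, l_con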
ยง.ยง Explanation encoding
We now describe how to embed explanations with the explanation encoder G^E. Intuitively, G^E learns on the masked distribution of X, which can be denoted as X^m. Motivated by the occlusion problem, we avoid directly applying the masks onto the pre-trained, frozen G, as X^m and X are fundamentally different distributions. Therefore, we copy the weights of G into G^E
and fine-tune G^E on X^m.
Discretizing attribution masks When passing inputs to G^E, it is important for the end-to-end optimization to completely ignore regions identified as unimportant by H^E. Therefore, we use a straight-through estimator (STE) <cit.> to obtain a discrete mask M_X ∈ {0,1}^(T × d). Introduced by <cit.>, STEs utilize a surrogate function to approximate the gradient of a non-differentiable operation used in the forward pass, such as binarization.
Applying masks to time series samples We use two types of masking procedures: attention masking and direct-value masking. First, we employ differentiable attention masking through a multiplicative operation proposed by Nguyen et al. <cit.>. When attention masking does not apply, based on architecture choice or the use of multivariate inputs, we use a direct-value masking procedure. We approximate a baseline distribution: ℬ_X = ∏_{t,k} N(μ_{t,k}, σ^2_{t,k}), where μ_{t,k} and σ^2_{t,k} are the mean and variance over time-sensor pairs. Masking is then performed through a multiplicative replacement as: x^m_i = (M_X ⊙ x_i) + (1 - M_X) ⊙ b, where b ∼ ℬ_X.
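The sketch below illustrates one way to realize the discrete mask and the direct-value masking step in PyTorch; using Gumbel-Softmax as the straight-through binarization and estimating the baseline statistics over the batch are our assumptions made for illustration.

import torch
import torch.nn.functional as F

def apply_direct_value_mask(x, p, tau=1.0):
    # x: time series batch (B, T, d); p: mask probabilities (B, T, d)
    # Hard 0/1 mask in the forward pass, Gumbel-Softmax gradients in the backward pass (an STE)
    logits = torch.stack([torch.log(1 - p + 1e-8), torch.log(p + 1e-8)], dim=-1)
    m = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]
    # Baseline distribution: Gaussian per time-sensor pair, here estimated over the batch
    mu = x.mean(dim=0, keepdim=True)
    sigma = x.std(dim=0, keepdim=True)
    b = mu + sigma * torch.randn_like(x)
    return m * x + (1 - m) * b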
Justification for discrete masking
It is important that masks M_X are discrete as opposed to continuous. Previous works have considered masking techniques <cit.> with continuous masks. However, continuous masking has a distinctly different interpretation: it applies a continuous deformation of the input towards a baseline value. While such an approach is reasonable for data modalities with discrete structures, such as sequences of tokens (as in <cit.>) or nodes in graphs <cit.>, such deformation may result in a change of the shape of time series data, which is known to be important for prediction <cit.>. As a toy example, consider an input time series x_i where the predictive pattern is driven by feature x_i[t_1,k_1] being larger than all other features. If M_X is continuous, then it is possible that for a less important feature x_i[t_2,k_2], M_X[t_1,k_1] < M_X[t_2,k_2] while (M_X[t_1,k_1] ⊙ x_i[t_1,k_1]) > (M_X[t_2,k_2] ⊙ x_i[t_2,k_2]), thereby preserving the predictive pattern while the mask indicates that x_i[t_2,k_2] is more important than x_i[t_1,k_1]. If a surrogate model is trained on M_X ⊙ x_i, M_X may violate the ordinality expected by an attribution map as defined in Section <ref>. Discrete masking alleviates this issue by forcing M_X to be binary, removing the possibility of confounds created by continuous masking. Therefore, discrete masking is necessary when learning interpretable masks on continuous time series.
ยง.ยง Model behavior consistency
The challenge lies in training G^E(H^E(ยท)) to faithfully represent F(G(ยท)). We approach this by considering the latent spaces of G and G^E. If G considers ๐ฑ_i and ๐ฑ_j to be similar in Z, we expect that a faithful G^E would encode E(๐ฑ_i) and E(๐ฑ_j) similarly. However, directly aligning G and G^E is not suitable due to potential differences in the geometry of the explanation embedding space compared to the full input latent space. To address this, we introduce model behavior consistency (MBC), a novel self-supervised objective that trains the explainer model to mimic the behavior of the original model without strict alignment between the spaces. Denote the latent space induced by G and G^E as Z and Z^E, respectively. The MBC objective is thus defined as:
โ_MBC(Z, Z^E) = โ_๐ณ_i,๐ณ_j โ Zโ_๐ณ_i^E,๐ณ_j^E โ Z^E (D_Z(๐ณ_i, ๐ณ_j) - D_Z^E(๐ณ_i^E,๐ณ_j^E))^2,
where D_Z and D_Z^E are distance functions on the reference model's latent space and the explanation encoder's latent space, respectively. This objective encourages distances to be similar across both spaces, so that Z^E retains a local topology similar to that of Z without performing direct alignment. This is closely related to cycle-consistency loss, specifically cross-modal cycle-consistency loss as in <cit.>. We use cosine similarity for D_Z and D_Z^E throughout the experiments in this study, but any distance can be defined on each respective space.
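As an illustration, the MBC objective with cosine similarity can be written compactly over a mini-batch. This is a sketch, not the exact implementation; it assumes embeddings are given as (N, d) tensors.

```python
import torch
import torch.nn.functional as F

def mbc_loss(z, z_e):
    """Model behavior consistency: pairwise similarities in the reference latent
    space Z should match pairwise similarities in the explanation space Z^E.

    z:   (N, d)   embeddings from the frozen reference encoder G.
    z_e: (N, d_e) embeddings from the explanation encoder G^E.
    """
    d_z = F.cosine_similarity(z.unsqueeze(1), z.unsqueeze(0), dim=-1)       # (N, N)
    d_ze = F.cosine_similarity(z_e.unsqueeze(1), z_e.unsqueeze(0), dim=-1)  # (N, N)
    return ((d_z - d_ze) ** 2).mean()
```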
In addition to MBC, we use a label consistency (LC) objective to optimize the explainer. We train a predictor F^E on Z^E to output logits consistent with those output by F. We use the Jensen-Shannon divergence (D_JS) between the logits of both predictors:
โ_LC(Z, Z^E) = โ_๐ณ_i,๐ณ_j โ Zโ_๐ณ_i^E,๐ณ_j^E โ Z^E(D_JS(F(๐ณ_i) || F(๐ณ_j)) - D_JS(F^E(๐ณ^E_i) || F^E(๐ณ^E_j)))^2
Our total loss function on Z^E can then be defined as a combination of losses: โ_Z^E = โ_MBC + ฮป_LCโ_LC.
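The label consistency term and the combined objective on Z^E can be sketched in the same style (again illustrative; the function names and the reduction by mean are assumptions):

```python
import torch
import torch.nn.functional as F

def js_divergence(logits_a, logits_b):
    """Jensen-Shannon divergence between categorical predictions (broadcasts over pairs)."""
    pa, pb = F.softmax(logits_a, dim=-1), F.softmax(logits_b, dim=-1)
    m = 0.5 * (pa + pb)
    kl = lambda p, q: (p * (torch.log(p + 1e-12) - torch.log(q + 1e-12))).sum(-1)
    return 0.5 * kl(pa, m) + 0.5 * kl(pb, m)

def lc_loss(logits, logits_e):
    """Pairwise JS divergences between reference predictions should match those
    between surrogate predictions. logits, logits_e: (N, C) class logits."""
    d_ref = js_divergence(logits.unsqueeze(1), logits.unsqueeze(0))      # (N, N)
    d_exp = js_divergence(logits_e.unsqueeze(1), logits_e.unsqueeze(0))  # (N, N)
    return ((d_ref - d_exp) ** 2).mean()

# Total objective on Z^E: mbc_loss(z, z_e) + lambda_lc * lc_loss(logits, logits_e)
```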
Consistency learning justification MBC offers three key benefits for explainability.
1 MBC enables consistency optimization across two latent spaces Z and Z^E without requiring that both ๐ฑ_i and E(๐ฑ_i) be encoded by the same model, allowing the learning of E on a separate model F^E(G^E(ยท)) โ F(G(ยท)). This avoids the out-of-distribution problems induced by directly masking inputs to G.
2 MBC provides a comprehensive representation of model behavior for explainer optimization. This is in contrast to perturbation explanations <cit.> which seek a label-preserving perturbation P on F(G(ยท)) where F(G(P(๐ฑ_i))) โ F(G(๐ฑ_i)). By using G(๐ฑ_i) and F(G(๐ฑ_i)) to capture the behavior of the reference model, MBC's objective is richer than a simple label-preserving objective.
3 While MBC is stronger than label matching alone, it is more flexible than direct alignment. An alignment objective, which enforces ๐ณ_i โ๐ณ_i^E, inhibits G^E from learning important features of explanations not represented in Z. The nuance and novelty of MBC are in learning a latent space that is faithful to model behavior while being flexible enough to encode rich relational structure about explanations that can be exploited to learn additional features such as landmark explanations. Further discussion of the utility of MBC is in Appendix <ref>.
ยง.ยง Learning explanation landmarks and training models
Leveraging the explanation latent space, we learn landmark explanations ๐ณ^L โโ^d_z. Such landmarks are desirable as they allow users to compare similar explanation patterns across samples used by the predictor. Landmarks are learned by a landmark consistency loss, and their optimization is detached from the gradients of the explanations so as not to harm explanation quality. Denote the landmark matrix as ๐โโ^n_L ร d_z, where n_L is the number of landmarks (a user-chosen value) and d_z is the dimensionality of Z^E. For each sample explanation embedding ๐ณ^E_i, we use a Gumbel-Softmax STE, denoted GS, to stochastically match ๐ณ^E_i to the nearest landmark in the embedding space. Denote the vector of similarities of ๐ณ^E_i to each landmark as s(๐ณ^E_i, ๐). Then the assimilation A is described as:
A(๐ณ^E_i; ๐) = GS(softmax(s(sg(๐ณ^E_i), ๐))) ๐,
where sg denotes the stop-grad function. The objective for learning landmarks is then โ_MBC(Z, A(Z^E; ๐)), optimizing the consistency between the assimilated prototypes and the reference model's latent space. Landmarks are initialized as a random sample of explanation embeddings from Z^E, but then are allowed to change via gradient descent. After learning landmarks, we can measure the quality of each landmark by the number of ๐ณ^E_i embeddings closest to it in latent space. We filter out any landmarks that are not sufficiently close to any samples (described in Appendix <ref>).
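A minimal sketch of the assimilation step is given below (illustrative only; it passes the similarity vector directly as Gumbel-Softmax logits and uses detach() for the stop-grad, which are simplifying assumptions):

```python
import torch
import torch.nn.functional as F

def assimilate_to_landmarks(z_e, landmarks, tau=1.0):
    """Stochastically snap each explanation embedding to its nearest landmark.

    z_e: (N, d_z) explanation embeddings; landmarks: (n_L, d_z).
    Gradients flow into the landmarks but not into z_e (stop-grad), so landmark
    learning does not perturb the explanations themselves.
    """
    sims = F.cosine_similarity(z_e.detach().unsqueeze(1),        # sg(z^E_i)
                               landmarks.unsqueeze(0), dim=-1)   # (N, n_L)
    onehot = F.gumbel_softmax(sims, tau=tau, hard=True, dim=-1)  # stochastic nearest landmark
    return onehot @ landmarks                                    # (N, d_z) assimilated embeddings
```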
Training
The overall loss function has four components, which we sum to produce our total loss: โ = โ_MBC + ฮป_LCโ_LC + ฮป_E (โ_m + ฮป_conโ_con), where ฮป_LC, ฮป_E, ฮป_conโโ are weights for the label consistency loss, the total explanation loss, and the connective explanation loss, respectively. The model can be optimized in an end-to-end fashion and requires few hyperparameter choices from the user. The user must also choose the r parameter for the explanation regularization. We find that explanation performance is stable across choices of r (as found in <cit.>), so we set r = 0.5 to remain consistent throughout experiments. A lower r value may be chosen if the underlying predictive signal is known to be sparse. In total, we optimize H^E, G^E, and F^E.
ยง EXPERIMENTAL SETUP
Datasets
We design 4 synthetic datasets with known ground-truth explanations: FreqShapes, SeqComb-UV, SeqComb-MV, and LowVar. Datasets are designed to capture diverse temporal dynamics in both univariate and multivariate settings. We employ 4 datasets from real-world time series classification tasks: ECG <cit.> - ECG arrhythmia detection; PAM <cit.> - human activity recognition; Epilepsy <cit.> - EEG seizure detection; and Boiler <cit.> - mechanical fault detection. We define ground-truth explanations for ECG as QRS intervals, which are known regions of ECG signals where arrhythmias can be detected. Such R, P, and T wave intervals are extracted following <cit.>. Dataset details are given in Appendix <ref> and <ref>.
Baselines
We evaluate the method against five explainability baselines. As a general explainer, we use integrated gradients (IG) <cit.>; for recent time series-specific explainers, we use Dynamask <cit.>, and WinIT <cit.>; for an explainer that uses contrastive learning, we use CoRTX <cit.>; and for an in-hoc explainer which has been demonstrated for time series, we use SGT + Grad <cit.>.
Evaluation We consider two approaches. Ground-truth explanations: Generated explanations are compared to ground-truth explanations, i.e., known predictive signals in each input time series sample when interpreting a strong predictor, following established setupsย <cit.>. We use the area under precision (AUP) and area under recall (AUR) curves to evaluate the quality of explanationsย <cit.>. We also use the explanation AUPRC, which combines the results of AUP and AUR. For all metrics, higher values are better. Definitions of metrics are in Appendix <ref>. Feature importance under occlusion: We occlude the bottom p-percentile of features as identified by the explainer and measure the change in prediction AUROC (Sec.ย <ref>). The most important features a strong explainer identifies should retain prediction performance under occlusion when p is high. To control for potential misinterpretations based on the occlusion problem, we include a random explainer reference. Our experiments use transformersย <cit.> with time-based positional encoding. Hyperparameters, experimental, training, and compute details are given in Appendix <ref>.
ยง RESULTS
R1: Comparison to existing methods on synthetic and real-world datasets
Synthetic datasets We compare our method to existing explainers on the task of identifying important signals in time series datasets. Tables <ref>-<ref> show results for univariate and multivariate datasets, respectively. Across univariate and multivariate settings, our method is the best explainer on 10/12 metric-dataset combinations (3 metrics across 4 datasets), with average improvements in explanation AUPRC (10.01%), AUP (6.01%), and AUR (3.35%) over the strongest baselines. Specifically, it improves ground-truth explanation in terms of AUP by 3.07% on FreqShapes, 6.3% on SeqComb-UV, 8.43% on SeqComb-MV, and 6.24% on LowVar over the strongest baseline on each dataset. In all of these settings, AUR is less important than AUP since the predictive signals contain redundant information. Our method achieves high AUR because it is optimized to output smooth masks over time, tending to include entire subsequence patterns rather than sparse portions, which is an important property for human interpretation. We visualize this property with example explanations in Appendix <ref>.
Real-world datasets: arrhythmia detection We demonstrate our method on ECG arrhythmia detection. Its attribution maps achieve state-of-the-art performance for finding the relevant QRS intervals driving the arrhythmia diagnosis, outperforming the strongest baseline by 5.39% (AUPRC) and 9.83% (AUR) (Table <ref>). Integrated gradients achieves a slightly higher AUP, whereas state-of-the-art time series explainers perform poorly.
Notably, our explanations are significantly better in AUR, identifying larger segments of the QRS interval rather than individual timesteps.
Ablation study on ECG data We conduct ablations of our method on the ECG data (Table <ref>). First, we show that the STE improves performance compared to soft attention masking, yielding an AUPRC gain of 9.44%; this validates our claims about the pitfalls of soft masking for time series. Note that the drop in performance without the STE becomes more significant when including direct-value masking, as we show in Appendix <ref>. Second, we use SimCLR loss to align Z^E to Z instead of MBC; SimCLR loss achieves comparable results in AUPRC and AUR, but its AUP is 13.6% lower than the base model. Third, we examine the usefulness of the MBC and LC objectives. MBC alone produces poor explanations, with an AUPRC 65.8% lower than the base model. LC alone does better than MBC alone, but its AUPRC is still 21.5% lower than the base model. MBC and LC in conjunction produce high-quality explanations, showing the value of including more intermediate states for optimizing G^E(H^E(ยท)). Extensive ablations are provided in Appendix <ref>.
R2: Occlusion experiments on real-world datasets
We evaluate explanations by occluding features from the reference model and observing changes in classification <cit.>. Given a generated explanation E(๐ฑ_i), the bottom p-percentile of features are occluded and replaced with baseline values; if the explainer correctly ranks feature importance, classification performance should be largely retained, since only features identified as unimportant are removed. To counter misinterpretation induced by the occlusion problem (Sec. <ref>), we compare the performance under occlusion to random explanations. We adopt the masking procedure described in Sec. <ref>, performing attention masking where applicable and direct-value masking otherwise.
Figure <ref> compares our method to Dynamask, a strong time-series explainer. On all datasets, our explanations perform at or above the level of Dynamask, and both methods perform above the random baseline. On the Boiler dataset, we demonstrate an average of 27.8% better classification AUROC across thresholds compared to Dynamask, with up to 37.4% better AUROC at the 0.75 threshold. This performance gap is likely because the underlying predictor for Boiler is weaker than those for Epilepsy or PAM, achieving 0.834 AUROC compared to 0.979 for PAM and 0.939 for Epilepsy. We hypothesize that our method outperforms Dynamask because Dynamask considers only changes in predicted labels under perturbation, whereas our method optimizes for consistency across both labels and embedding spaces in the surrogate and reference models. Our method performs well across both univariate (Epilepsy) and multivariate (PAM and Boiler) datasets.
R3: Landmark explanation analysis on ECG
To demonstrate the learned landmarks, we show how they serve as summaries of diverse patterns in an ECG dataset. Figure <ref> visualizes the learned landmarks in the latent space of explanations. We choose four representative landmarks based on the previously-described landmark ranking strategy (Sec. <ref>). The landmarks occupy different regions of the latent space, capturing diverse types of explanations generated by the model. For the top two landmarks, we show the three nearest explanations in the latent space. Explanations 1, 2, and 3 are all similar to each other while distinctly different from 4, 5, and 6, both in attribution and temporal structure. This visualization shows how landmarks can partition the latent space of explanations into interpretable temporal patterns.
ยง CONCLUSION
We develop an interpretable surrogate model for interpreting time series models. By introducing the novel concept of model behavior consistency (i.e., preserving relations in the latent space induced by the pretrained model in the latent space induced by the surrogate), we ensure that the surrogate mimics the behavior of a pretrained time series model, aligning influential time series signals with interpretable temporal patterns. The generation of attribution maps and the use of a latent space of explanations distinguish our approach from existing methods. Results on synthetic and real-world datasets, as well as case studies involving physiological time series, demonstrate superior performance compared to state-of-the-art interpretability methods. These components offer promising potential for training interpretable models that capture the behavior of pretrained time series models.
Limitations
While our explainer is not limited to a specific task, our experiments focus on time series classification. It can be used to explain other downstream tasks provided we can access the latent space of the pretrained model, meaning it could be used to examine general pretrained models for time series. However, the lack of such pretrained time series models and the lack of datasets with reliable ground-truth explanations restricted our testing in this area. One limitation of our approach is parameter efficiency, since the explanation-tuned model is optimized separately from the reference model. Larger models may require adopting parameter-efficient tuning strategies.
ยง ACKNOWLEDGEMENTS
We gratefully acknowledge the support of the Under Secretary of Defense for Research and Engineering under Air Force Contract No.ย FA8702-15-D-0001 and awards from NIH under No.ย R01HD108794, Harvard Data Science Initiative, Amazon Faculty Research, Google Research Scholar Program, Bayer Early Excellence in Science, AstraZeneca Research, and Roche Alliance with Distinguished Scientists. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funders. The authors declare that there are no conflict of interests.
ยง FURTHER DISCUSSION OF BACKGROUND
Straight-through estimators Discrete operations, such as thresholding, are often avoided in neural network architectures due to difficulties differentiating discrete functions. To circumvent these issues, <cit.> introduce the straight-through estimator (STE), which uses a surrogate function during backpropagation to approximate the gradient for a non-differentiable operation. STEs have seen usage in quantized neural networks yin2019understanding. This method shows empirical performance even though there is little theoretical justification behind it cheng2019straight.
Self-supervised learning Methods in self-supervised learning (SSL) have become a common pretraining technique for settings in which large, unlabeled datasets are available rani2023self, radford2021learning, chen2020simple, he2020momentum. Common approaches for self-supervised learning are contrastive learning, which seeks to learn representations for samples under invariant data augmentations, and metric learning, which aims to learn a latent space in which a distance function captures some pre-defined relations on data wang2019multi. Consistency learning has emerged as another promising SSL approach; intuitively, this family of methods seeks to learn latent spaces in which similar pairs are expected to be embedded similarly, i.e., preserving some consistent properties. Consistency learning has seen use in aligning videos dwibedi2019temporal, enhancing latent geometry for multimodal contrastive learning <cit.>, and pretraining time series models across time and frequency domains <cit.>.
ยง FURTHER THEORETICAL DISCUSSIONS
ยง.ยง Differentiable attention masking
As described in Section <ref>, we use differentiable attention masking <cit.>, which is defined as follows:
ฮฑ^m = (softmax(๐๐^T/โ(d_k)) โ๐_๐ณ)๐,
where ๐, ๐, ๐ represent the query, key, and value operators, d_k is a normalization factor, and ๐_๐ณ is the mask applied to the self-attention values. This procedure is fully differentiable, and given that ๐_๐ณ is binarized via the STE, it sets to zero all attention values that are to be ignored based on the output of H^E.
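For illustration, this multiplicative attention masking can be sketched for a single head and a single sample (a simplified version; projection layers and batching are omitted):

```python
import torch
import torch.nn.functional as F

def masked_self_attention(q, k, v, attn_mask):
    """Self-attention with a multiplicative, differentiable attention mask.

    q, k, v: (T, d_k) query/key/value projections for one sample.
    attn_mask: (T, T) binary mask (e.g., produced via the STE) that zeroes
    attention toward positions marked as unimportant.
    """
    d_k = q.shape[-1]
    attn = F.softmax(q @ k.transpose(-2, -1) / (d_k ** 0.5), dim=-1)
    attn = attn * attn_mask   # multiplicative masking keeps the operation differentiable
    return attn @ v
```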
ยง.ยง Further discussion on the utility of model behavior consistency
The model behavior consistency (MBC) framework is a method to train an interpretable surrogate model G^E. In Section <ref>, we discuss the conceptual advances of this approach. Here, we outline another advantage of the approach, preserving classification performance, and briefly discuss the broader uses of MBC in other domains and applications.
Training an in-hoc model is often challenging, as the inherent interpretability mechanism can hinder the performance and expressiveness of the method; this is an advantage of post-hoc or surrogate methods. MBC allows one to preserve the performance of the underlying predictor. Our surrogate method keeps the predictions from a pre-trained time series encoder and develops explanations on top of it, which is practical for real-world use when a drop in classification performance is highly undesirable.
MBC is not limited to time series classification tasks. We demonstrate the utility of MBC for time series due to the particularly challenging nature of the data modality and the lack of available time series explainers. However, MBC gives a general framework for learning interpretable surrogate models through learning the H^E and G^E modules. MBC also has the potential to be applied to tasks outside of classification; since MBC is defined on the embedding space, any model with such an embedding space could be matched through a surrogate model as in our framework. This opens the possibility of learning on general pre-trained models or even more complex tasks such as forecasting. Finally, we see MBC as having potential beyond explainability as well; one could imagine MBC being a way to distill knowledge into smaller modelsย xing2022selfmatch,li2022shadow,chen2022knowledge. We leave these discussions and experiments for future work.
ยง.ยง Explanation landmark selection strategy
We proceed to describe how landmarks are selected for final interpretation. As described in Section <ref>, landmarks are initialized with the embeddings given by G for a random subset of training samples. Practically, we stratify this selection across classes in the training set. Landmarks are then updated during the learning procedure. After learning landmarks, not every landmark will be helpful as an explanation; thus, we perform a filtration procedure. Intuitively, this filtration keeps only the landmarks that serve as the nearest landmark for many samples. This procedure is described in Algorithm <ref>.
ยง ADDITIONAL EXPERIMENTS AND EXPERIMENTAL DETAILS
ยง.ยง Description of datasets
We conduct experiments using both synthetic and real-world datasets. This section describes each synthetic and real-world dataset, including how ground-truth explanations are generated when applicable.
ยง.ยง.ยง Synthetic datasets
We employ synthetic datasets with known ground-truth explanations to study the capability to identify the underlying predictive signal. We follow standard practices for designing synthetic datasets, including tasks that are predictive and not susceptible to shortcut learning geirhos2020shortcut induced by logical shortcuts. These principles are defined in <cit.> concerning graphs, but we extend these to synthetic datasets for time series. Each time series is initialized with a nonlinear autoregressive moving average (NARMA) noise base, and then the described patterns are inserted. We will briefly describe the construction of each time series dataset in this section, and the codebase contains full details at https://anonymous.4open.science/r/TimeX-C1AD. We designed four synthetic datasets to test different time series dynamics:
FreqShapes Predictive signal is determined by the frequency of occurrence of an anomaly signal. To construct the dataset, take two upward and downward spike shapes and two frequencies, 10 and 17 time steps. There are four classes, each with a different combination of the attributes: class 0 has a downward spike occurring every 10 time steps, class 1 has an upward spike occurring every 10 time steps, class 2 has a downward spike occurring every 17 time steps, and class 3 has an upward spike occurring every 17 time steps. Ground-truth explanations are the locations of the upward and downward spikes.
SeqComb-UV Predictive signal is defined by the presence of two shapes of subsequences: increasing (I) and decreasing (D) trends. First, two subsequence regions are chosen within the time series so neither subsequence overlaps; each subsequence is 10-20 time steps long. Then, a pattern is inserted based on the class identity; the increasing or decreasing trend is created with a sinusoidal noise with a randomly-chosen wavelength. Class 0 is null, following a strategy in <cit.> that recommends using null classes for simple logical identification tasks in synthetic datasets. Class 1 is I, I; class 2 is D, D; and class 3 is I, D. Thus, the model is tasked with identifying both subsequences to classify each sample. Ground-truth explanations are the I and/or D sequences determining class labels.
SeqComb-MV This dataset is a multivariate version of SeqComb-UV. The construction and class structure is equivalent, but the I and D subsequences are distributed across different sensors in the input. Upon constructing the samples, the subsequences are chosen to be on random sensors throughout the input. Ground-truth explanations are given as the predictive subsequences on their respective sensors, i.e., the explainer is required to identify the time points at which the causal signal occurs and the sensors upon which they occur.
LowVar Predictive signal is defined by regions of low variance over time that occur in a multivariate time series sample. Similar to SeqComb datasets, we choose a random subsequence in the input and, in that subsequence, replace the NARMA background sequence with Gaussian noise at a low variance. The subsequence is further discriminated by the mean of the Gaussian noise and the sensor on which the low variance sequence occurs. For class 0, the subsequence is at mean -1.5 on sensor 0; for class 1, the subsequence is at mean 1.5 on sensor 0; for class 2, the subsequence is at mean -1.5 on sensor 1; for class 3, the subsequence is at mean 1.5 on sensor 1. This task is distinctly different from other synthetic datasets, requiring recognition of a subsequence that is not anomalous from the rest of the sequence. This presents a more challenging explanation task; a simple change-point detection algorithm could not determine the explanation for this dataset.
We create 5,000 training samples, 1,000 testing samples, and 100 validation samples for each dataset. A summary of the dimensions of each dataset can be found in Table <ref>.
ยง.ยง.ยง Real-world datasets
We employ four datasets from real-world time series classification tasks: PAM <cit.> - human activity recognition; ECG <cit.> - ECG arrhythmia detection; Epilepsy <cit.> - EEG seizure detection; and Boiler <cit.> - automatic fault detection.
PAM <cit.> It measures
the daily living activities of 9 subjects with three inertial measurement units. We excluded the ninth subject due to the short length
of sensor readouts. We segment the continuous signals into samples with a time window of 600
and the overlapping rate of 50%. PAM initially has 18 activities of daily life. We exclude the ones associated with fewer than 500 samples, leaving us with eight activities. After modification, the PAM dataset
contains 5,333 segments (samples) of sensory signals. Each sample is measured by 17 sensors
and contains 600 continuous observations with a sampling frequency of 100 Hz.
PAM is labeled into eight classes where each class represents an activity of daily living. PAM does not
include static attributes and the samples are approximately balanced across all eight classes.
MIT-BIH (ECG) <cit.> The MIT-BIH dataset
has ECG recordings from 47 subjects recorded at the sampling rate of 360Hz. The raw dataset was then window-sliced into 92511 samples of 360 timestamps each. Two cardiologists have labeled each beat independently. Of the available annotations, we choose to use three for classification: normal reading (N), left bundle branch block beat (L), and right bundle branch block beat (R). We choose these because L and R diagnoses are known to rely on the QRS interval surawicz2009aha, floria2021incomplete, which will then become our ground-truth explanation (see Section <ref>). The Arrhythmia classification problem involves classifying each fragment of ECG recordings into different beat categories.
Epilepsy <cit.> The dataset contains single-channel EEG measurements from 500 subjects. For every subject, the brain activity was recorded for 23.6 seconds. The dataset was then divided and shuffled (to mitigate sample-subject association) into 11,500 samples of 1 second each, sampled at 178 Hz. The raw dataset features five classification labels corresponding to different states of subjects or measurement locations โ eyes open, eyes closed, EEG measured in the healthy brain region, EEG measured in the tumor region, and whether the subject has a seizure episode. To emphasize the distinction between positive and negative samples, we merge the first four classes into one, and each time series sample has a binary label indicating whether an individual is experiencing a seizure. There are 11,500 EEG samples in total.
Boiler <cit.> This dataset consists of simulations of hot water heating boilers that undergo different kinds of mechanical faults. Various mechanical sensors are recorded over time to derive a time series dataset. The learning task is to detect the mechanical fault of the blowdown valve of each boiler. The dataset is particularly challenging because it includes a large dimension-to-length ratio, unlike the other datasets which contain many more time steps than sensors (Table <ref>).
ยง.ยง Descriptions of baseline methods
We now describe each baseline method in further detail.
IG <cit.> Integrated gradients is a classical attribution method that utilizes the gradients of the model to form an explanation. The method compares the gradients to a baseline value and performs Riemannian integration to derive the explanation. Integrated gradients is a popular data type agnostic interpretability methodย agarwal2022openxai, but it has no inductive biases specific for time series. We use the Captum kokhlikyan2020captum implementation of this method, including default hyperparameters such as the baseline value.
Dynamask <cit.> This explainer is built specifically for time series and uses a perturbation-based procedure to generate explanations. The method performs iterative occlusion of various input portions, learning a mask that deforms the input time series towards a carefully-determined baseline value. This method differs from ours in a few key ways. First, Dynamask performs continuous masking, whereas we perform discrete masking through STEs. Second, it measures perturbation impact on the original model F(G(ยท)), whereas we train a surrogate model G^E to learn the explanations and measure the impact of masking the input. Third, Dynamask learns the explanations iteratively for each sample, whereas we train a surrogate that can then output explanations in one forward pass of H^E.
WinIT <cit.> This explainer is a feature removal explainer, similar to Dynamask. WinIT measures the impact of removing features from a time series on the final prediction value. It removes the impact of certain time intervals, learning feature dependencies across time steps. WinIT uses a generative model to perform in-distribution replacement of masked-out features. WinIT improves on a previous time series explainer, FIT <cit.>, which is a popular baseline in time series explainability literature but is excluded in our work because WinIT is more recent and improves on FIT both conceptually and empirically.
CoRTX <cit.> Contrastive real-time explainer (CoRTX) is an explainer method that utilizes contrastive learning to approximate SHAP <cit.> values. This method was developed for computer vision, but we implement a custom version that works with time series encoders and explanation generators. We include this method because it uses self-supervised learning to learn explanations. Our method also uses a self-supervised objective to learn explanations, but it differs from CoRTX in several ways. First, CoRTX performs augmentation-based contrastive learning while we use MBC, which avoids the definition of negatives and the careful choice of augmentations specific to the data modality. Second, CoRTX fundamentally attempts to approximate SHAP values via a small number of SHAP explanations. In contrast, our method includes a masking system that can produce masks without having to fine-tune a model on a set of explanations derived from an external method. CoRTX has close parallels to ours in using self-supervised learning but is fundamentally different in approach.
SGT + Grad <cit.> Saliency-guided training (SGT), an in-hoc explainer, is based on a modification to the training procedure. During training, features with low gradients are masked out to "guide" the model to focus on regions that are more important for the prediction. The method is not an explainer alone but requires using another post-hoc explainer to derive explanations. In our experiments, we consider saliency explanations, which are recommended by the SGT authors. The authors found that this method can improve performance on time series data. For this reason, we include it as one of our baselines to demonstrate the effectiveness of our method against modern in-hoc explainers.
ยง.ยง Hyperparameter selection
We list hyperparameters for each experiment performed in this work. For the ground-truth attribution experiments (Section <ref>, results R1), the hyperparameters are listed in Table <ref>. The hyperparameters used for the occlusion experiment (Section <ref>, results R2) with real-world datasets are in Table <ref>. We also list the architecture hyperparameters for the predictors trained on each dataset in Tables <ref>-<ref>.
A few abbreviations are used for hyperparameters that are not mentioned in the main text. "Weight decay" refers to an L1 regularization on the model weights; the value for weight decay is equivalent to the weight on that term in the loss function compared to the rest of the loss terms (Section <ref>). "Scheduler?" refers to using a learning rate scheduler that decreases the learning rate by a factor of 10 if a plateau occurs. We use a scheduler that delays decreasing learning rates until after 20 epochs; not every experiment utilizes the scheduler as it is based on which choice yields lower validation loss upon convergence. "Distance norm." refers to a normalization of the distances in โ_MBC; the loss is divided by the variance of the distances on the Z embedding space. ฯ is the temperature parameter used for the Gumbel-Softmax reparameterization <cit.>, Section <ref>. d_h refers to the dimensionality of hidden layers in the transformer predictor. Finally, "Norm. embedding" refers to an architecture choice that normalizes Z when training the predictor; this is used to prevent a poor latent space when a collapse is observed via poor latent space geometry.
A few other notes on implementation and design: the encoders in H^E and G^E use the same architecture size as the predictor on each task. The number of transformer decoder layers is fixed at 2. Please reference the codebase for more details on these hyperparameters and implementations: https://anonymous.4open.science/r/TimeX-C1AD.
ยง.ยง Evaluation details
Following <cit.>, we use AUP and AUR to evaluate the goodness of identification of salient attributes as a binary classification task, which is defined in <ref>:
Let Q be a matrix in {0,1}^T ร d_X whose elements indicate the true saliency of the inputs contained in xโโ^T ร d_X. By definition, Q_t,i = 1 if the feature x_t,i is salient and 0 otherwise. Let M be a mask in [0,1]^T ร d_X obtained with a saliency method. Let ฯโ (0,1) be the detection threshold for m_t,i to indicate that the feature x_t,i is salient. This allows one to convert the mask into an estimator Qฬ_t,i(ฯ) via:
Qฬ_t,i(ฯ) = 1 if m_t,iโฅฯ, and 0 otherwise.
By considering the sets of truly salient indexes and the set of indexes selected by the saliency method:
A = { (t,i) โ [1:T] ร [1:d_X] | q_t,i = 1 }
ร ( ฯ ) = { (t,i) โ [1:T] ร [1:d_X] |qฬ_t,i (ฯ) = 1 } .
we can define the precision and recall curves that map each threshold to a precision and recall score:
P : (0,1) โถ [0,1] : ฯโผ| A โฉร(ฯ) |/|ร(ฯ) |
R : (0,1) โถ [0,1] : ฯโผ| A โฉร(ฯ) |/| A |.
The AUP and AUR scores are the areas under these curves:
AUP = โซ_0^1 P(ฯ) dฯ
AUR = โซ_0^1 R(ฯ) dฯ.
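For reference, these metrics can be approximated numerically by sweeping the threshold ฯ. The sketch below uses NumPy; the number of thresholds and the trapezoidal integration are assumptions.

```python
import numpy as np

def aup_aur(mask, truth, n_thresholds=100):
    """Approximate the areas under the precision and recall curves.

    mask:  (T, d) continuous saliency scores in [0, 1].
    truth: (T, d) binary ground-truth saliency.
    """
    taus = np.linspace(0.0, 1.0, n_thresholds, endpoint=False)[1:]
    precisions, recalls = [], []
    for tau in taus:
        pred = mask >= tau
        tp = np.logical_and(pred, truth > 0).sum()
        precisions.append(tp / max(pred.sum(), 1))
        recalls.append(tp / max((truth > 0).sum(), 1))
    # Integrate P(tau) and R(tau) over tau with the trapezoidal rule.
    return np.trapz(precisions, taus), np.trapz(recalls, taus)
```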
Ground-truth explanations for ECG datasets We extract ground-truth explanations via a QRS detection strategy following <cit.>, since the initial set of beat labels was produced by a simple slope-sensitive QRS detector and then given to two cardiologists, who worked on them independently. The cardiologists added additional beat labels where the detector missed beats, deleted false detections as necessary, and changed the labels for all abnormal beats. We employ Neurokit [https://github.com/neuropsychology/NeuroKit] to extract QRS complexes and also take care to ensure that the QRS is the proper explanation for each class. We consider two types of arrhythmias, left bundle branch block beat and right bundle branch block beat, to categorize our "abnormal" class. We perform the ground-truth evaluation on only the abnormal class, as the normal class signifies negative information, which may be harder to pinpoint based on model logic.
Statistical analysis We evaluate each experiment on a 5-fold cross-validation of each dataset. We then report average performance and standard error across all folds of evaluation for each experiment, which results in the error bars seen in all tables throughout this work.
ยง.ยง Visualization of explanations
Figure <ref> shows an example of our explainer versus IG and Dynamask. Shown is the SeqComb-UV dataset, which has increasing and decreasing subsequences that determine the class label. Each explainer identifies the regions driving the prediction. IG identifies very sparse portions of the predictive region, choosing only one point out of each sequence for the explanation; this is not reasonable when scaling to larger and noisier datasets where the signal might not be as clear. Dynamask seems to miss some important subsequences, identifying only one or two. In contrast, our method identifies a larger portion of the important subsequences due to the connectedness loss in Equation <ref>. This property becomes crucial when scaling to time series datasets with more noise, where it becomes more difficult to intuitively deduce the causal signal through visual inspection.
ยง.ยง Further ablation experiments
We present a more in-depth ablation study of our method on three datasets: FreqShapes (univariate), SeqComb-MV (multivariate), and ECG (real-world). This extends the ablations on the ECG dataset in Section <ref>, R1 in Table <ref>.
Ablation 1: No STE We now conduct an experiment examining the effectiveness of using the STE for training our model. Table <ref> shows the results of this ablation experiment. Using the STE provides over a 17% increase in AUPRC for attribution identification for every dataset. Furthermore, AUR is better when using an STE for every dataset, but the AUP is better for SeqComb-MV without the STE than with the STE. Using the STE also shows benefits in both the univariate (FreqShapes, ECG) and multivariate (SeqComb-MV) settings. In conclusion, the STE provides a noticeable benefit over a continuous masking approach, giving empirical evidence for the claims made in Section <ref>.
Ablation 2: SimCLR vs. MBC We now test a classical SimCLR chen2020simple contrastive learning loss against our proposed model behavior consistency (MBC). The SimCLR objective is designed to decrease the distance between explanation embeddings and embeddings in the reference model's latent space. We do not perform data augmentations as in the original SimCLR work. The SimCLR loss that we use is given as:
โ_SimCLR(Z, Z^E) = 1/Nโ_๐ณ_i โ Z, ๐ณ^E_i โ Z^E -logexp(D(๐ณ_i, ๐ณ^E_i))/โ_j โ iexp(D(๐ณ_j, ๐ณ^E_i))
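This ablation loss can be sketched as follows (illustrative; cosine similarity is assumed for D, matching the distance used for MBC):

```python
import torch
import torch.nn.functional as F

def simclr_alignment_loss(z, z_e):
    """Contrastive alternative to MBC: pull each explanation embedding toward its
    own reference embedding and away from the other samples' reference embeddings.

    z: (N, d) reference embeddings; z_e: (N, d) explanation embeddings.
    """
    sims = F.cosine_similarity(z.unsqueeze(1), z_e.unsqueeze(0), dim=-1)  # sims[j, i] = D(z_j, z^E_i)
    pos = sims.diagonal()                                                 # D(z_i, z^E_i)
    diag = torch.eye(sims.shape[0], dtype=torch.bool, device=sims.device)
    neg = torch.logsumexp(sims.masked_fill(diag, float('-inf')), dim=0)   # over j != i
    return (neg - pos).mean()
```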
For each SimCLR trial, we fixed the number of sampled negatives at 32 and kept all other parameters equal. In addition, an early stopping strategy was performed where the stopping value was based on cosine similarity between explanation embeddings and reference sample embeddings (higher similarity is better).
SimCLR loss provides a valuable objective for training relative to baseline explainers, but MBC optimization produces more robust explanations. SimCLR delivers a slightly better AUPRC for ECG, but its AUPRC values are below those of MBC for FreqShapes and SeqComb-MV. SimCLR loss yields explanations with consistently lower AUP; AUP is closest for SeqComb-MV with only a 3.4% drop from MBC, but it shows a 17.0% decline for FreqShapes and a 13.6% drop for ECG. It is important to note that in addition to increased performance, MBC loss is more computationally efficient than SimCLR loss, avoiding inference on negative samples.
Ablation 3: Effect of MBC and LC losses We now examine the effectiveness of using both model behavior consistency (Eq. <ref>) (MBC) and label consistency (Eq. <ref>) (LC) losses. Table <ref> shows that using LC and MBC in combination is always better than using either one alone. In isolation, LC performs better than MBC, which is expected given its higher correlation with the classification predictions than MBC, which relies on an earlier embedding space. Using both losses results in a powerful explainer that achieves over 27.5% higher AUPRC than MBC or LC alone. MBC and LC work together to capture rich information about the model's behavior, allowing our method to be a state-of-the-art explainer.
ยง.ยง Implementation and computing resources
Implementation We implemented all methods in this study using Python 3.8+ and PyTorch 2.0. In our experiments, we employed the Vanilla Transformer <cit.> as the classification model for all methods. To ensure strong underlying predictors for explainability evaluation, as suggested by Faber et al. <cit.>, we verified that the classification models achieved satisfactory performance on the testing set. Complete classification results are in Table <ref>. We followed the hyperparameters recommended by the respective authors for all baseline methods.
Computational resources We use a GPU cluster with various GPUs, ranging from a 32GB Tesla V100 to a 48GB RTX8000. All models were trained on a single GPU at any given time. The average experiment runtime in this work was around 5 minutes per fold, with ECG taking the longest at approximately 13 minutes per fold when training to convergence.
ยง.ยง Flexible use with different time series architectures
We now study the ability of our method to work with different underlying time series architectures. This means that G and G^E are replaced by an alternative architecture, while H^E remains as described in Section <ref>. Since experiments in the main text are based on transformer architectures, we now use a convolutional neural network (CNN) and a long short-term memory (LSTM) network as the underlying predictors with the following hyperparameters:
* LSTM: 3 layer bidirectional LSTM + MLP on mean of last hidden states
* CNN: 3 layer CNN + MLP on meanpool
Tables <ref> and <ref> show the results of our method against strong baselines with a CNN predictor. It retains the state-of-the-art performance observed for the transformer-based architecture, achieving the best AUPRC on the SeqComb-MV and ECG datasets. However, the performance for FreqShapes saturates at very high values for both our method and IG, making the AUPRC comparison more difficult.
Tables <ref> and <ref> show the results of our method against strong baselines with an LSTM predictor. Our method performs very well on both the FreqShapes and ECG datasets, achieving the highest AUPRC, AUP, and AUR for both. For SeqComb-MV, our method did not converge. However, no explainer performed well on this task, achieving lower results than with the transformer and CNN predictors.
|
http://arxiv.org/abs/2306.07348v1
|
20230612181509
|
Oblique rings from migrating exomoons: A possible origin for long-period exoplanets with enlarged radii
|
[
"Melaine Saillenfest",
"Sophia Sulis",
"Paul Charpentier",
"Alexandre Santerne"
] |
astro-ph.EP
|
[
"astro-ph.EP"
] |
Saillenfest et al.
IMCCE, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Universitรฉ, Universitรฉ de Lille, 75014 Paris, France
[email protected]
Universitรฉ Aix Marseille, CNRS, CNES, LAM, Marseille, France
Universitรฉ de Toulouse, CNRS, IRAP, Toulouse, France
The extremely low density of several long-period exoplanets in mature systems is still unexplained โ with HIP 41378 f being archetypical of this category. It has been proposed that such planets could actually have normal densities but be surrounded by a ring observed approximately face on, mimicking the transit depth of a puffy planet. This configuration would imply that the equator of the planet is nearly perpendicular to its orbit plane, which is at odds with the formation process of gas giants. Yet, in the context of the Solar System planets, it has recently been shown that after gigayears of evolution, the tidal migration of a moon can naturally lead to a very tilted planet with a ring.
As exomoons are expected to be ubiquitous around giant exoplanets, this mechanism may be responsible for the anomalous radii of some observed exoplanets. In preparation for the future discoveries of the PLATO mission, we present a simple method for checking the plausibility of this mechanism for a given exoplanet.
Analytical formulas give the probability density function of the relevant precession harmonics of the planet. For each harmonic, simple criteria set the moon mass and other properties required for the mechanism to operate.
We applied this methodology to HIP 41378 f, and we show that in order to reproduce the observed configuration, a hypothetical former moon should have had a moon-to-planet mass ratio of a few times 10^-4 (i.e. roughly the mass of our Moon) and have migrated over a distance of a few planet's radii on a gigayear timescale. These orders of magnitude match the properties of moons expected to exist around gaseous exoplanets.
We conclude that the migration of a former moon is a viable formation pathway for the proposed ring and tilt of HIP 41378 f. This example strengthens the ring hypothesis and motivates its application to other promising targets.
Oblique rings from migrating exomoons: A possible origin for long-period exoplanets with enlarged radii
Melaine Saillenfest1
Sophia Sulis2
Paul Charpentier3
Alexandre Santerne2
Received 26 April 2023 / Accepted 7 June 2023
=====================================================================================================================
ยง INTRODUCTION
The so-called super-puff exoplanets have moderate masses (typically โฒ 15ย M_โ) but surprisingly large radii (โณ 6ย R_โ), giving them extremely low bulk densities (โฒ 0.3ย g cm^-3; see e.g. ). Although relatively rare, super-puffs form a growing class of exoplanets. Among the puffiest exoplanets with the longest orbital periods, we can cite the iconic HIP 41378 f, Kepler-87 c, Kepler-79 d, Kepler-177 c, and Kepler-51 b, c, and d. Super-puffs must be distinguished from inflated hot Jupiters, which show a correlation between stellar irradiation and radius inflation (see e.g. ). This correlation indicates that hot Jupiters have extended atmospheres connected in some way to their close proximity to the star (see e.g. ). A similar conclusion can be reached for short-period sub-Neptunes <cit.>, but not for distant super-puffs, because they have much cooler equilibrium temperatures and undergo negligible star-planet tidal dissipation.
Initiated by the preprint of <cit.>, the low density of exoplanet HIP 41378 f, in particular, immediately raised much discussion. HIP 41378 f is mature (2.1^+0.4_-0.3ย Gyr; ) and has a long period (542ย days) and low equilibrium temperature (300ย K). Its low density (0.09ยฑ0.02ย g cm^-3) puts this planet among the puffiest exoplanets known to date. Even though other super-puffs are known, most of them are likely young and/or have shorter periods <cit.>. Instead of a radius inflation, <cit.> propose that HIP 41378 f could be a standard Neptune-sized planet surrounded by an inclined opaque ring that would mimic the transit depth of an inflated planet. As no significant distortion is visible in the transit ingress and egress of HIP 41378 f, the hypothetical ring should be optically thick and seen roughly face on. This configuration would imply that the obliquity of the planet[Not to be confused with the stellar obliquity (i.e. the angle between the spin axis of the star and the orbit pole of a given planet). Throughout this article, the term obliquity is exclusively used for the planetary obliquity (i.e. the angle between the spin axis of the planet and its orbit pole).] is nearly 90^โ.
The ring hypothesis was investigated by <cit.> for other super-puff exoplanets. Good candidates are Kepler-87 c, Kepler-79 d, and Kepler-177 c, even though their moderate temperatures โ as that of HIP 41378 f โ do not allow for water ice to exist around them. Therefore, unlike Saturn's ring, their rings would need to be composed of porous rocky particles. According to the results of <cit.>, HIP 41378 f is currently the best candidate for a ring. Its long period would protect a ring against destructive irradiation levels and a strong warp due to the stellar torque; it also results in negligible star-planet tidal dissipation, which means that no particular mechanism would be required for the planet to maintain a large obliquity[High-obliquity equilibrium states also exist for short-period planets <cit.>; however, because of tidal despinning and obliquity damping, their obliquity needs to be continuously forced through dynamical interactions involving several planets (see also ).]. The low eccentricity of HIP 41378 f also guarantees a small level of orbital perturbations for the ring particles.
In order to determine the planets' atmospheric properties and test the ring hypothesis, near-infrared transmission spectra have been acquired for Kepler-51 b and d <cit.>, Kepler-79 d <cit.>, and HIP 41378 f <cit.>. These spectra ended up being featureless, ruling out clear, low-metallicity atmospheres. The ring hypothesis is therefore not contradicted for these planets, but flat spectra can also be produced by high-altitude hazes or high-metallicity atmospheres. In fact, convincing atmospheric models have been put forward for Kepler-51 b and d, as well as Kepler-79 d (see also ). Interestingly, these models of extended atmospheres appear to be inapplicable to HIP 41378 f as it is too massive (M=12ยฑ 3ย M_โ), too cold, and too old.
The question of the possible physical composition of HIP 41378 f was explicitly tackled by <cit.>. The authors show that photoevaporation is not nearly enough to explain the extreme density disparity between planetย f and other planets in the system. Moreover, the observed mass and radius of HIP 41378 f would require an envelope-to-core mass fraction larger than 75% together with a high entropy (e.g. produced by recent collisions). Such a massive envelope is unlikely from the perspective of planetary formation, as it would require runaway gas accretion to have started precisely during the dissipation of the gas disc, and planet HIP 41378 f may not be massive enough anyway to have triggered runaway accretion.
Hence, the ring hypothesis appears to be favoured for HIP 41378 f, and it may apply as well to a restricted number of other observed super-puffs. Tidal rings are confined below the Roche limit, very close to their host planets. As such, they are strongly coupled to the centrifugal bulge of the planets, and they directly materialise their equatorial planes. In order to produce a substantial increase in a planet's transit depth (i.e. a very noticeable super-puff), its ring must be oriented roughly in the sky plane. This means that the planet's spin axis must point roughly along the observer's direction; its obliquity is therefore ฮตโ 90^โ as proposed by <cit.>. Such an exotic configuration may seem questionable from a formation point of view. Because of the angular momentum acquired during gas accretion, gaseous planets are expected to form with low obliquities. The obliquities of the Solar System giant planets are therefore interpreted as strong tracers of their dynamical evolution, and much effort is put into understanding their origin (see e.g. ). In this context, the ring hypothesis for super-puffs would greatly benefit from an underlying mechanism that may be responsible for their unusual configuration. The existence of such a mechanism would not certify whether a given planet does possess a ring or not, but it would show whether known dynamical processes are able to (or are even likely to) produce the proposed configuration.
In the Solar System, a substantial tidal migration of moons has recently been observed to be at play around gaseous planets (see ) โ even though it involves mechanisms of energy dissipation that are vastly different from those responsible for the well known rapid migration of our Moon (see e.g. ). These results have strong implications for the orbital dynamics of moons around gaseous planets, but also for the gigayear-timescale dynamics of planetary spin axes. Indeed, moons affect the spin-axis precession rate of planets in a way that is intimately related to their distance (see e.g. ). The migration of a moon is therefore accompanied with a variation in the planet's spin-axis precession rate. In turn, this variation can drive the planet into a so-called secular spin-orbit resonance, that is, a resonance between the planet's spin-axis precession and one harmonic of its orbital nodal precession. As a matter of fact, this kind of resonances abound in multi-planetary systems. Provided that a planet has a substantially massive migrating moon, it may therefore be guaranteed to encounter one of these resonances sooner or later during its evolution. Once captured in resonance, the still ongoing migration of the moon produces a gradual tilting of the planet's spin axis (unless, as for the Earth, resonances are so numerous that they overlap massively; see ). This phenomenon is probably responsible for the 27^โ obliquity of Saturn <cit.>, and it is predicted to happen to Jupiter in the future <cit.>. It may also have played a role in the tilting of Uranus <cit.>.
When the planet's obliquity reaches ฮตโณ 70^โ, however, regular moons are known to be unstable in some range of distance <cit.>. Interestingly, the migration of a single moon makes the system converge to this unstable zone, putting a dramatic end to the tilting process (see ). At this point, the moon may be ejected or be destructed below the planet's Roche limit, eventually forming a tidal disc of debris. In the latter case, the final state of the system is a ringed planet with very high obliquity. This final state recalls the exotic configuration proposed for super-puff exoplanets. It would therefore be valuable to determine whether this mechanism could apply to them and provide a plausible dynamical background to the ring hypothesis.
In this article, we aim to present a generic methodology to assess whether the migrating-moon mechanism can realistically produce a tilted ring around a given exoplanet. Even though the number of known distant super-puffs is small today, the future PLATO mission <cit.> will considerably increase our knowledge of the population of long-period exoplanets โ including their masses through an intensive radial-velocity follow-up. In this context, we need efficient methods for a routine characterisation of the newly discovered planets and identification of the most interesting targets for follow up. For this reason, we design our methodology to be applicable even if the minimum amount of information about the planetary system is available (masses, periods, and sky-plane inclinations).
The article is organised as follows. In Sect.ย <ref>, we recall the basics of the tilting mechanism. In Sect.ย <ref>, we compute the probability density function of the dominant orbital precession frequencies of a planet, and we present an example of application to the super-puff exoplanet HIP 41378 f. From these results, we estimate in Sect.ย <ref> the mass and migration rate that a moon around this planet would need in order to trigger the full tilting mechanism. In Sect.ย <ref>, we check that the resonance is large enough to enable an adiabatic capture and tilting, and we illustrate this mechanism with numerical simulations. We then discuss our results in Sect.ย <ref> and conclude in Sect.ย <ref>.
ยง BASIC MECHANISM
As shown by <cit.>, the tilting of a planet from a low obliquity ฮต up to ฮตโ 90^โ can be achieved on a gigayear timescale via the tidal migration of a moon. This process occurs through the adiabatic drift of the system along the centre of a secular spin-orbit resonance. In this section, we recall the physical quantities involved and the conditions required to trigger this process.
We write I the orbital inclination of the planet and ฮฉ its longitude of ascending node. We decompose the inclination dynamics of the planet in a quasi-periodic series truncated to N terms:
ฮถ = sinI/2exp(i ฮฉ) = โ_j=1^NS_jexp[i ฯ_j(t)] ,
where S_j is a positive real constant, and ฯ_j(t) = ฮฝ_j t + ฯ_j^(0) evolves linearly over time t with frequency ฮฝ_j. Resonance capture from a low obliquity is possible only for resonances with a harmonic having a negative frequency ฮฝ_j such that |ฮฝ_j|โฉพ p, where
p = 3/2๐ขM_โ/a^3(1-e^2)^3/2J_2/ฯฮป
is the characteristic spin-axis precession rate of the planet. In this expression, ๐ข is the gravitational constant, M_โ is the mass of the star, a and e are the semi-major axis and eccentricity of the planet on its orbit around the star, J_2 is the second zonal gravity coefficient of the planet, ฯ is its spin rate, and ฮป is its normalised polar moment of inertia. The parameters J_2 and ฮป must be defined through the same normalising radius R (which is generally chosen as the equatorial radius of the planet).
The influence of a regular moon on the long-term spin-axis dynamics of the planet can be quantified by its non-dimensional `mass parameter' ฮท defined by
ฮท = 1/2m/Mr_M^2/J_2R^2 ,
where m is the mass of the moon, M is the mass of the planet, and r_M is the following characteristic length[r_M is called `mid-point radius' by <cit.>. It is sometimes defined as the Laplace radius in other publications, either with or without the leading factor 2.]:
r_M^5 = 2M/M_โJ_2R^2ย a^3(1-e^2)^3/2 .
Under the hypothesis that the moon's mass ratio m/M is small (which does not necessarily imply that ฮท is small), <cit.> show that all resonances with a nodal harmonic having a negative frequency ฮฝ_j verifying
pโฉฝ|ฮฝ_j|โฉฝ pฮท/2
can allow the planet's obliquity to grow from ฮต=0^โ to ฮต=90^โ. This condition is illustrated in Fig.ย <ref>. Knowing the harmonics ฮฝ_j of the planet's orbital precession, Eq.ย (<ref>) allows one to compute the minimum mass required for the moon to produce the tilting. As resonances converge to an unstable region, the moon is ultimately lost at the end of the tilting process (see Fig.ย <ref>).
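As a purely illustrative aid (not part of the original analysis), the quantities p, r_M, and ฮท defined above, and hence the admissible frequency window, can be evaluated numerically as follows. All symbols follow the definitions in this section; the function name and the commented example values are placeholders, not fitted parameters of any system.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tilting_window(M_star, M, m, a, e, R, J2, lam, spin_period):
    """Return the spin-axis precession rate p, the mid-point radius r_M, the moon
    mass parameter eta, and the window [p, p*eta/2] of nodal frequencies |nu_j|
    that can drive the obliquity from 0 to 90 degrees. SI units throughout.
    """
    omega = 2.0 * np.pi / spin_period
    p = 1.5 * G * M_star / (a**3 * (1.0 - e**2)**1.5) * J2 / (omega * lam)
    r_M = (2.0 * (M / M_star) * J2 * R**2 * a**3 * (1.0 - e**2)**1.5) ** 0.2
    eta = 0.5 * (m / M) * r_M**2 / (J2 * R**2)
    return p, r_M, eta, (p, 0.5 * eta * p)

# Placeholder call (arbitrary values, for demonstration only):
# p, r_M, eta, window = tilting_window(M_star=2.0e30, M=7.0e25, m=7.0e22,
#                                      a=2.0e11, e=0.05, R=2.5e7,
#                                      J2=0.01, lam=0.2, spin_period=10 * 3600.0)
```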
When Eq. (<ref>) is fulfilled, the adiabatic capture and tilting of the planet within a given resonance requires an adequate hierarchy of timescales. First, we introduce the timescale τ of secular oscillations of the moon around its equilibrium `Laplace plane' (see ) as τ = 2π/κ, where
κ^2 = 9/4 · M_⋆/M · r_M^3/(a^3(1-e^2)^3) · 𝒢M_⋆/a^3 .
An adiabatic capture in resonance requires that τ is much shorter than the spin-axis precession timescale of the planet T = 2π/p; this condition is generally well satisfied in practice.
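A minimal sketch of this timescale comparison, again with assumed HIP 41378 f-like parameters (the same illustrative numbers as in the previous sketches), is given below; it returns τ of the order of a hundred years against a spin-axis precession period of several tens of thousands of years, so τ ≪ T is comfortably met.

```python
import numpy as np

# Adiabaticity check: the moon's Laplace-plane oscillation period tau = 2*pi/kappa
# must be much shorter than the planet's spin-axis precession period T = 2*pi/p.
G, YEAR = 6.674e-11, 3.156e7
M_sun, M_earth, R_earth = 1.989e30, 5.972e24, 6.371e6

M_star, M_pl, R_pl = 1.16 * M_sun, 12.0 * M_earth, 3.7 * R_earth   # assumptions
a, e, J2 = 1.37 * 1.496e11, 0.0, 3.5e-3
omega, lam = 2.0 * np.pi / (17.24 * 3600.0), 0.225                  # Uranus-like

r_M = (2.0 * (M_pl / M_star) * J2 * R_pl**2 * a**3 * (1.0 - e**2)**1.5) ** 0.2
p = 1.5 * G * M_star / (a**3 * (1.0 - e**2)**1.5) * J2 / (omega * lam)
kappa = np.sqrt(2.25 * (M_star / M_pl) * r_M**3 / (a**3 * (1.0 - e**2)**3)
                * G * M_star / a**3)

tau, T = 2.0 * np.pi / kappa / YEAR, 2.0 * np.pi / p / YEAR
print(f"tau ~ {tau:.0f} yr  <<  T ~ {T:.0f} yr")    # ~1e2 yr versus ~5e4 yr
```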
Then, a given observed planet may have been adiabatically tilted via a resonance only if the timescale T_lib of libration inside the resonance is much smaller than the age of the system. For a given secular spin-orbit resonance, the value of T_lib near the resonance centre can be computed as T_lib = 2π/μ, where
μ^2 = (p')^2 (β^2/sin^2ε_0 + β sinε_0) .
In this expression, ε_0 is the planet's obliquity at the resonance centre and p' is a modified version of p that takes into account the presence of the planet's moon (see ). We define the non-dimensional variables γ = χ_1/p' and β = χ_2/p', where
χ_1 = -(ν_k - 2∑_j=1^N ν_jS_j^2) ,
χ_2 = -S_k(2ν_k + ν_kS_k^2 - 2∑_j=1^N ν_jS_j^2) ,
and k is the index in Eq. (<ref>) of the considered resonance. T_lib depends on the distance of the moon through p' and ε_0. However, an upper bound for T_lib is obtained at the time of resonance capture, for which γ^2/3 + β^2/3 = 1 (see ). In this case, p' is equal to p' = (χ_1^2/3 + χ_2^2/3)^3/2, and the planet's obliquity at the centre of the resonance is
cosε_0 = γ - γ^1/3 + √(γ^2 + γ^2/3 - γ^4/3) .
Thanks to these expressions, we can compute T_lib from Eq.ย (<ref>) as a mere function of the planet's orbital dynamics in Eq.ย (<ref>).
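This chain of expressions is easy to script. The sketch below implements it as a single function; the frequencies and amplitudes fed to it are placeholders (the actual orbital series of the planet is discussed in the following sections), and the two auxiliary quantities defined above are simply named chi1 and chi2 in the code. With these placeholder amplitudes the result lands in the few-times-10^5 yr range discussed later, but that agreement should not be over-interpreted.

```python
import numpy as np

def libration_period(nu, S, k):
    """Maximum libration period (at separatrix appearance) for the resonance with
    the k-th harmonic of the orbital series, following the expressions above.
    `nu` are the frequencies [rad/yr] and `S` the amplitudes of the series."""
    nu, S = np.asarray(nu, float), np.asarray(S, float)
    tot = 2.0 * np.sum(nu * S**2)
    chi1 = -(nu[k] - tot)                                    # first auxiliary quantity
    chi2 = -S[k] * (2.0 * nu[k] + nu[k] * S[k]**2 - tot)     # second auxiliary quantity
    p_prime = (np.cbrt(chi1)**2 + np.cbrt(chi2)**2) ** 1.5   # value of p' at capture
    gamma, beta = chi1 / p_prime, chi2 / p_prime
    cos_eps0 = gamma - np.cbrt(gamma) + np.sqrt(gamma**2 + np.cbrt(gamma)**2 - np.cbrt(gamma)**4)
    sin_eps0 = np.sqrt(1.0 - cos_eps0**2)
    mu = p_prime * np.sqrt(beta**2 / sin_eps0**2 + beta * sin_eps0)
    return 2.0 * np.pi / mu

# Placeholder series (NOT the values of the paper's table): frequencies converted
# from ''/yr to rad/yr, with a small assumed amplitude for the resonant s6 term.
arcsec = np.pi / (180.0 * 3600.0)
nu = np.array([-15.7, -136.0]) * arcsec       # [s3, s6], illustrative
S = np.array([3e-3, 5e-4])                    # illustrative amplitudes
print(f"T_lib ~ {libration_period(nu, S, k=1):.3g} yr")   # ~ a few 1e5 yr with these placeholders
```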
ยง ORBITAL PRECESSION MODES OF THE PLANET
To apply this mechanism to a given planet, we need to know its orbital precession spectrum, which depends on planet-planet mutual interactions. However, the masses and orbital elements of exoplanets are generally not well known. For given parameters and their uncertainties, the most simple way to explore the variety of possible long-term orbital solutions is to use the Lagrange-Laplace system (see e.g. ).
ยง.ยง The Lagrange-Laplace proper modes
The Lagrange-Laplace system is a secular theory at second order in eccentricity and inclination. As such, it assumes that all eccentricities and inclinations are small and it neglects the long-term influence of mean-motion resonances. Small mutual inclinations are indeed strongly favoured in multi-planetary systems in which most planets are observed to transit their star. This is the case of HIP 41378, around which the transits of five planets are observed <cit.>. Eccentricities are also expected to be small in multi-planetary systems for stability reasons. Moreover, according to the statistical distribution of multi-planetary systems <cit.> and to theoretical arguments about chaotic diffusion (which leads to the statistical equipartition of angular momentum deficit; see ), planets having small mutual inclinations tend to have small eccentricities, and vice versa. Hence, the use of the Lagrange-Laplace theory is generally justified in this regard for multi-planetary systems. Neglecting the long-term effect of mean-motion resonances may seem more questionable, as many pairs of exoplanets are observed to be close to important resonances (see e.g. ). Yet, the strongest mean-motion resonances in planetary systems โ and those enabling smooth captures โ are of eccentricity type. As such, they mainly affect eccentricities. Here, instead, we are only interested in the inclination degree of freedom of the planets because it is by far the main driver of their long-term spin-axis dynamics. The planets' eccentricity dynamics only enter into play at order three and beyond (see ), so mean-motion resonances can safely be ignored in this analysis.
As above, we describe the nodal precession and inclination dynamics of a planet k in the planetary system through the complex variable
ζ_k = sin(I_k/2) exp(iΩ_k) ,
where I_k is the orbital inclination of planet k and Ω_k is its longitude of ascending node. The Lagrange-Laplace system gives the linear equation of motion
dζ/dt = iB ζ ,
in which ζ is the vector containing the ζ_k variable of all planets and B is a constant matrix that only depends on the masses and semi-major axes of the planets (see e.g. ). The solution of this equation for a given planet k has the form of a quasi-periodic series as in Eq. (<ref>):
ζ_k(t) = ∑_j=1^N_p S_j exp[i(ν_j t + φ_j^(0))] ,
where the number of terms N is equal to the number of planets N_p in the system. Equation (<ref>) is a linear combination of proper modes whose frequencies ν_j are the eigenvalues of the matrix B. As B only depends on the masses and semi-major axes of the planets, this is also the case of the frequencies ν_j. Because of the conservation of total angular momentum, one of the frequencies ν_j is identically equal to zero; the related constant term in Eq. (<ref>) gives the orientation of the system's invariant plane.
Thanks to the fast computation of the solution of the Lagrange-Laplace system (which amounts to a mere matrix inversion), millions of trials can be performed at virtually no cost. In order to explore the distribution of possible values for the frequencies ฮฝ_j, the first step is to draw the masses and semi-major axes of the N_p planets from their respective statistical distributions โ which represent our knowledge of their values. A similar approach was followed by <cit.> in their study of the compact multi-planetary systems observed by Kepler. Each sequence of masses and semi-major axes for the N_p planets represent a possible realisation of the planetary system. In case the mass of a given planet has not been measured, a broad distribution of mass can be adopted (e.g. a uniform distribution in a given interval, or a law drawn for an assumed mass-radius relationship; see below). From a large number of realisations of the planetary system, a histogram for each frequency ฮฝ_j can be computed. These histograms define the possible locations of secular spin-orbit resonances given our current knowledge of the planetary system.
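The sketch below illustrates this Monte Carlo procedure. It builds the Lagrange-Laplace inclination matrix in one standard convention (e.g. Murray & Dermott 2000, Sect. 7.4), which may differ from the exact convention used in this work, and samples its eigenvalues. The masses and periods are placeholder values loosely inspired by HIP 41378 rather than the adopted table values, and no mode identification is attempted here: the sorted eigenvalues only give the overall frequency scale.

```python
import numpy as np
from scipy.integrate import quad

G, M_sun, M_earth = 6.674e-11, 1.989e30, 5.972e24
YEAR, ARCSEC = 3.156e7, np.degrees(1.0) * 3600.0

def laplace_b32_1(alpha):
    """Laplace coefficient b_{3/2}^{(1)}(alpha), computed by direct quadrature."""
    f = lambda psi: np.cos(psi) / (1.0 - 2.0 * alpha * np.cos(psi) + alpha**2) ** 1.5
    return quad(f, 0.0, 2.0 * np.pi)[0] / np.pi

def inclination_matrix(M_star, masses, a):
    """Matrix B of the linear system d(zeta)/dt = i B zeta (Murray & Dermott convention)."""
    n_pl = len(masses)
    n_mean = np.sqrt(G * M_star / a**3)              # mean motions [rad/s]
    B = np.zeros((n_pl, n_pl))
    for j in range(n_pl):
        for k in range(n_pl):
            if j == k:
                continue
            alpha = min(a[j], a[k]) / max(a[j], a[k])
            abar = 1.0 if a[k] < a[j] else alpha     # internal vs external perturber
            coef = 0.25 * n_mean[j] * masses[k] / (M_star + masses[j]) \
                   * alpha * abar * laplace_b32_1(alpha)
            B[j, k] += coef
            B[j, j] -= coef
    return B

rng = np.random.default_rng(1)
periods = np.array([15.6, 31.7, 63.0, 278.0, 369.0, 542.0]) * 86400.0   # s (placeholders)
m_nom = np.array([6.9, 4.4, 7.0, 12.7, 12.0, 12.0])                     # Earth masses (placeholders)
m_sig = np.array([1.0, 1.0, 5.0, 6.0, 5.0, 3.0])

all_eigs = []
for _ in range(500):                       # the paper draws 10^6 realisations; 500 runs in seconds
    M_star = rng.normal(1.16, 0.04) * M_sun
    masses = np.clip(rng.normal(m_nom, m_sig), 0.1, None) * M_earth
    a = (G * M_star * (periods / (2.0 * np.pi))**2) ** (1.0 / 3.0)      # Kepler's third law
    s = np.real(np.linalg.eigvals(inclination_matrix(M_star, masses, a)))
    all_eigs.append(np.sort(s) * YEAR * ARCSEC)                         # arcsec per year

# One eigenvalue is ~0 (angular momentum conservation); the others are negative,
# of the order of -10 to -200 arcsec/yr for these placeholder inputs.
print(np.median(np.array(all_eigs), axis=0))
```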
In practice, the largest values of the Lagrange-Laplace matrix B in Eq.ย (<ref>) are often located along its diagonal (meaning that the planetary system is only weakly coupled); this implies that each planet k has its own dominant proper mode, which appears in Eq.ย (<ref>) as the term with largest amplitude. The frequency of the dominant proper mode of planet k is usually noted s_k. In the context of the Lagrange-Laplace approximation, the quasi-periodic series in Eq.ย (<ref>) contains exactly N=N_p terms and the frequencies ฮฝ_j are each equal to one of the s_k. More generally, the orbital evolution of any planet in a stable system can be written as in Eq.ย (<ref>), but where N tends to infinity and each harmonic ฮฝ_j is a linear combination of the fundamental frequencies of the system (see Sect.ย <ref>). The first few strongest harmonics of the series are however proper modes given by the Lagrange-Laplace approximation; hence, the analysis presented here can be thought of as the dominant component of a more general theory.
While building the histogram for each proper mode s_k, a complication may arise. Indeed, if the masses and semi-major axes of the planets have large uncertainties, the distributions of the various frequencies may overlap. In this case, identifying each eigenvalue ฮฝ_j of matrix B as the correct proper mode s_k requires some caution. As the hierarchy of proper modes depends on the planetary system considered, a specific identification process is required. As an example, we subsequently present the case of the HIP 41378 system.
ยง.ยง Application to the HIP 41378 system
HIP 41378 is a bright F-type star[Also known as K2-93 and EPIC 211311380.] which harbours at least five planets called b, c, d, e, and f <cit.>. Dynamical analysis reveals that planets b and c are slightly off the 2:1 mean-motion resonance, similarly to many Kepler planets (see e.g. ). A tentative detection of a sixth planet, planet g, is reported in the preprint of <cit.>, close to the 2:1 mean-motion resonance with planet c. As of today, only planets b, c, and f have been observed during successive transits (see ) and unambiguously detected in radial velocity <cit.>. Therefore, only planets b, c, and f have secured periods and masses.
Two transits of planet d have been observed by the K2 mission but they are separated by a three-year observation gap, leading to a discrete set of possible periods <cit.>. From stability considerations, and thanks to additional observations by TESS, this discrete set is further reduced to only two likely values (278 and 371 days; see ). In contrast, only one transit of planet e has been observed so far, so its period suffers from large uncertainties. The best period estimate for planet e is 260^+160_-60 days <cit.>. The period of 369±10 days obtained by <cit.> is compatible with this estimate, and it results in a mass of 12±5 M_⊕ for planet e. The mass of planet d, however, is unknown.
As explained in Sect. <ref>, planet HIP 41378 f is a paradigmatic case of distant super-puff. Its period is about 542 days, and it has a radius of 9.2±0.1 R_⊕ and mass 12±3 M_⊕, giving it a bulk density of 0.09±0.02 g cm^-3 <cit.>. Under the ring hypothesis, current data suggests a planet with radius R=3.7±0.3 R_⊕ surrounded by a ring with radius 2.6±0.2 R and inclination 25±4° from the sky plane <cit.>. This new planetary radius yields a bulk planet density of 1.2±0.4 g cm^-3, similar to that of Uranus. The hypothetical equatorial ring provides an indirect measure of the obliquity of the planet, namely[The ring obtained by <cit.> is inclined by i_r = 25±4° from the sky plane and rotated by θ = 95±17° from the transit direction. The spin-orbit obliquity ε of the planet is given by cosε = cos I cos i_r + sin I sin i_r cosθ, where I = 89.97±0.01° is the orbital inclination of HIP 41378 f.] ε = 92±7°.
In order to compute the orbital precession modes of HIP 41378 f, our choice of prior for the masses and semi-major axes of the planets must reflect our partial knowledge of the HIP 41378 system. We sort the planets by increasing orbital periods such that the indexes k=(1,2,3,4,5,6) correspond to the planets (b, c, g, d, e, f). We assume all masses and semi-major axes to have Gaussian distributions centred on the best-fit values of <cit.> given in Tableย <ref>. Planetย d needs a specific treatment: even though its period has tentatively been confirmed by <cit.>, it has still not been detected by the radial velocity method, so its mass is highly uncertain. We choose to remain as agnostic as possible as regards its mass, and draw it from a Gaussian fit to the mass-radius distribution of all known exoplanets having a radius between 3 and 4ย R_โ and a mass measurement. From the Nasa Exoplanet Archive[<https://exoplanetarchive.ipac.caltech.edu>] on date 2022-11-23, we obtain a central mass value of 12.7ย M_โ and a standard deviation of 6.0ย M_โ. The high tail of this distribution may not be compatible with radial velocity measurements; yet, this broad interval gives us confidence that the actual mass of planetย d is contained in our analysis. The low tail of the distribution (from which we cut the portion <0.1ย M_โ) corresponds to cases in which planetย d barely exists at all. The system may also contain additional massive planets that have not been discovered yet. Hence, we stress that the analysis below represents our current knowledge of the system and it may need to be revisited in the future.
For the HIP 41378 system as considered in Tableย <ref>, a look at the diagonal and off-diagonal values in the Lagrange-Laplace matrix B reveals a peculiar hierarchical configuration. The system is composed of two weakly coupled subsystems: i) the inner subsystem (planetsย 1-2-3) is characterised by planetsย 1 and 3 interacting with each other and affecting the motion of the low-mass planetย 2; and ii) the outer subsystem (planets 4-5-6) is made of the two strongly coupled planetsย 4 and 5, interacting as a whole with planetย 6.
This peculiar hierarchy can be visualised by solving the Lagrange-Laplace system a first time using reasonable values for the parameters. The exact values of the parameters do not matter for now; this first step only serves as a guide to identify the frequencies and choose an adequate naming convention. Figureย <ref> shows an example obtained from the nominal masses and semi-major axes of the planets. We name the proper frequencies according to their qualitative role in the dynamics: s_1 is the precession frequency of planetsย 1 and 3 about their total angular momentum vector; s_2 is the precession frequency of the low-mass planetย 2 under the action of planetsย 1 and 3; s_3 is the slow rigid precession of the inner and outer subsystems (planets 1-2-3 and 4-5-6); s_4 is the precession frequency of planetsย 4 and 5 about their total angular momentum vector; s_5 is identically zero; s_6 is the precession frequency of planetย 6 and planetsย 4-5 about their total angular momentum vector. We stress that all precession modes actually appear in the dynamics of all planets (see Eq.ย <ref>), but this qualitative description gives us a good idea of the relative importance of each term in the orbital evolution of each planet.
In order to compute the probability density function of each frequency s_j given our current knowledge of the planetary system, we drew 10^6 realisations of the star's mass and planets' masses and semi-major axes. For each of these realisations, we computed the eigenvalues of the Lagrange-Laplace matrix B and identified them with the frequencies s_j according to their qualitative role described above. In practice, this identification can be made by choosing fictitious initial conditions ζ_k(t=0) designed to magnify the specific term we are looking for. For instance, the frequency s_3 would appear as strongly dominant for all planets if we set ζ_k(t=0)=0 for k={1,2,3} and ζ_k(t=0)=√2/2 for k={4,5,6}. Then, one may identify s_2 as the dominant term in the solution of planet 2 by setting ζ_2(t=0)=√2/2 and ζ_k(t=0)=0 for k≠2, etc. This way, all frequencies can be correctly identified one by one. Moreover, we remind the reader that the frequencies s_j only depend on the masses and semi-major axes of the planets, so they do not depend on the fictitious initial conditions chosen here, and they are not plagued with our ignorance of the actual orientations of the planets' orbital planes.
Figure <ref> shows the frequency distribution for each inclination proper mode obtained from our 10^6 realisations of the system. Frequency s_4 has a broad distribution due to the large uncertainties in the masses of planets d and e. Frequency s_3, on the contrary, is very peaked, which means that the hierarchy of the two subsystems is a robust property of the HIP 41378 system, unless it contains additional massive planets yet to be discovered. In order to quantify the relative importance of each parameter in the value of each frequency, a correlation analysis can be performed on our large sample of realisations. Here, the small spread in frequency s_3 turns out to be essentially due to the uncertainty in the mass of planet d (see Appendix <ref>).
As illustrated in Fig. <ref>, frequency s_3 is expected to have a strong contribution in the motion of all planets. Next to it, the dominant inclination proper mode of planet f has frequency s_6. This frequency would produce a strong (if not the strongest) secular spin-orbit resonance for this planet. Figure <ref> shows that despite observational uncertainties, frequency s_6 has a relatively peaked distribution. Its most probable value is −136″ year^-1, with 68.3% occurrences within [−181,−97]″ year^-1, 95.4% occurrences within [−241,−65]″ year^-1, and 99.7% occurrences within [−405,−35]″ year^-1. As shown in Appendix <ref>, the value of s_6 is essentially set by the mass of the perturbing planet e, with a Spearman correlation coefficient ρ_S ≈ −0.8. The value of s_6 is only weakly (|ρ_S| ≲ 0.3) correlated with the parameters of planet f itself. This low correlation allows us to investigate different values for the frequency s_6 independently of the mass and semi-major axis of planet f (that we fix, from now on and in the rest of the article, to their nominal values in Table <ref>).
ยง PROPERTIES OF THE HYPOTHETICAL FORMER MOON
Knowing the dominant harmonics in the orbital precession of a planet, Eq. (<ref>) gives the conditions required to tilt the planet and form a ring through the tidal migration and disruption of a moon. In addition to the mass and orbital elements of the planet, Eq. (<ref>) depends on the planet's normalising radius R, its oblateness coefficient J_2, and the product ωλ. For a given super-puff exoplanet, we may assume that the anomalous planet's density is entirely due to the existence of a ring; therefore, the value of R can be chosen so as to produce a conventional bulk density (e.g. that of Uranus or Neptune). In the specific case of HIP 41378 f, <cit.> show that, under the ring hypothesis, its true radius would be 3.7_-0.2^+0.3 R_⊕. Hence, we adopt the value R=3.7 R_⊕ below as our normalising radius.
For given values of the parameters J_2 and ωλ, Eq. (<ref>) provides a direct relation between the frequency ν_j of the resonance and the minimum mass m_min of the former moon. Even though J_2 and ωλ are completely unknown for exoplanets, we know that they are related, and to first approximation J_2 ∝ ω^2 (planets spinning faster are more flattened; see e.g. ). For a given moon mass m, the condition |ν_j| ⩽ pη/2 in Eq. (<ref>) corresponds to a power law J_2 ∝ ω^5/2. Because of the coincidental near match between these two exponents (2 and 5/2), our total ignorance of J_2 and ωλ does not much affect our estimate of m_min: we may just set J_2 and ωλ to realistic values (e.g. obtained from the Solar System planets) and be assured to obtain relevant results, unless the planet has a particularly exotic internal structure which violates J_2 ∝ ω^2. This property is verified in Appendix <ref> in the case of planet HIP 41378 f. As the mass and radius proposed by <cit.> for HIP 41378 f are relatively close to those of Uranus, we choose to apply Eq. (<ref>) using the parameters J_2 and ωλ of Uranus (see e.g. ).
Independently of the resonance considered, Eq. (<ref>) can be fulfilled only if the mass parameter η of the moon is η ⩾ 2. Using the J_2 value of Uranus, this condition translates into m/M ⩾ 1.2×10^-4. This is the absolute minimum mass that the former moon of HIP 41378 f should have had. Above this floor, the minimum mass m_min needed to tilt the planet is proportional to the frequency ν_j of the considered resonance. The top horizontal axis in Fig. <ref> shows the values of m_min computed from Eq. (<ref>) using the parameters J_2 and ωλ of Uranus (the ticks start at 1.2×10^-4 and go from right to left).
The characteristic spin-axis precession rate of HIP 41378 f computed from Eq. (<ref>) is p ≈ 25″ year^-1. According to the left inequality in Eq. (<ref>), this value almost certainly rules out a resonance with frequency s_3, because frequency s_3 sharply peaks at s_3 = −15.7″ year^-1 (see Fig. <ref>). The fact that p > |s_3| means that the s_3 resonance is located in the green portion of Fig. <ref>; therefore no capture from a low obliquity is possible in this resonance whatever the mass of the moon. Frequency s_6, on the contrary, is the closest resonance reachable by HIP 41378 f. This resonance is expected to be strong for planet f, if not the strongest (see Sect. <ref>). Figure <ref> shows that a capture and full tilting within the s_6 resonance requires a moon with minimum mass ratio ranging between about 2×10^-4 and 10×10^-4. This corresponds to an absolute mass ranging roughly between Triton's mass and the mass of our Moon, respectively. More precisely, when the parameters J_2 and ωλ of Uranus are assumed for HIP 41378 f, the value of frequency s_6 = −136^+101_-269 ″ year^-1 obtained in Sect. <ref> translates into a minimum moon mass m_min/M = 6^+13_-5 ×10^-4 (3σ uncertainty).
This mass range seems realistic when viewed in the context of the regular moons of the Solar System giant planets. For comparison, the moon-to-planet mass ratio of Titan is 2×10^-4, and the summed masses of the largest moons of Jupiter and Uranus yield ratios of about 2×10^-4 and 1×10^-4, respectively. This similarity among planets motivated the work of <cit.>, who found that the formation mechanism of moons around the Solar System giant planets may naturally lead to a common mass scaling, with final mass ratios of a few times 10^-4. Yet, these results do not rule out the existence of larger moons, either because of differing external conditions during their formation, or because of different formation processes (see e.g. the discussion by ).
In order to fully incline the planet starting from a low obliquity, the distance that the migrating moon needs to cover depends on the resonance considered, but Fig. <ref> shows that one can expect in general a migration from a_m ≈ 0.5 r_M to 1 r_M. Using the J_2 value of Uranus, Eq. (<ref>) gives a characteristic length r_M ≈ 11 R for planet HIP 41378 f, which implies that the moon would need to migrate from roughly 5 to 10 R. Given that r_M is proportional to J_2^1/5, other realistic values of J_2 may change these distances by a small amount (see discussion in Appendix <ref>).
The HIP 41378 system is 2.1^+0.4_-0.3 Gyr-old <cit.>. As the whole tilting mechanism must have been completed before today, the required migration range for the moon can be translated into a minimum migration rate. In the case of HIP 41378 f, we obtain a migration rate of about 6 cm year^-1 on average. This velocity is comparable to the Moon's migration rate from the Earth <cit.>, and about half of the migration rates of Ganymede from Jupiter <cit.> or Titan from Saturn <cit.>. In order to power this migration through tidal dissipation within the planet, classical formulas with constant parameters (see e.g. ) imply that the planet's dissipation coefficient needs to be higher than k_2/Q ≈ 3×10^-5 for a moon mass m/M = 2×10^-4, and higher than k_2/Q ≈ 6×10^-6 for a moon mass m/M = 10^-3. For comparison, the value measured for Jupiter's satellite Io is k_2/Q = 1.102±0.203 ×10^-5 <cit.>, and the value measured globally for Saturn's main satellites is k_2/Q = 1.59±0.74 ×10^-4 <cit.> with a large spread for individual moons extending to much higher values (see ).
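As an illustration of how such dissipation numbers arise, the sketch below integrates the classical constant-parameter migration law da/dt = 3 (k_2/Q) (m/M) (R/a)^5 n a over an assumed 5 to 10 R range and the 2.1 Gyr age. The planet parameters are the same illustrative assumptions as before; the required k_2/Q comes out within a factor of about two of the values quoted above, the difference reflecting the adopted migration range and formula constants.

```python
import numpy as np

# Time to migrate from a1 to a2 under da/dt = 3 (k2/Q) (m/M) (R/a)^5 n a,
# integrated analytically: t = (2/39) (Q/k2) (M/m) (a2^13/2 - a1^13/2) / (sqrt(GM) R^5).
G, M_earth, R_earth, YEAR = 6.674e-11, 5.972e24, 6.371e6, 3.156e7

M_pl, R_pl = 12.0 * M_earth, 3.7 * R_earth      # assumed planet mass and radius
a1, a2 = 5.0 * R_pl, 10.0 * R_pl                # assumed migration range
age = 2.1e9 * YEAR                              # age of the system

def required_k2_over_Q(m_over_M):
    """k2/Q needed to cover [a1, a2] within the age of the system."""
    t_times_k2Q = (2.0 / 39.0) / m_over_M * (a2**6.5 - a1**6.5) \
                  / (np.sqrt(G * M_pl) * R_pl**5)
    return t_times_k2Q / age

for m_over_M in (2e-4, 1e-3):
    print(f"m/M = {m_over_M:.0e}:  k2/Q >~ {required_k2_over_Q(m_over_M):.1e}")
```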
ยง ADIABATIC RESONANCE CAPTURE
The analysis above shows that when assuming realistic values for the unknown parameters J_2 and ฯฮป, the constraints obtained for the planet HIP 41378 f and its hypothetical former moon match well the properties expected for giant planets and moons (i.e. distance, mass, migration rate, and tidal dissipation), at least when viewed in the context of the Solar System. Yet, in order for a planet to be captured and adiabatically tilted within a given resonance, this resonance must be large enough. The width of secular spin-orbit resonances scales as the square root of the amplitude of the term in the orbital series (see Eq.ย <ref>). The s_6 term is expected to be among the dominant terms for planet HIP 41378 f, but its amplitude may still be small, depending on the mutual inclinations between the planets' orbital planes. In order to compute the mutual inclinations of the planets, we need their orbital inclinations I_k and longitudes of ascending nodes ฮฉ_k.
As shown in Tableย <ref>, the orbital inclinations I_k of transiting planets with respect to the sky plane are tightly constrained from observations, apart from the mirror degeneracy with respect to 90^โ. As for the longitudes of nodes ฮฉ_k in the sky plane, they are not constrained from transit photometry, but we know that their values are likely to be close to each other. Indeed, for a given set of orbital inclinations I_k, mutual inclinations between the planets' orbital planes are minimum if their longitudes of node ฮฉ_k are equal. As a general rule, low mutual inclinations minimise the planets' orbital excitation, and a low orbital excitation is expected in multi-planetary systems for stability reasons.
In systems observed by the transit method, low mutual inclinations are expected also because they maximise the probability of observing several transiting planets. Gravitational interactions produce a precession of the planets' orbital planes, possibly making some of them evolve in and out of transit configuration (see e.g. ). Using the Lagrange-Laplace theory, it is straightforward to compute the fraction of time that a planet spends in and out of transit configuration (see e.g. Fig.ย <ref>). In the HIP 41378 system as described in Tableย <ref>, only the innermost planet may possibly transit 100% of the time, even if we set all the ฮฉ_k values of the planets to be equal. Due to orbital precession, the probability to observe five transiting planets (as today) is 30% at best, and the probability to observe six is lower than 5%. As such, the HIP 41378 system would not be classified as `continually mutually transiting' <cit.>.
The level of orbital excitation of a planetary system can be quantified as a function of the dispersion of their longitudes of ascending node ฮฉ_k in the sky plane. As shown in Appendixย <ref>, allowing for just a few degrees dispersion in ฮฉ_k can increase the amplitude S_j of several modes in Eq.ย (<ref>) by orders of magnitude, drastically reducing transit probabilities. In the HIP 41378 system, the level of dispersion of the planets' longitudes of node ฮฉ_k is therefore likely to be very small, perhaps less than 1^โ, but their actual values are unknown.
Here, we are interested in the possibility for a planet to be captured in secular spin-orbit resonance from a low initial obliquity. In this context, the larger the resonance, the easier the capture (see e.g. ); hence, we actually just need a lower bound for the resonance widths, that is, a lower bound for the amplitudes S_j in Eq.ย (<ref>). If we show that the resonance capture operates flawlessly for this lower bound, then we can be assured that it will operate as well or even better for the true amplitudes S_j. To this aim, we consider that: i) the orbital inclinations of all planets with respect to the sky plane lie on the same side of 90^โ, and ii) all planets have exactly the same longitude of ascending node ฮฉ_k in the sky plane. When applied to the HIP 41378 system, this idealised system gives the solution shown in Tableย <ref> for planetย f.
In order to produce a resonance capture, the migration of the moon must be slow compared to the oscillations of the resonance angle, so that the parameter change is close to the adiabatic regime (see e.g. ). For a given resonance, the oscillation frequency near the resonance centre can be computed through Eq. (<ref>); the frequency scales as the square root of the amplitude S_j. When applying Eq. (<ref>) to HIP 41378 f by considering the orbital series in Table <ref>, one finds that the libration period of the s_6 resonance angle when the separatrix appears is T_lib ≈ 547 000 years. This value is much smaller than the age of the system (2.1^+0.4_-0.3 Gyr; see ). Therefore, even when considering the minimum possible width of the resonance, the available time span is more than enough for the planet to oscillate many times within the s_6 resonance, allowing an adiabatic drift to occur within this resonance.
This point can be verified by performing a numerical integration of the coupled equations of motion of the planet's spin axis and the orbit of its moon. We used the same setting as <cit.>: we integrated the secular equations of <cit.> expanded at quadrupole order, and forced the orbital evolution of the planet with the quasi-periodic series in Tableย <ref>. A typical example of evolution is displayed in Fig.ย <ref>. In this example, the mass of the moon is m/M=7ร 10^-4 (i.e. about the mass of Jupiter's moon Europa), and we made the moon migrate outwards at a constant rate, chosen to emulate a tidal parameter k_2/Qโ 10^-5. For such a tidal parameter, the moon is expected to migrate from a distance a_m=5ย R to a distance a_m=10ย R in about 1.2ย Gyr. The planet was initialised with an obliquity of 0.05ย rad and a random precession phase. The eccentricity of the moon and its inclination with respect to its local Laplace plane were both initialised to 10^-4, with random argument of pericentre and longitude of ascending node. As expected, Fig.ย <ref> shows that the adiabatic capture and tilting in resonance s_6 is guaranteed on a gigayear timescale. Due to the large separation between timescales, the obliquity oscillations of the planet inside the resonance are not even noticeable in the figure, but they build up in the curve width.
When the system reaches the unstable region, the eccentricity of the moon increases rapidly, which produces chaotic jumps in the planet's obliquity. Indeed, near the border of the unstable region, the timescale for the moon's eccentricity to be multiplied by 100 is a few times the characteristic timescale τ defined in Eq. (<ref>). Here, one obtains τ ≈ 100 yr, which means that the eccentricity increase is extremely fast compared to the planet's spin-axis precession timescale (T ≈ 52 000 yr), to the oscillations of the planet inside the resonance (T_lib ≈ 547 000 yr), or to the tidal eccentricity damping of the moon (whose timescale is a few million years; see e.g. ).
The simulation in Fig. <ref> is stopped when the moon's pericentre goes below the Roche limit of the planet. At this point, the moon is expected to be disrupted into pieces which would rapidly reorganise into an equatorial disc confined inside the Roche limit (see e.g. ). As the moon is lost, the planet is suddenly released from any kind of spin-orbit coupling, and its obliquity remains permanently frozen. In the example shown in Fig. <ref>, the final obliquity of the planet is about 77°. This value is roughly compatible at 2σ with the obliquity ε = 92±7° proposed by <cit.>. However, we stress that the final obliquity of the planet is the result of a chaotic phase; its value strongly depends on initial conditions, on the mass of the moon, and on the widths of nearby secular spin-orbit resonances <cit.>. More massive moons and larger resonances increase the obliquity excitation of the planet during the chaotic phase. Due to chaos, obliquity values larger than 90° can be reached, but the detailed exploration of possible outcomes would require a precise knowledge of the orbital dynamics of the planet. Without this knowledge, we can only conclude that the obliquity of the planet ends up within the hatched blue region in Fig. <ref>, that is, between about[The closed-form expression for the border of the unstable region is cos^2ε = (51 + 25√3)/726; see <cit.>.] 70° and 110°.
ยง DISCUSSIONS
ยง.ยง Refining the tilting mechanism
Under the ring hypothesis, we have presented a proof of concept for producing the unusual configuration proposed for super-puff exoplanets through the tidal migration of a former moon. We have considered the effect of a single massive moon on the planet's spin axis dynamics. This does not mean that the planet only had one moon โ we expect it to possibly have many โ but that this big moon gathered most of the mass of the satellite system, similarly to Titan around Saturn. Now that this big moon is lost, the remaining moons (either pre-existing or formed in the debris ring) are expected to be very small and undetectable with current facilities.
The presence of several pre-existing big moons, as the Galilean satellites around Jupiter, would complicate the picture outlined here. Through their mutual gravitational perturbations, several massive moons could either inhibit or facilitate the tilting process (see ). The exploration of this more complicate scenario is out of the scope of this article. More generally, additional work can refine the scenario proposed here for a given target exoplanet, including the efficiency of ring formation, the distribution of possible final obliquities, and the combined effect of several massive moons. However, this level of detail would require an in-depth knowledge of the orbital dynamics of the planetary system.
In the case of HIP 41378 f, confirmed periods and masses are still missing for planetsย d, planetย e, and the candidate planetย g. The analysis presented here reflects our current understanding of the system, and some results may change in case of substantial modifications in the system's hierarchy. Our correlation analysis shows that planetsย d and g are only weakly coupled with the frequency s_6 of the resonance involved. A mass measurement for these planets would therefore not alter much the picture outlined above. However, substantial changes could be produced if future observations reveal a substantially different mass or period for planetย e, or if the system contains an additional outer planet; the calculations presented here should therefore be updated. In this respect, the simplicity of the analytical formulas involved is a great advantage.
ยง.ยง The true nature of super-puffs
Future characterisation of super-puff exoplanets is fundamental to assess the actual nature of their anomalously large radii. Unfortunately, due to the nearly face-on configuration of the proposed ring, an unambiguous detection of the ring by transit photometry or by the Rossiter-McLaughlin effect would be challenging with current instruments <cit.>. Spectroscopic observations are much more promising. Even though the spectra of several super-puffs have been revealed to be featureless in near infrared <cit.>, rings are expected to be transparent in far infrared, which would strongly reduce the transit depth of the planet. As noted by <cit.>, mid-infrared observations by the JWST would be enough to break the degeneracy between high-altitude hazes, a high-metallicity atmosphere, or the ring hypothesis. The nominal JWST mission offers only two opportunities to observe a transit of HIP 41378 f: October 2025 and March 2027. Considering their high scientific value, these opportunities should not be missed. In addition, the high cadence and high photometric resolution of the future PLATO mission may allow small distortions in the transit light curve to be detected (due to the non-zero inclination of the ring with respect to the sky plane and/or to a possible thin inner gap in the ring; see ).
ยง.ยง The rarity of enlarged planets
Due to the generic nature of the mechanism presented here, one may wonder why in this case we do not observe many distant exoplanets with anomalously large radii. This rarity can be explained by several factors. First, the transit and radial-velocity methods are strongly biased towards the detection of short-period exoplanets <cit.>. In this regard, the detection of HIP 41378 f with a period of 542ย days is already an exception (the transit probability is 0.5%). In turn, the long-period planets observed in direct imaging are strongly biased towards young systems, which cannot have gone through the gigayear adiabatic tilting process described here. As of today, this leaves us with only a handful of exoplanet detections for which this mechanism may have played a role.
The second rarity factor is geometric: a strong radius enhancement able to cast suspicion requires a roughly face-on ring. The mechanism proposed here produces a final planetary obliquity more or less equal to 90^โ, which is a necessary condition for observing a transiting face-on ring, but is not sufficient: the precession phase ฯ of the planet must also have an adequate value. For a ring with typical radius 2.5ย R, the increase in transit depth leading to underestimating the planet density by a factor q>10 requires a precession phase within ยฑ 40^โ of the exact face-on configuration (see e.g. ). As shown in Fig.ย <ref>, this occurs about 45% of the time. This fraction is lowered if we consider the ring to have an inner optically thin gap similar to Saturn's ring.
Finally, even though the mechanism described in this article is generic, not all giant planets are expected to reach the final instability phase in only a few gigayears. Depending on the initial configuration of their moons and the geometry of the available resonances, the planet's obliquity may only have time to increase by a few tens of degrees during its lifetime. In the Solar System, which is aged 4.5ย Gyr, only Uranus may have completed the final stage today <cit.>. In contrast, Jupiter is only starting the tilting phase[As Jupiter possesses four massive moons interacting with each other, its tilting process is somewhat different from what is presented here. Jupiter may never be able to reach an obliquity close to 90^โ even if it was given infinite time.] <cit.>, while Saturn is seemingly halfway in <cit.> โ and it may have recently been ejected from resonance (see ).
Hence, even though many exoplanets are probably affected by this mechanism, the conjunction of observational biases, ring geometry, and the long timescales at play drastically reduces the probability of detecting targets as exquisite as HIP 41378 f. In this regard, the future PLATO mission is particularly promising, as its observing strategy is tailored to long-period planets, and it will be accompanied by an intensive radial-velocity follow-up to get accurate planet masses and detect possible non-transiting companions. Hopefully, the PLATO discoveries will enable us to estimate the fraction of the exoplanet population that may have gone through the mechanism described in this article.
ยง CONCLUSION
The apparently enlarged radius of some long-period exoplanets may be due to the presence of a ring observed roughly face-on <cit.>. Despite their unconventional configuration, such hypothetical rings and the nearly 90° obliquity of their host planets can be the natural end state of former migrating moons. This mechanism involves the capture of the planet in secular spin-orbit resonance as the moon migrates away on a gigayear timescale. The planet is then gradually tilted until the moon is destabilised and may be disrupted into a debris disc.
For a given exoplanet, the plausibility of this formation mechanism can be assessed through simple analytical calculations. First, we need to determine the list of secular spin-orbit resonances that may tilt the planet. The frequencies ν_j of the main orbital precession harmonics of the planet can be obtained through the Lagrange-Laplace theory; in this theory, orbital frequencies are the eigenvalues of a matrix which depends only on the masses and spacings of the planets contained in the system. The probability density function of each frequency can be built from numerous realisations of the system (e.g. 10^6 or more) which are sampled according to our uncertainties on the parameters. Simple correlation analysis can then quantify the influence of each planet on the frequency values.
Then, for each frequency ν_j, the simple formula in Eq. (<ref>) gives the minimum mass of a moon that the planet must have in order to trigger an adequate secular spin-orbit resonance. This formula depends on the unknown parameters J_2 and ωλ of the planet, but thanks to the approximate relation J_2 ∝ ω^2, this lack of knowledge only weakly affects the final result. The moon-to-planet mass ratio obtained is the first plausibility check of this dynamical mechanism. Moons with mass ratio m/M ∼ 10^-4 or smaller are expected to be ubiquitous around gaseous planets (see e.g. ). Substantially larger moons cannot be categorically ruled out, but they would require non-generic formation pathways such as captures or giant impacts, and are therefore much less likely (see e.g. ).
A second consistency check is provided by the age of the planetary system considered. The Laplace radius of the planet (see Eq.ย <ref>) sets the distance over which the moon needs to migrate to fully tilt the planet. The migration range obtained must have been covered by the moon in a smaller timespan than the age of the system. As the migration of moons is powered by tidal dissipation inside the planet, the required distance and migration timescale can be translated into a tidal parameter k_2/Q for the planet. Expected values are of the order of 10^-5 to 10^-4 from a Solar System perspective <cit.>.
The last plausibility check is the consistency of timescales between the age of the planetary system and the hypothesis of an adiabatic capture into resonance. An adiabatic capture requires the libration period inside the resonance to be much shorter than the age of the system, such that many oscillations of the resonance angle may possibly have occurred during the tilting of the planet. The characteristic libration frequency is given in Eq. (<ref>); it depends on the frequencies ν_j obtained above, but also on the amplitudes S_j of the corresponding harmonics in the planet's orbital precession spectrum. Using the Lagrange-Laplace theory, the computation of these amplitudes requires knowing the inclinations I_k and longitudes of nodes Ω_k of the planets (e.g. measured in the sky plane). As the libration frequency scales as the square root of the amplitude of the resonant term, only a lower bound for S_j is actually needed. This lower bound can be obtained even if the longitudes of nodes Ω_k of the planets are unknown, allowing one to compute a maximum value for the libration period inside the resonance considered.
We applied this methodology to the planet HIP 41378 f, and found that all consistency checks are fulfilled. In order to tilt the planet through an adequate resonance, the hypothetical exomoon must have had a moon-to-planet mass ratio ranging from m/M ≳ 2×10^-4 to 10×10^-4, that is, a mass comparable to that of Neptune's moon Triton, Jupiter's moon Europa, or to that of our own Moon. Even though such small exomoons are very hard to detect due to the weakness of their observational signals <cit.>, we expect them to be ubiquitous around giant exoplanets. Provided that the exomoon was initially formed at a distance of about 3 to 10 planetary radii (similarly to Jupiter's moons Io or Europa), its outward migration leads to a guaranteed capture of HIP 41378 f in a secular spin-orbit resonance. The migration timescale required for the moon is found to be in line with what is observed in the Solar System, with a corresponding tidal dissipation factor k_2/Q larger than 2×10^-5 (for the smallest possible moon) or larger than about 6×10^-6 (for a bigger moon). Finally, the libration timescale inside the resonance is found to be orders of magnitude smaller than the age of the system (2.1^+0.4_-0.3 Gyr; see ), allowing for the whole tilting mechanism to have possibly occurred.
All these requirements are confirmed by an example of fully coupled numerical integration of the planet's spin axis and the moon's orbit. The planet's spin axis is gradually tilted until its obliquity ε reaches values in the interval [70°,110°], and its moon becomes unstable <cit.>. Due to the short instability timescale of the exomoon (τ ≈ 100 yr in the case of HIP 41378 f), its eccentricity increase is likely to cause catastrophic events, such as collision chains between small inner moons or a tidal disruption of the moon itself when its pericentre goes below the planet's Roche limit (see e.g. ). Hence, we argue that the dynamical mechanism described here, which may be responsible for the tilting of planet HIP 41378 f to an obliquity ε ≈ 90°, can also naturally provide the material for its hypothetical ring.
We stress, however, that even though this dynamical mechanism is physically realistic for HIP 41378 f, this does not imply that it necessarily happened. Planet HIP 41378 f may have had too small and/or too distant moons for the mechanism to operate, and the anomalous transit depth and flat spectrum of this planet may still be due to a particularly tenuous atmosphere covered with high-altitude hazes <cit.>. Yet, our analysis does provide further significance to the high-obliquity ring hypothesis, by showing that such an unusual configuration is not only feasible from a physical point of view, but even expected for some fraction of exoplanets resembling HIP 41378 f, that is, for old and distant exoplanets in multi-planetary systems. As detailed above, checking the plausibility of this mechanism only requires a limited knowledge of the planetary system considered, and this methodology can be applied to other super-puff exoplanets, and in particular to the potential future discoveries of PLATO.
The authors thank the anonymous referee for her/his inspiring comments. This work was supported by the Programme National de Planรฉtologie (PNP) of CNRS/INSU, co-funded by CNES.
ยง CORRELATIONS BETWEEN PARAMETERS AND ORBITAL PRECESSION FREQUENCIES
In a long-term stable planetary system, the orbital motion of the planets can be approximated by quasi-periodic series as in Eq.ย (<ref>), where the frequencies ฮฝ_j are integer combinations of the fundamental frequencies of the system. In the Lagrange-Laplace approximation, the fundamental frequencies s_k of the series governing the inclination dynamics of the planets are the eigenvalues of the matrix B (see e.g. ) which only depends on the masses and semi-major axes of the planets.
The values of the frequencies s_k and their role in the dynamics are intrinsic properties of the matrix B. As the fundamental frequencies reflect the gravitational couplings between the planets, some parameters contribute much more than others in the value of a given frequency; however, this contribution is not linear and it is not obvious a priori which parameters contribute the most. Here, we are mostly interested in the frequencies s_3 and s_6 as defined in Sect.ย <ref>, because they are expected to dominate the inclination dynamics of planet HIP 41378 f. Figuresย <ref> and <ref> show the scatter of the values of s_3 and s_6 as a function of all parameters.
From these scatter plots, it is visible that the value of s_3 mostly depends on M_4, while the value of s_6 mostly depends on M_5. The strength of these correlations can be quantified by Spearman's correlation coefficient ρ_S. The coefficients obtained are given in Figs. <ref> and <ref> for each parameter. The correlations between s_3 and M_4 (ρ_S = −0.85) and between s_6 and M_5 (ρ_S = −0.76) are the strongest by more than a factor of two.
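In practice this amounts to a single call to a rank-correlation routine on the Monte Carlo samples. The sketch below uses synthetic data with an artificial monotonic trend purely to show the mechanics; the real analysis is run on the (parameter, frequency) samples described in the main text, and the numbers used here are not the paper's.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy illustration of the correlation analysis: synthetic samples with a
# monotonic dependence of the frequency on the mass of planet e.
rng = np.random.default_rng(0)
M5 = rng.normal(12.0, 5.0, 10_000)          # mass of planet e [Earth masses], synthetic
noise = rng.normal(0.0, 15.0, 10_000)
s6 = -11.0 * M5 + noise                     # toy monotonic dependence [''/yr]

rho, _ = spearmanr(M5, s6)
print(f"rho_S(M5, s6) = {rho:+.2f}")        # strongly negative, as in the figure above
```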
ยง INFLUENCE OF THE UNKNOWN PHYSICAL PARAMETERS OF THE PLANET
In Sect.ย <ref>, we estimate the properties of a hypothetical former moon needed to tilt a planet from a low obliquity and create a ring of debris. The formulas, however, depend on the unknown parameters J_2 and ฯฮป of the planet. In this section, we consider them as free parameters and study their influence on our results.
The values of J_2 and ωλ are related and depend on the interior properties of the planet. In the simplest case of a homogeneous planet, J_2 and ω are linked through the law of Maclaurin's ellipsoid (see e.g. ), while λ=2/5. This law simplifies to J_2 ∝ ω^2 for a nearly spherical planet, and it can be rewritten as
J_2 ≈ 25/8 · R^3/(𝒢M) · (ωλ)^2 .
Even though planets are not homogeneous, this approximate relation gives an idea of where to look for realistic combinations of parameters in the plane (ฯฮป,J_2) without any assumption on its composition. Figureย <ref> shows that the Solar System giant planets do fall roughly along this curve.
For a given frequency ฮฝ_j in the orbital precession spectrum of a planet, the conditions pโฉฝ |ฮฝ_j| in Eq.ย (<ref>) corresponds to a straight line J_2โฯฮป, whereas the condition |ฮฝ_j|โฉฝ pฮท/2 corresponds to a power law J_2โ(ฯฮป)^5/2. Both conditions can be visualised in Fig.ย <ref> for planet HIP 41378 f using three different values of the frequency ฮฝ_j (most probable value of s_6 and 95.4% bounds; see Sect.ย <ref>). Because the exponent 2 in Eq.ย (<ref>) is close to 5/2, the realistic combinations of J_2 and ฯฮป (which are located in a rough neighbourhood of the dotted curve) follow more or less the level curves of the minimum moon mass m_min. As a consequence, our estimate of the minimum moon mass is not affected much by our total ignorance of the parameters J_2 and ฯฮป. Whatever realistic values are chosen, we obtain a minimum moon mass m_min/Mโ 6ร 10^-4 for the most probable value of s_6 (Fig.ย <ref>b), with a dispersion at 95.4% ranging from m_min/Mโ 3ร 10^-4 (Fig.ย <ref>a) to m_min/Mโ 1ร 10^-3 (Fig.ย <ref>c). These values are similar to those obtained in Sect.ย (<ref>) using the parameters of Uranus.
The value of characteristic length r_M, however, depends on J_2, but not on ฯฮป (see Eq.ย <ref>). The distance covered by the migrating moon depends therefore on the value chosen for J_2. Yet, this dependence is not very steep (r_Mโ J_2^1/5). Moreover, extreme values of J_2, either very small or very large, can be ruled out because giant planets are expected to spin at a fraction of their breakup velocity (see e.g. ). As a consequence, the value obtained for r_M only varies by a small amount when considering realistic values of J_2 (see the right vertical axis in Fig.ย <ref>).
ยง INCLINATION AMPLITUDES AND TRANSIT PROBABILITIES
In the Lagrange-Laplace approximation, the long-term inclination dynamics of planets are described by quasi-periodic series as in Eq. (<ref>), in which the frequencies solely depend on the masses and spacing of the planets. The orientations of the planets' orbital planes enter into play only in the amplitudes of the terms of the series. Apart from the zero-frequency term (which merely gives the orientation of the planetary invariant plane), the amplitudes S_j in Eq. (<ref>) depend on the mutual inclinations between the planets' orbital planes. The mutual inclination Ψ_ik of two planets i and k can be written as
cosΨ_ik = cos I_i cos I_k + sin I_i sin I_k cos(Ω_i - Ω_k) ,
where I and Ω are the orbital inclination and longitude of ascending node of the planets measured with respect to a given reference plane (e.g. the sky plane). As a simple rule of thumb, the larger the mutual inclinations between each pair of planets, the larger the amplitudes S_j. For unknown longitudes of node Ω_k, the orbital inclinations I_k of a set of planets provide minimum and maximum bounds to the mutual inclinations between each pair of planets. From Eq. (<ref>), the minimum mutual inclination of two planets is Ψ_ik = |I_i - I_k| reached for Ω_i - Ω_k = 0. In practice, inclination values measured from transit data have a mirror degeneracy with respect to 90° (see Table <ref> of the main text); Ψ_ik is minimised if I_i and I_k lie on the same side of 90°.
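The sketch below evaluates this expression for two nearly edge-on orbits; the inclination of the second orbit is an arbitrary illustrative value, not a measured one. It shows that when both inclinations are close to 90°, the mutual inclination is essentially set by the node difference.

```python
import numpy as np

# Mutual inclination between two orbits from their sky-plane elements (equation above).
def mutual_inclination(I1, I2, dOmega):
    """All angles in degrees."""
    I1, I2, dOmega = np.radians([I1, I2, dOmega])
    return np.degrees(np.arccos(np.cos(I1) * np.cos(I2)
                                + np.sin(I1) * np.sin(I2) * np.cos(dOmega)))

# Two nearly edge-on orbits; 89.97 deg mimics planet f, 89.80 deg is an arbitrary companion.
for dOmega in (0.0, 0.5, 1.0, 2.0):
    psi = mutual_inclination(89.97, 89.80, dOmega)
    print(f"dOmega = {dOmega:3.1f} deg  ->  mutual inclination = {psi:.2f} deg")
```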
Here, we quantify the influence of the unknown longitudes of nodes on the orbital dynamics of planet HIP 41378 f by drawing randomly the longitudes of nodes of all planets in a given interval ΔΩ_0 and building a histogram of the amplitudes S_j obtained. The result is shown in Fig. <ref> for an interval ΔΩ_0 ranging from 0° to 2°. As expected, the terms s_1 and s_2 have very small amplitudes; they contribute negligibly to the dynamics of planet HIP 41378 f. The dominant terms of the dynamics are s_3 and s_6, with the qualitative roles described in Sect. <ref>. The amplitudes of all terms are generally minimum for ΔΩ_0 = 0, and they cover a wider and wider range of possibilities when we allow the planets' longitudes of nodes to be substantially distinct. This effect is particularly visible for the s_4 term, for which a dispersion as small as ΔΩ_0 = 1° can make the amplitude increase by a factor 1000 or so.
Figure <ref> shows the probability of observing five transiting planets or more in the same experiment as in Fig. <ref>. For a given realisation of the planetary system, the transit probability of a set of planets is the fraction of time their orbits simultaneously pass in front of the star. If we assume the same longitude of node for all planets in the HIP 41378 system (i.e. ΔΩ_0 = 0), then five planets or more transit about 30% of the time. If we allow a dispersion of ΔΩ_0 = 1°, this fraction can be reduced to less than 5%.
|
http://arxiv.org/abs/2306.03855v1
|
20230606164915
|
Faster real root decision algorithm for symmetric polynomials
|
[
"George Labahn",
"Cordian Riener",
"Mohab Safey El Din",
"รric Schost",
"Thi Xuan Vu"
] |
cs.SC
|
[
"cs.SC",
"math.AG"
] |
In this paper, we consider the problem of deciding the existence of real
solutions to a system of polynomial equations having real coefficients, and
which are invariant under the action of the symmetric group. We construct and
analyze a Monte Carlo probabilistic algorithm which solves this problem, under
some regularity assumptions on the input, by taking advantage of the symmetry
invariance property.
The complexity of our algorithm is polynomial in d^s, \binom{n+d}{d}, and
\binom{n}{s+1}, where n is the number of variables and d is the
maximal degree of the s input polynomials defining the real algebraic set under
study. In particular, this complexity is polynomial in n when d and s
are fixed and is equal to n^O(1) 2^n when d=n.
Learning with a Mole: Transferable latent spatial representations for navigation without reconstruction
Guillaume Bono
Leonid Antsfeld
Assem Sadek
Gianluca Monaci
Christian Wolf
Naver Labs Europe, Meylan, France
[email protected]
July 31, 2023
==================================================================================================================================================================
ยง INTRODUCTION
Let f=(f_1, …, f_s) be polynomials in the multivariate
polynomial ring ℝ[x_1, …, x_n] and let V(f) ⊂ ℂ^n be
the algebraic set defined by f. We denote by V_ℝ(f) := V(f)
∩ ℝ^n the set of solutions in ℝ^n to the system f. In
addition we assume that all f_i's are invariant under the action of the
symmetric group S_n, that is, are symmetric polynomials (or
equivalently, S_n-invariant polynomials).
Under this invariance property, we design an algorithm
which, on input f, decides whether V_(f) is empty or not. As is typical for such
problems, we assume that the Jacobian matrix of f with respect to
x_1, โฆ, x_n has rank s at any point of V(f). In this case
the Jacobian criterion <cit.>
implies that the complex algebraic set V(f) is smooth and
(n-s)-equidimensional (or empty).
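As a toy sanity check of this regularity assumption, the sketch below builds a small symmetric system and inspects the rank of its Jacobian matrix, both symbolically and at one sample point of V(f). Note that the assumption made in this paper is stronger, namely rank s at every point of V(f); the example system and point are ours, not taken from the paper.

```python
import sympy as sp

# A small symmetric system f = (f_1, f_2) in n = 4 variables (s = 2 equations).
n, s = 4, 2
x = sp.symbols(f"x1:{n + 1}")
f = [sum(x) - 1, sum(xi**2 for xi in x) - 3]       # symmetric polynomials, degrees 1 and 2

# Jacobian matrix of f with respect to x_1, ..., x_n
J = sp.Matrix([[sp.diff(fi, xj) for xj in x] for fi in f])
print(J.rank())                                    # generic (symbolic) rank: 2

pt = dict(zip(x, (1, 1, 0, -1)))                   # a point of V(f): 1+1+0-1 = 1, 1+1+0+1 = 3
print(J.subs(pt).rank())                           # rank at this particular point: 2
```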
Previous work.
The real root decision problem for polynomial systems of equations
(and more generally systems of inequalities) lies at the foundations
of computational real algebraic geometry. Algorithms for solving
polynomial systems over the real numbers start with
Fourierย <cit.> who provided a first algorithm for solving
linear systems of inequalities (rediscovered in 1919 by
Dinesย <cit.>). These algorithms are important because they
make the first connection with elimination theory. Tarski's
theoremย <cit.> states that the projection of a semi-algebraic
set on a coordinate subspace is a semi-algebraic set. This theorem,
and its algorithmic counterpart which relies on Sturm's theorem for
real root counting in the univariate case, enable recursive
algorithmic patterns (eliminating variables one after another). The
first algorithm with an elementary recursive complexity, Cylindrical Algebraic Decomposition, is due to Collins (see
<cit.> and references inย <cit.> for various further
improvements).
It turns out that these algorithms run in time doubly exponential in
nย <cit.>. Note that some variants actually solve the
quantifier elimination problem, a much more general and difficult
computational problem than the real root decision problem.
Algorithms which solve the real root decision problem in time singly
exponential in n and polynomial in the maximum degree of the input
were pioneered by Grigoriev and Vorobjovย <cit.>
and Renegarย <cit.>, and further improved by Cannyย <cit.>,
Heintz, Roy and Solernรณย <cit.> and Basu, Pollack and
Royย <cit.>. The method used in this framework is referred to as
the critical point method. It reduces the real root decision
problem to the computation of finitely many complex critical points of
a polynomial map which reaches extrema at each connected component of
the semi-algebraic set under study.
The algorithm proposed here for solving the real root decision problem
for systems of
symmetric polynomial equations also builds on the
critical point method.
It borrows ideas from probabilistic algorithms which have been
designed to obtain sharper complexity estimates (e.g. cubic either in
some Bรฉzout bound attached to some critical point system or in some
geometric intrinsic degree) and obtain practical performances that
reflect the complexity gainsย <cit.>. These algorithms make use of geometric resolution
or symbolic homotopy techniques to control the complexity of the
algebraic elimination stepย (see e.g.ย <cit.> and
references therein), and of regularity assumptions to easily derive
critical point systems from the input polynomials.
Under the Jacobian criterion assumptions, critical points are defined
as the intersection of the affine variety V(f) with a determinantal
variety derived from a certain Jacobian matrix. The design of
dedicated algebraic elimination algorithms for this particular setting
has attracted some attention alreadyย <cit.>. When adding the symmetry
property to
polynomials defining the variety and the polynomial map for
which one computes the critical points, significant improvements have
been achieved recently inย <cit.> by using the
symbolic homotopy algorithms inย <cit.>.
These improvements, which allow one to obtain complexity gains
related to the combinatorial complexity of the symmetric group, also
borrow ideas from algebraic algorithms working with data that are
invariant under the action of this group <cit.>. We
emphasize that taking advantage of symmetries in data is a topical and
difficult issue, which involves a variety of
methodologiesย <cit.>.
Inย <cit.>, Timofte proves a breakthrough result which is
now known as the degree principle. It states that
a symmetric polynomial of degree d with real coefficients has a real zero if
and only if it has a real zero with at most d distinct coordinates.
This shows that when d is fixed and n grows, the real root
decision problem can be solved in polynomial time. This is far better
than computing at least one sample point per connected component (see
alsoย <cit.>), and is
one of the rare interesting cases where the best known algorithms for
these two problems admit different complexities.
This is also the starting point of several results which enhance the
real root decision problem and polynomial optimization under some
S_n-invariance property for classes of problems where d remains
fixed and n grows (seeย <cit.> and <cit.> for equivariant systems).
Main contributions.
Being able to leverage S_n-invariance for critical point
computations is not sufficient to solve root decision problems more
efficiently using the critical point method. Additional techniques are
needed.
Indeed, to solve the real root decision problem by finding the
critical points of a polynomial map ฯ, one typically defines
ฯ as the distance from points on the variety to a generic
point. This map reaches extrema at each connected component of the
semi-algebraic set under study. However, the map ฯ is not
symmetric. If it was, our problem would be solved by the critical
point algorithm of <cit.>. Unfortunately there
does not appear to be an obvious symmetric map that fits the bill.
Instead, our approach is to apply the critical point method on
individual S_n-orbits, with suitable ฯ found for each
orbit. Thus while we cannot use the critical point algorithm of
<cit.> directly we can make use of the various
subroutines used in it to construct a fast decision
procedure.
Intuitively, working with S_n-orbits is the same as separately
searching for real points having distinct coordinates, or real points
having two or more coordinates which are the same, or groups of
coordinates each of which has equal coordinates and so on. In each
case an orbit can be described by points having n or fewer pairwise
distinct coordinates, a key observation in constructing generic maps
invariant for each orbit.
Let f=(f_1, …, f_s) be symmetric polynomials in ℚ[x_1, …, x_n] having maximal degree d. Assume that the Jacobian
matrix of f with respect to x_1, …, x_n has rank s at any
point of V(f). Then there is a Monte Carlo algorithm Real_emptiness which solves the real root decision problem for
f with
\widetilde{O}\big(d^{6s+2}\, n^{11}\, \binom{n+d}{n}^{6}\,\big(\binom{n+d}{n} + \binom{n}{s+1}\big)\big)
⊂ \big(d^{s}\,\binom{n+d}{n}\,\binom{n}{s+1}\big)^{O(1)}
operations in ℚ. Here the \widetilde{O} notation indicates that polylogarithmic factors are omitted.
The remainder of the paper proceeds as follows. The next section
reviews known material, on invariant polynomials over products of
symmetric groups, the tools we use to work with S_n-orbits, and our
data structures. Section <ref> discusses our smoothness
requirement and shows that it is preserved by alternate
representations of invariant polynomials. Section <ref>
shows how we construct critical point functions along with their
critical point set. This is followed in Section <ref> by a
description of our algorithm along with a proof of correctness and
complexity. The paper ends with a section on topics for future
research.
ยง PRELIMINARIES
ยง.ยง Invariant Polynomials
We briefly review some properties of polynomials invariant under the
action of S_t_1รโฏร S_t_k, with S_t_i the
symmetric group on t_i elements, for all i. In this paragraph, we work
with variables z = (z_1, …, z_k), with each
z_i = (z_{1,i}, …, z_{t_i,i}); for all i, the group
S_{t_i} permutes the variables z_i. For j ≥ 0, we denote by
E_{j,i} = ∑_{1 ≤ m_1 < m_2 < ⋯ < m_j ≤ t_i} z_{m_1,i} z_{m_2,i} ⋯ z_{m_j,i}
the j-th elementary symmetric polynomial in the variables z_i, with
each E_{j,i} having degree j, and by
P_{j,i} = z_{1,i}^j + ⋯ + z_{t_i,i}^j
the j-th Newton sum in the variables z_i, for i=1,…,k.
The following two results are well-known.
For i=1,…,k, let e_i = (e_{1,i}, …, e_{t_i,i}) be a set
of t_i new variables and let E_i = (E_{1,i}, …, E_{t_i,i}); we write e=(e_1,…,e_k) and E=(E_1,…,E_k).
Let g ∈ ℚ[z_1, …, z_k] be invariant under the
action of S_{t_1}×⋯×S_{t_k}. Then there exists
a unique ζ_g in ℚ[e] such that g = ζ_g(E).
Similarly, let p_{j,i} be new variables, and consider the sequences
p_i=(p_{1,i}, …, p_{t_i,i}) and p = (p_1, …, p_k), together with their polynomial counterparts P_i=(P_{1,i}, …, P_{t_i,i}) and P = (P_1, …, P_k).
Let g ∈ ℚ[z_1, …, z_k] be invariant under the
action of S_{t_1}×⋯×S_{t_k}. Then there exists
a unique γ_g in ℚ[p] such that g = γ_g(P).
Let
g = 2 (z_{1,1} z_{2,1} + z_{1,1}^2 + 2 z_{1,1} z_{2,1} + z_{2,1}^2)(z_{1,2}^2 + z_{2,2}^2),
a polynomial invariant under S_2 × S_2, with z_1 =
(z_{1,1}, z_{2,1}), z_2 = (z_{1,2}, z_{2,2}), k=2 and t_1=t_2=2. In this
case, we have
g = (3 P_{1,1}^2 - P_{2,1}) P_{2,2}
and hence
γ_g = (3 p_{1,1}^2 - p_{2,1}) p_{2,2} ∈ ℚ[p_{1,1}, p_{2,1}, p_{1,2}, p_{2,2}].
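Assuming the reconstruction of the Newton sums P_{j,i} and of the placeholder variables p_{j,i} used above, this identity can be verified mechanically; the short SymPy check below is ours and purely illustrative.

# SymPy check of the worked example (variable names are ours).
import sympy as sp

z11, z21, z12, z22 = sp.symbols('z11 z21 z12 z22')
g = 2*(z11*z21 + z11**2 + 2*z11*z21 + z21**2)*(z12**2 + z22**2)

P11 = z11 + z21           # first Newton sum in the first block
P21 = z11**2 + z21**2     # second Newton sum in the first block
P22 = z12**2 + z22**2     # second Newton sum in the second block

assert sp.expand(g - (3*P11**2 - P21)*P22) == 0
print("g = (3 P11^2 - P21) P22 confirmed")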
ยง.ยง Describing S_n-orbits via Partitions
S_n-orbits are subsets of ^n that play a central role in
our algorithm. In this section, we review notation and description of
S_n-orbits, along with the form of the output used in
<cit.>.
A simple way to parameterize S_n-orbits is through the use of
partitions of n. A sequence ฮป = (n_1^t_1 โฆ
n_k^t_k), where n_1 < โฏ < n_k and n_i's and t_i's are
positive integers, is called a partition of n if n_1t_1+โฏ +
n_kt_k = n. The length of the partition ฮป is defined as
โ := t_1 + โฏ + t_k.
For a partition ฮป = (n_1^t_1 โฆ n_k^t_k) of n,
we use the notation from <cit.> and
let U_ฮป denote the set of all points u in ^n
that can be written as
u = ( u_1, 1, โฆ, u_1, 1_n_1, ย โฆ,ย u_t_1, 1, โฆ, u_t_1,1_n_1,ย โฆ,
ย u_1,k, โฆ, u_1,k_n_k,ย โฆ,ย u_t_k, k, โฆ, u_t_k, k_n_k).
For any point u in ^n, we define
its type as the unique partition ฮป of n such that
there exists ฯโ S_n such that ฯ(u) โ U_ฮป,
with the u_i,j's in (<ref>) pairwise distinct.
Points of a given type ฮป = (n_1^t_1 โฆ n_k^t_k)
are stabilized by the action of S_ฮป := S_t_1รโฏร S_t_k, the cartesian product of symmetric groups S_t_i.
For a partition ฮป as above, we can then define a mapping F_ฮป : U_ฮปโ^โ as
uย asย inย (<ref>)โฆ
(E_1, i(u_1, i, โฆ,
u_t_i, i), โฆ,E_t_i, i(u_1, i, โฆ,u_t_i, i))_1โค i โค k,
where E_j, i(u_1, i, โฆ,u_t_i, i) is the
j-th elementary
symmetric function in u_1, i, โฆ,u_t_i, i for i=1, โฆ, k
and j=1, โฆ, t_i. One can think of the map F_ฮป as a
compression of orbits. By applying this map, we can represent an
S_n-orbit ๐ช of type ฮป by the single point
F_ฮป(๐ชโฉ U_ฮป).
Furthermore, the map F_λ is onto: for any c =
(c_{1,1}, …, c_{t_k,k}) ∈ ℂ^ℓ, we define
polynomials ρ_1(T), …, ρ_k(T) by
ρ_i(T) = T^{t_i} - c_{1,i} T^{t_i-1} + ⋯ + (-1)^{t_i} c_{t_i,i}.
We can then find a point
uโ^n in the preimage F_ฮป^ย -1(c) by
finding the roots u_1, i, โฆ, u_t_i, i of ฯ_i(T).
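A minimal numerical sketch of this compression and decompression, assuming the block layout above (all helper names and the sample data are ours), is as follows.

# Sketch: the orbit-compression map F_lambda and one preimage per block.
import numpy as np

def F_lambda(blocks):
    """blocks[i] = (u_{1,i}, ..., u_{t_i,i}); return the elementary symmetric values."""
    out = []
    for u in blocks:
        coeffs = np.poly(u)                       # monic polynomial with roots u
        out.append([(-1)**j * coeffs[j] for j in range(1, len(coeffs))])
    return out

def preimage(c_blocks):
    """Given c = F_lambda(u), recover one representative per block as the roots of rho_i(T)."""
    blocks = []
    for c in c_blocks:
        t_i = len(c)
        coeffs = [1] + [(-1)**j * c[j - 1] for j in range(1, t_i + 1)]   # rho_i
        blocks.append(np.roots(coeffs))
    return blocks

u = [(2.0, 5.0), (3.0,)]          # a point of U_lambda for a partition of length 3
c = F_lambda(u)
print(c, preimage(c))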
ยง.ยง Zero-Dimensional Parametrizations
The subroutines we use from <cit.> give their
output in terms of zero-dimensional parametrizations, which are
defined as follows. Let W ⊂ ℂ^n be a variety of
dimension zero, defined over ℚ. A zero-dimensional
parametrization ((v, v_1, …, v_n), μ) of W consists of
(i) a squarefree polynomial v in ℚ[t], where t is a
new indeterminate, with deg(v) = |W|,
(ii) polynomials v_1, …, v_n in ℚ[t] such that
deg(v_i) < deg(v) for all i and
W = {(v_1(ฯ)/v'(ฯ), โฆ,
v_n(ฯ)/v'(ฯ)) โ^n : v(ฯ) = 0 },
(iii) a linear form ฮผ in n variables such that ฮผ(v_1, โฆ,
v_n) = tv' (so the roots of v are the values taken by ฮผ on W).
When these conditions hold, we write W = Z(). Representing the
points of W by means of rational functions with v' as denominator
is not necessary, but allows for a sharp control of the bit-size of
the output.
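As a toy illustration (the parametrization below is built by hand, not produced by the subroutines of <cit.>), the points of W can be read off such a data structure as follows.

# Sketch: recovering W from a zero-dimensional parametrization ((v, v_1, ..., v_n), mu).
import sympy as sp

t = sp.symbols('t')

def points(v, vs):
    """Return the points (v_1(tau)/v'(tau), ..., v_n(tau)/v'(tau)) at the roots tau of v."""
    vp = sp.diff(v, t)
    return [tuple(sp.simplify(vi.subs(t, r) / vp.subs(t, r)) for vi in vs)
            for r in sp.roots(v, t)]

# toy parametrization of W = {(1, 2), (3, -1)} with mu = x_1
v  = t**2 - 4*t + 3
v1 = 4*t - 6          # equals t*v' reduced modulo v, consistent with mu = x_1
v2 = t - 5
print(points(v, [v1, v2]))        # -> [(1, 2), (3, -1)]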
ยง PRESERVING SMOOTHNESS
In our main algorithm, we assume that our input system
f=(f_1,โฆ,f_s) satisfies the following smoothness condition
(A): the Jacobian matrix of f has rank s at any point of V(f).
In this section, we discuss consequences of this assumption for
symmetric polynomials.
Mapping to orbits: the map _ฮป. For
a partition ฮป = (n_1^t_1 โฆ n_k^t_k) of n, we
define the -algebra homomorphism _ฮป:[x_1,
โฆ, x_n] โ[z_1, โฆ, z_k], with
_i=(z_1, i,โฆ, z_t_i, i) for all i, which maps the
variables x_1, โฆ, x_n to
z_1, 1, โฆ, z_1, 1_n_1, ย โฆ,ย z_t_1, 1, โฆ, z_t_1,1_n_1,ย โฆ,
ย z_1,k, โฆ, z_1,k_n_k,ย โฆ,ย z_t_k, k, โฆ, z_t_k, k_n_k.
The operator _ฮป extends to vectors of polynomials and
polynomial matrices entry-wise. The key observation here is that if
f is symmetric,
then its image through _ฮป is
S_t_1รโฏร S_t_k-invariant.
Fix a partition ฮป = (n_1^t_1 โฆ n_k^t_k) of n,
and let โ be its length. Set
I_j,i:={ฯ_j,i+1,โฆ,ฯ_j,i+n_i}, 1 โค i โค k; 1โค j โค t_i
with
ฯ_j,i:=โ_r=1^i-1t_rn_r+(j-1)n_i. Variables x_m, for m in
I_j,i, are precisely those that map to z_j,i under _ฮป. Define
further the matrix Zโ^โร n with โ = t_1 +โฏ +
t_k, where rows are indexed by pairs (j,i) as above and columns by m โ{1,โฆ,n}. For all such (j,i), the entry of row index (j,i) and column
index m โ I_j,i is set to 1/n_i, all others are zero. In
other words, Z = diag(Z_1, โฆ, Z_k), where
Z_i = \begin{pmatrix}
1/n_i ⋯ 1/n_i & & & \\
& 1/n_i ⋯ 1/n_i & & \\
& & ⋱ & \\
& & & 1/n_i ⋯ 1/n_i
\end{pmatrix}
is a matrix in ℚ^{t_i × n_i t_i} (row j carries the entry 1/n_i in its block of n_i consecutive columns and zeros elsewhere).
Consider the partition ฮป = (2^2 3^1) of n=7. Then
n_1=2, t_1=2, n_2=3, t_2=1 and the length of ฮป is
3. In this case,
Z =
( 1/2 1/2
1/2 1/2
1/3 1/3 1/3).
Let f= (f_1, โฆ, f_s) โ[x_1, โฆ, x_n] be a sequence of symmetric
polynomials, and let ฮป be a partition of n. Then
_ฮป(_x_1, โฆ, x_n(f))= _z_1, โฆ,
z_k(_ฮป(f)) ยทZ,
where Z is the matrix
defined above.
For any polynomial f in [x_1, โฆ,
x_n], applying the operator _ฮป on f evaluates f at
x_m = z_j,i for 1 โค i โค k, 1โค j โค t_i and m in
I_j,i. By the multivariable chain rule,
โ_ฮป(f)/โ z_j,i = โ_m โ
I_j,i_ฮป( โ f/โ
x_m ).
If f is symmetric,
for m,m'
in I_j,i, we then have
_ฮป( โ f/โ x_m ) =
_ฮป( โ f/โ
x_m' ),
so that, for m in I_j,i,
_ฮป( โ f/โ x_m ) = 1/n_iโ_ฮป(f)/โ z_j,i.
This argument can be extended to a sequence of polynomials to obtain
our claim.
We continue Example <ref> with a single S_7-invariant
polynomial f= โ_1 โค i โค j โค 7x_ix_j. Then
_ฮป(f) = 3z_1,1^2+ 3z_2,1^2 +
6z_1,2^2+6z_1,1z_1,2+4z_1,1z_2,1+6z_1,2z_2,1,
and so
(_ฮป(f)) = (6 z_1,1 + 6 z_1,2 + 4 z_2,1, 4 z_1,1 +
6 z_1,2 + 6 z_2,1, 6 z_1,1 + 12 z_1,2 + 6 z_2,1).
This
implies that (_ฮป(f)) ยทZ is equal to (u, u, v, v, w, w, w),
with
u=3 z_1,1 + 3 z_1,2 + 2 z_2,1, v=2 z_1,1 + 3 z_1,2 + 3 z_2,1, w=2 z_1,1 + 4 z_1,2 + 2 z_2,1.
This is precisely _ฮป((f)).
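The statement of the lemma can be checked on this example with a few lines of SymPy (an illustrative verification; the variable names are ours).

# SymPy check of the chain-rule identity for lambda = (2^2 3^1) and f = sum_{i<=j} x_i x_j.
import sympy as sp

x = sp.symbols('x1:8')                       # x_1, ..., x_7
z = sp.symbols('z11 z21 z12')                # z_{1,1}, z_{2,1}, z_{1,2}
f = sum(x[i]*x[j] for i in range(7) for j in range(i, 7))

subs = dict(zip(x, (z[0], z[0], z[1], z[1], z[2], z[2], z[2])))
f_lam = sp.expand(f.subs(subs))

grad_x = sp.Matrix([[sp.diff(f, xi) for xi in x]]).subs(subs)   # image of the Jacobian of f
grad_z = sp.Matrix([[sp.diff(f_lam, zi) for zi in z]])          # Jacobian of f_lambda
Z = sp.Matrix([[sp.Rational(1, 2)]*2 + [0]*5,
               [0]*2 + [sp.Rational(1, 2)]*2 + [0]*3,
               [0]*4 + [sp.Rational(1, 3)]*3])
assert (grad_x - grad_z*Z).expand() == sp.zeros(1, 7)
print("chain-rule identity verified on the example")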
Under the assumptions of the previous lemma, if f satisfies
condition ( A), then _ฮป(f) โ[z_1, โฆ,
z_k] does as well.
Let ฮฑ = (ฮฑ_1,1, โฆ, ฮฑ_t_1, 1, โฆ,
ฮฑ_1, k, โฆ, ฮฑ_t_k k) be a zero of _ฮป(f)
in ^โ. We have to prove that _z_1, โฆ,
z_k(_ฮป(f))(ฮฑ) has a trivial left kernel.
Consider the point
ฮต = (ฮฑ_1, 1,
โฆ, ฮฑ_1, 1_n_1, โฆ, ฮฑ_t_1, 1,
โฆ, ฮฑ_t_1,1_n_1, โฆ,
ฮฑ_1,k,
โฆ, ฮฑ_1,k_n_k, โฆ, ฮฑ_t_k, k, โฆ,
ฮฑ_t_k, k_n_k) โ^n,
which lies in V(f).
In particular, for any g in [x_1,โฆ,x_n], we
have _ฮป(g)(ฮฑ) = g(ฮต).
Applying this to the Jacobian matrix of f, we obtain
_ฮป((f))(ฮฑ) = (f)(ฮต).
Since by assumption f is symmetric,
the previous lemma implies that
(f)(ฮต) = _z_1, โฆ, z_k(_ฮป(f))(ฮฑ) ยท
Z.
Since (f)(ฮต) has rank s (by condition
A), the left kernel of (f)(ฮต) is
trivial.
It follows that the left kernel of _z_1, โฆ,
z_k(_ฮป(f))(ฮฑ) is also trivial.
When we represent S_t_1รโฏร S_t_k-invariant
functions in terms of Newton sums, we can show that the new
representation also preserves condition (A).
Assume (g_1, โฆ, g_s)โ[z_1, โฆ, z_k] is
S_t_1รโฏร S_t_k- invariant and satisfies
condition ( A). If we set h_i = ฮณ_g_i for all i,
then (h_1, โฆ, h_s) also satisfies condition ( A).
The Jacobian matrix Jac(g) of (g_1, …, g_s) factors as
Jac(g) = Jac(h)(P) · V (the Jacobian of h = (h_1, …, h_s) with respect to the p-variables, evaluated at the Newton sums P, times V), where V = diag(V_1, …, V_k)
with each V_i a row-scaled Vandermonde matrix given by
V_i = \begin{pmatrix} 1 & & & \\ & 2 & & \\ & & ⋱ & \\ & & & t_i \end{pmatrix}
\begin{pmatrix} 1 & 1 & ⋯ & 1 \\ z_{1,i} & z_{2,i} & ⋯ & z_{t_i,i} \\ ⋮ & & & ⋮ \\ z_{1,i}^{t_i-1} & z_{2,i}^{t_i-1} & ⋯ & z_{t_i,i}^{t_i-1} \end{pmatrix}.
Let ฮท be a point in the vanishing set of (h_1, โฆ,
h_s) and let ฮต be in ^-1(ฮท). If
() is rank deficient at ฮท then ()()(ฮต) is also rank deficient. This implies that the rank of
(g)(ฮต), which is bounded above by those of
()()(ฮต) and (ฮต), is
deficient.
Similarly, instead of using a row-scaled Vandermonde matrix V_i as
in (<ref>), we can use V_i as the Jacobian matrix of
elementary symmetric functions in z_i. This gives a similar result
but for the polynomials ฮถ_g_1, โฆ, ฮถ_g_s.
Assume (g_1, โฆ, g_s)โ[z_1, โฆ, z_k] is
S_t_1รโฏร S_t_k- invariant and satisfies
condition ( A). Then the sequence of polynomials
(ฮถ_g_1, โฆ, ฮถ_g_s) also satisfies condition (
A).
ยง CRITICAL LOCI
If W โ^โ is an equidimensional algebraic set, and ฯ
a polynomial function defined on W, a non-singular point wโ W
is called a critical point of ฯ on W if the gradient of
ฯ at w is normal to the tangent space T_w W of W at
w.
If g=(g_1,โฆ,g_s) are generators of the ideal associated to W, then
T_wW is the right kernel of the Jacobian matrix (g) of g
evaluated at w. In the cases we will consider, this matrix will have rank
s at all points of W (that is, g satisfies condition A). The set
of critical points of the restriction of ฯ to W is then defined by the
vanishing of g, and of the (s+1)-minors of the Jacobian matrix (g,
ฯ) of g and ฯ.
ยง.ยง Finiteness through genericity
Let g = (g_1, โฆ, g_s) in [z_1, โฆ. z_k] with each
g_i invariant under the action of S_t_1รโฏร
S_t_k; we write โ = t_1+โฏ +t_k. We introduce some useful
S_t_1รโฏร S_t_k-invariant mappings and discuss
the properties of their critical points on V(g) โ^โ.
For 1โค i โค k, let _i = (_1, i, โฆ,
__i, i) be new indeterminates, and recall that P_j,
i is the j-th Newton sum for the variables z_i. Set
_ =
โ_i=1^k c_i _t_i+1, i +
โ_i=1^kโ_j=1^_i_j,i_j,i
where c_i=1 if
_i is odd and c_i=0 if _i is even. So
_ has even degree and is invariant under the action
of S_t_1รโฏร S_t_k.
For =(_1,โฆ,_k) in ^_1รโฏร^_k, with each _i in ^_i, we
denote by _ the polynomials in [z_1, โฆ, z_k]
obtained by evaluating the indeterminates _i at _i in _, for all
i.
Further, we denote by โ^โ the open set consisting of
points w = (w_1, โฆ, w_k) such that the coordinates of w_i are pairwise distinct for i=1, โฆ, k. Note that depends on the
partition ฮป = (n_1^t_1โฆ n_k^t_k); when needed because of the
use of different partitions, we will denote it by _ฮป.
Let g = (g_1, โฆ, g_s) be S_t_1รโฏร S_t_k-invariant
polynomials in [z_1, โฆ, z_k]. Suppose further that g satisfies
condition (A). Then there exists a non-empty Zariski open set
โ^_1รโฏร^_k such that for โ, the restriction of
_ to V(g) has finitely many critical points in .
ยง.ยง Proof of Proposition <ref>
For new variables _1, โฆ, _s, we denote by
_ the polynomials
_ = (g_1, โฆ, g_s, [_1 โฏ_sย 1] ยท(g, _) ).
For =(_1,โฆ,_k) in ^_1รโฏร^_k, with each _i
in ^_i, we denote by _ the polynomials in
[_1,โฆ,_s,z_1, โฆ. z_k] obtained by evaluating
_i at _i in _, for all i. Finally,
denote by ฯ the projection from the (L,z)-space ^s + โ to the z-space ^โ.
Suppose that g satisfies condition (A). Then for โ^_1รโฏร^_k,
ฯ(V(_)) is the critical locus of the restriction of the
map _ to V(g).
For any โ^_1รโฏร^_k, we denote by W(_, g) the set of critical
points of the restriction of _ to V(g). Since g satisfies
condition (A), the set W(_, g) is given by
{wย | g_1(w) = โฏ = g_s(w) = 0, ((g,_)(w)) โค s }.
Consider w in W(_, g) and a nonzero vector c
in the left kernel of (g,_)(w), of the form =ฬง(c_1,โฆ,c_s,c_s+1). The last coordinate c_s+1 cannot
vanish, as otherwise (c_1,โฆ,c_s) would be a nonzero vector in
the left kernel of (g)(w) (which is ruled out by
condition (A)). Dividing through by c_s+1, the point
(c',w), with c'_i=c_i/c_s+1 for i=1,โฆ,s, is a
solution of _.
Conversely, take ( โ,w) in V(_). Thus, w
cancels g, and (g, _) has rank less than s+1 at
w, so that ฯ(V(_a)) is in W(_, g).
Let _ and ฮณ__ be defined as in
(<ref>) and Lemmaย <ref>, respectively. For
i=1,โฆ,k, set Q_i = ฮณ__t_i+1,i, and
let h_1,โฆ,h_s =ฮณ_g_1,โฆ,ฮณ_g_s.
In particular, Lemma
<ref> implies that ฮณ__ is given
by
โ_i=1^k c_i Q_i +
โ_i=1^kโ_j=1^_i_j,i_j,i.
The sequence _ can be rewritten as
h_1โ, โฆ, h_sโ,
[_1ย โฆย _sย 1]
(โ h_1/โ_1,1 โฏ โ
h_1/โ_t_k, k
โฎ โฎ
โ h_s/โ_1, k โฏ โ
h_s/โ_t_k,
k
c_1 โ
Q_1/โ_1,1+_1,1 โฏ c_kโ
Q_k/โ_t_k, k + _t_k, k)_(z)ยท,
where is a multi-row-scaled Vandermonde matrix which is the Jacobian
matrix of with respect to z. This matrix has full rank at any point
w in the open set defined in
Subsectionย <ref>.
In particular, for any โ^_1รโฏร^_k,
the intersection of V(_) with ^sร is contained
in the preimage by the map Idร of the vanishing set
of the sequence
_ : h_1, โฆ, h_s,
[_1ย โฏย _sย 1]
(โ h_1/โ_1,1 โฏ โ
h_1/โ_t_k, k
โฎ โฎ
โ h_s/โ_1, 1 โฏ โ
h_s/โ_t_k, k
c_1โ
Q_1/โ_1,1+ a_1,1 โฏ c_kโ
Q_k/โ_t_k, k+ a_t_k, k
).
Since for all 1โค i โค k, _i defines a map with finite fibers (by
Newton identities and Vieta's formula, the preimage by of some point is
the set of roots of some polynomial of degree t_i), we deduce that and
consequently Idร define maps with finite fibers. Thus
If V(_) is finite, then V(_)โฉ (^sร) is
finite.
It remains to investigate finiteness properties of V(_).
Suppose that satisfies condition (A). Then, there exists
a non-empty Zariski open set โ^_1รโฏร^_k such that for any โ,
โจ_โฉโ[_1,โฆ,_s,z_1,
โฆ, z_k] is a radical ideal whose zero-set is finite.
Let Wโ^t_1รโฏร^t_k be the
vanishing set of (h_1, โฆ, h_s). Consider now the map
(ฮท, w)โ^sร W
โ
- ( โ_i=1^sฮท_iโ
h_i/โ_1,1+c_1โ Q_1/โ_1,1)_(w), โฆ, -(
โ_i=1^sฮท_iโ h_i/โ_t_k, k+c_kโ Q_k/โ_t_k,k)_(w).
By Sard's theorem <cit.>, the set of critical values of this map
is contained in a proper Zariski closed set โฌ of
^t_1รโฏร^t_k. Since
satisfies condition (A), for outside โฌ, the
Jacobian matrix of _ has full rank at any (ฮท,
w) with w in W. Hence, by the Jacobian criterion <cit.>, the ideal generated by _
in [_1,โฆ,_s,z_1, โฆ, z_k] is radical and is of
dimension at most zero.
Let be the non-empty Zariski open set defined in Prop
<ref>. Since g satisfies condition ( A), Lemma
<ref> implies that, for any โ, the critical
locus of the map _ restricted to V(g) is equal to
ฯ(V(_)). In addition, the sequence () also satisfies
condition ( A) by Lemma <ref>. Then, by
Prop.ย <ref>, for any โ, the algebraic set
defined by _ is finite.
By Lemmaย <ref>, this implies that V(_) contains
finitely many points in ^sร๐ฐ. This finishes
our proof of Prop.ย <ref>.
Using techniques fromย <cit.>, one could give a simple
exponential upper bound the degree of a hypersurface containing the
complement of .
ยง.ยง Finding extrema using proper maps
A real valued function ฯ : ^n โ is proper at
x โ if there exists an ฮต>0 such that
ฯ^-1([x-ฮต,x+ฮต]) is compact. Such functions
are of interest because a proper polynomial restricted to a real
algebraic set W reaches extrema on each connected component of W. Using
<cit.> one can construct proper
polynomials in the following way.
Let F = F_k(x_1, โฆ, x_n) + F_k-1(x_1, โฆ, x_n) + โฏ +
F_0(x_1, โฆ, x_n): ^n โ be a real polynomial, where
F_i is the homogeneous component of degree i of F. Assume
further that the leading form F_k of F is positive definite; then,
F is proper. In particular, the map P_2m + โ_i=0^2m-1ฮป_i P_i, with P_i the Newton sums in x_1,โฆ,x_n and
all ฮป_i in , is proper. We can extend this to blocks of
variables.
Let z_1, โฆ, z_k be blocks of t_1, โฆ,
t_k variables, respectively. If P_j,i := z_1,i^j + โฏ +
z_t_i,i^j, then for any m_1, โฆ, m_k โฅ 1 and coefficients
ฮป_i,j in , the
map
โ_i=1^k P_2m_i,i + โ_i=1^k โ_j=0^2m_i-1ฮป_j,i P_j,i
is
proper.
ยง MAIN RESULT
Let f = (f_1, โฆ, f_s) be a sequence of symmetric polynomials
in [x_1, โฆ, x_n] that satisfies condition (A). In this section we present an algorithm and its complexity to decide
whether the real locus of V(f) is empty or not.
To exploit the symmetry of f and to decide whether the set
V_(f) is empty or not, our main idea is slicing the variety
V(f) with hyperplanes which are encoded by a partition ฮป of
n. This way, we obtain a new polynomial system which is invariant
under the action S_ฮป := S_t_1รโฏร S_t_k
of symmetric groups. We proved in Lemma <ref> that
this new system also satisfies condition (A). We then use the
critical point method to decide whether the real locus of the
algebraic variety defined by this new system is empty or not by taking
a S_ฮป-invariant map as defined in the previous section.
ยง.ยง Critical points along S_n-orbits
Let g = (g_1, โฆ, g_s) be a sequence of S_ฮป-invariant
polynomials and ฯ be a S_ฮป-invariant map in [z_1,
โฆ, z_k], with z_i=(z_1, i, โฆ, z_t_i, i) for all
i. As before, we set โ = t_1 + โฏ + t_k, and we assume that
s โคโ. Assume further that the sequence g satisfies
condition ( A). Let ฯ be a S_ฮป-invariant map in [z_1,
โฆ, z_k].
Let ฮถ_ฯ and ฮถ_g in [e_1, โฆ, e_k], where
e_i = (e_1,i, โฆ, e_t_i, i) is a set of t_i new
variables, be such that
ฯ = ฮถ_ฯ(E_1, โฆ, E_k) and g =
ฮถ_g(E_1, โฆ, E_k).
Here E_i = (E_1, i, โฆ, E_t_i, i) denotes the
vector of elementary symmetric polynomials in variables z_i, with
each E_j, i having degreeย j for all j, i.
Let g, ฯ, and ฮป as above. Assume further that
ฮถ_ฯ has finitely many critical points on
V(ฮถ_g). Then there exists a randomized algorithm Critical_points (g, ฯ, ฮป) which returns a
zero-dimensional parametrization of the critical points of
ฮถ_ฯ restricted to V(ฮถ_g).
The algorithm uses
\widetilde{O}(δ^2 c_λ (e_λ + c_λ^5) n^4 Γ)
operations in ℚ, where
c_λ = deg(g_1)⋯deg(g_s) · E_{ℓ-s}(δ-1, …, δ-ℓ)/(t_1!⋯t_k!),
Γ = n^2 \binom{n+δ}{δ} + n^4 \binom{n}{s+1}, and
e_λ = n(deg(g_1)+1)⋯(deg(g_s)+1) · E_{ℓ-s}(δ, …, δ-ℓ+1)/(t_1!⋯t_k!),
with δ = max(deg(g), deg(ϕ)).
The number of solutions is at most c_ฮป.
The Critical_points procedure contains two steps: first
finding ฮถ_g and ฮถ_ฯ from g and ฯ and
then computing a representation for the set W(ฮถ_ฯ,
ฮถ_g) of critical points of ฮถ_ฯ on
V(ฮถ_g). The first step can be done using the algorithm
Symmetric_Coordinates from <cit.>, which uses (โ+ฮดฮด^2) operations in .
Since the sequence g satisfies condition ( A),
Lemmaย <ref> implies that ฮถ_g also
satisfies condition ( A). Then, the set W(ฮถ_ฯ,
ฮถ_g) is the zero set of ฮถ_g and all the
(s+1)-minors of (ฮถ_g, ฮถ_ฯ). In particular,
when โ = s, W(ฮถ_ฯ, ฮถ_g)=V(ฮถ_g).
Since each E_j, i has degree j, it is natural to assign a
weight j to the variable e_j, i, so that the polynomial ring
[e_1, โฆ, e_k] is weighted of weights (1, โฆ, t_1,
โฆ, 1, โฆ, t_k). The weighted degrees of ฮถ_g and
ฮถ_ฯ are then equal to those of g and ฯ,
respectively. To compute a zero-dimensional parametrization for
W(ฮถ_ฯ, ฮถ_g) we use the symbolic homotopy method for
weighted domain given in <cit.> (see
also <cit.> for a detailed complexity
analysis). This procedure is randomized and requires
(ฮด^2c_ฮป(e_ฮป + c_ฮป^5)n^4ฮ) operations in .
Furthermore, results from <cit.> also
imply that the number of points in the output is at most c_ฮป.
Thus, the total complexity of the Critical_points algorithm
is then ( ฮด^2c_ฮป(e_ฮป +
c_ฮป^5)n^4ฮ) operations in .
ยง.ยง The Decide procedure
Consider a partition ฮป = (n_1^t_1 โฆ n_k^t_k) of
n, and let
_ฮป = (v, v_1,1, โฆ, v_t_1, 1, โฆ,
v_1,k,โฆ, v_t_k, k,ฮผ)
be a parametrization which encodes a
finite set W_ฮปโ^โ. This set lies in the target space
of the algebraic map F_ฮป : U_ฮปโ^โ
defined in Subsectionย <ref> as
u = ( u_1, 1, โฆ, u_1, 1_n_1, ย โฆ,ย u_t_k, k, โฆ, u_t_k, k_n_k)
โฆ (E_1, i(u_1, i, โฆ,
u_t_i, i), โฆ,E_t_i, i(u_1, i, โฆ,u_t_i, i))_1โค i โค k,
where E_j, i(u_1, i, โฆ,u_t_i, i) is the
j-th elementary
symmetric function in u_1, i, โฆ,u_t_i, i for i=1, โฆ, k
and j=1, โฆ, t_i.
Let V_ฮป be the preimage of W_ฮป by F_ฮป. In
this subsection we present a procedure called Decide(_ฮป) which takes as input _ฮป, and
decides whether the set V_ฮป contains real points.
In order to do this, a straightforward strategy consists in solving
the polynomial system to invert the map F_ฮป. Because of the
group action of S_t_1รโฏร S_t_k, we would then
obtain t_1 !โฏ t_k ! points in the preimage of a single point
in W_ฮป: we would lose the benefit of all that had been done
before.
This difficulty can be bypassed by encoding one single point per orbit
in the preimage of the points in W_ฮป. This can be done via
the following steps.
(i) Group together the variables e_i = (e_1,i, โฆ, e_t_i,i) which
encode the values taken by the elementary symmetric functions E_i,1,
โฆ, E_i, t_i (see Sec. <ref>) and denote by
v_i,1,โฆ, v_i, t_i the parametrizations corresponding to
e_1, i, โฆ, e_t_i, i;
(ii) Make a reduction to a bivariate polynomial system by
considering the polynomial with coefficients in ℚ[t]
ρ_i = v' u^{t_i} - v_{1,i} u^{t_i-1} + ⋯ + (-1)^{t_i} v_{t_i,i} ∈ ℚ[t][u]
and โsolvingโ the system ฯ_i = v = 0. Here we recall that vโ[t] and
is square-free, so that v and v' are coprime.
(iii) It remains to decide whether, for all 1โค i โค k, there is a real
root ฯ of v such that when replacing t by ฯ in
ฯ_i, the resulting polynomial has all its roots real. To do this we
proceed by performing the following steps for 1โค iโค k:
* first we compute the Sturm-Habicht sequence associated to (
ฯ_i, โฯ_i/โ u) in [t] (the
Sturm-Habicht sequence is a signed subresultant sequence, see <cit.>);
* next, we compute Thom-encodings of the real roots of v, which is a way
to uniquely determine the roots of a univariate polynomial with real
coefficients by means of the signs of its derivatives at the considered real
root (see e.g. <cit.>);
* finally, for each real root ฯ of v, evaluate the signed
subresultant sequence at ฯ <cit.> and
compute the associated Cauchy index to deduce the number of real roots of
ฯ_i (see <cit.>).
(iv) For a given real root ฯ of v, it holds that, for all
1โค iโค k, the number of real roots of ฯ_i equals its degree, if
and only if V_ฮป is non-empty.
The above steps describe our Decide, which returns false
if V_ฮป contains real points, else true.
ยง.ยง The main algorithm
Our main algorithm Real_emptiness takes symmetric polynomials
f = (f_1, โฆ, f_s) in [x_1, โฆ, x_n], with s<n, which
satisfy condition ( A), and decides whether V_(f) is empty.
For a partition ฮป, we first find the polynomials f_ฮป
:= _ฮป(f), which are S_ฮป-invariant in [z_1,
โฆ, z_k], where _ฮป is defined as in
(<ref>). By Corollaryย <ref>,
f_ฮป satisfies condition (A), so we can apply the
results of Sectionย <ref>.
Let _ be the map defined in (<ref>) and
_ฮปโ^_1รโฏร^_1 be the non-zero Zariski open set defined in
Proposition <ref>. Assume is chosen in
_ฮป (this is one of the probabilistic aspects of our
algorithm) at step <ref>. By
Corollaryย <ref>, f_ฮป satisfies condition
( A). Then, the critical locus of the restriction of _
to V(f_ฮป) is of dimension at most zero (by Proposition
<ref>). In addition, the map _ is invariant under
the action of the group S_ฮป.
Let ฮถ__
and ฮถ_f_ฮป in [e_1,
โฆ, e_k] such that
_ = ฮถ__(E_1, โฆ, E_k) and f_ฮป= ฮถ_f_ฮป(E_1, โฆ,
E_k).
Here E_i = (E_1, i, โฆ, E_t_i, i) denotes the
vector of elementary symmetric polynomials in variables z_i. In the
next step, we compute a zero-dimensional parametrization
_ฮป of the critical set W_ฮป:= W(ฮถ__,
ฮถ_f_ฮป) of ฮถ__ restricted to
V(ฮถ_f_ฮป) by using the Critical_points algorithm from Lemmaย <ref>. The
parametrization _ฮป is
given by a sequence of polynomials (v, v_1,1, โฆ,
v_t_1, 1, โฆ, v_1, k, โฆ, v_t_k, k) in [t] and a
linear form ฮผ.
At the final step, we run the Decide(_ฮป) in order
to determine whether the preimage of W_ฮป by the map
F_ฮป contains real points.
Assume that, on input symmetric f as above,
and satisfying condition
(A), for all partitions ฮป of length at least s, is chosen
in _ฮป and that all calls to the randomized algorithm Critical_points return the correct result.
Then Algorithm Real_emptiness returns true if V(f)โฉ^n is
empty and otherwise it returns false.
Since f satisfies condition ( A), Lemma <ref>
implies that f_ฮป also satisfies this condition. Then, by the Jacobian
criterion <cit.>, V(f_ฮป) is smooth
and equidimensional of dimension (โ-s), where โ is the length of
ฮป. Therefore, if โ < s, then the algebraic set V(f_ฮป) is
empty. Thus, the union of V(f_ฮป)โฉ_ฮป where
_ฮป is the open set defined in
Subsectionย <ref> and ฮป runs over the partitions
of n of length at least s, forms a partition of V(f). Hence,
V(f)โฉ^n is non-empty if and only if there exists at least one such
partition for which V(f_ฮป)โฉ_ฮปโฉ^n is non-empty.
We already observed that for all ฮป, f_ฮป does
satisfy condition ( A).
Since we have assumed that each time Step
<ref> is performed, is chosen in _ฮป, we apply
Propositionย <ref> to deduce that the conditions of
Lemmaย <ref> are satisfied. Hence, all calls to
Critical_points are valid.
Note that since we assume that all these calls return the correct result, we
deduce that their output encodes points which all lie in V(f). Hence, if
V(f)โฉ^n is empty, applying the routine Decide on these outputs
will always return true and, all in all, our algorithm returns true when V(f)โฉ^n is empty.
It remains to prove that it returns false when V(f)โฉ^n is
non-empty. Note that there is a partition ฮป such that
V(f_ฮป)โฉ^n is nonempty and has an empty intersection with the
complement of _ฮป. That is, all connected components of
V(f_ฮป)โฉ^n are in _ฮป.
Let C be such a connected component. By
Lemmaย <ref>, the map _ is proper, and
non-negative. Hence, its restriction to V(f_ฮป)โฉ^n
reaches its extremum at all connected components of
V(f_ฮป)โฉ^n. This implies that the restriction of
_ to V(f_ฮป) has real critical points which are
contained in C (and by Propositionย <ref> there are
finitely many). Those critical points are then encoded by the output
of the call to Critical_points (Stepย <ref>)
and false is returned.
ยง.ยง Complexity analysis
Let d = max((f)). First for a partition ฮป, applying
_ฮป to f takes linear time in O(n n+d d),
the number of monomials of f and the cost of Step <ref> is
nothing. At the core of the algorithm, computing _ฮป at
Stepย <ref> requires (
ฮด^2c_ฮป(e_ฮป + c_ฮป^5)n^4ฮ)
operations in by Lemmaย <ref>, where ฮด =
max(d, (ฯ_)). Also, the degree of _ฮป is at
most c_ฮป.
In order to determine the cost of the Decide process at Step
<ref>, let a be the degree of v and b be the maximum of the
partial degrees of ฯ_i's w.r.t. u. By the complexity analysis of
<cit.>, Step (1) above is performed within
O( b^4a ) arithmetic operations in [t] using a classical
evaluation interpolation scheme (there are b polynomials to interpolate, all
of them being of degree โค 2ab). Step (2) above requires O(
a^4log(a) ) arithmetic operations in (see the complexity
analysis of <cit.>). Finally, in Step (3), we
evaluate the signs of b polynomials of degree โค 2ab at the real roots of
v (of degree a) whose Thom encodings were just computed. This is performed
using O( b a^3( (log(a) + b) ) ) arithmetic
operations in following the complexity analysis of <cit.>. The sum of these estimates lies in O(b^4a + b
a^4( (log(a) + b) ) ).
Now, recall that the degree of v is the degree of _ฮป, so a โค
c_ฮป. The degree of ฯ_i w.r.t. u equals t_i and t_iโค n.
This means b โค n. All in all, we deduce that the total cost of this final
step lies in O( n^4c_ฮป + n^2c_ฮป), which
is negligible compared to the previous costs.
In the worst case, one need to consider all the partitions of n of
length at least s. Thus the total complexity of Real_emptiness is
โ_ฮป, โโฅ s(
ฮด^2c_ฮป(e_ฮป + c_ฮป^5)n^4ฮ)
operations in . In addition, Lemma 34 in
<cit.> implies that
โ_ฮป, โโฅ s
c_ฮปโค c and โ_ฮป, โโฅ s e_ฮปโค e,
where c = (ฮถ_f_ฮป)^sn+ฮด-1 n and e = n((ฮถ_f_ฮป)+1)^sn+ฮด
n. Notice further that n+ฮดฮดโค (n+1)n+ฮด-1
d and e =n(d+1)^sn+ฮด nโค
n(n+1)c^5 for ฮดโฅ
2. In addition, since deg(ϕ_a) ≤ max(t_i)+1 ≤ n,
the total cost of our algorithm is
\widetilde{O}(d^2 n^6 c^6 Γ) = \widetilde{O}\big(d^{6s+2}\, n^{11}\, \binom{n+d}{n}^{6}\,\big(\binom{n+d}{n} + \binom{n}{s+1}\big)\big)
operations in ℚ.
ยง.ยง An example
Let n=4 and s=1 with f = (f) where
f = x_1^2 + x_2^2 +
x_3^2 + x_4^2 -6x_1x_2x_3x_4 -1.
Consider first the
partition ฮป= (4^1). Then f_ฮป := _ฮป(f)=
-6z_1,1^4+4z_1,1^2-1 which has no real solution as
f_ฮป = -2z_1,1^4-(2z_1,1^2-1)^2 < 0 for all z_1,1โ.
Next we consider ฮป=(2^2). Then
f_(2^2) = 2z_1,1^2+2z_2,1^2
-6z_1,1^2z_2,1^2-1
and we take ฯ =
5(z_1,1^2+z_2,1^2)-9(z_1,2+z_2,1)-3. In this case
ฮถ_f_(2^2)ย =ย 2e_1,1^2-6e_2,1^2-4e_2,1-1 and
ฮถ_ฯ =
5e_1,1^2-9e_1,1-10e_2,1-3.
The critical points of ฮถ_ฯ
restricted to V(ฮถ_f_(2^2)) are solutions to
ฮถ_f_(2^2) = ((ฮถ_f_(2^2),
ฮถ_ฯ)) = 0,
that is 2e_1,1^2-6e_2,1^2-4e_2,1-1 =
120e_1,1e_2,1-108e_2,1-36 = 0. A zero-dimensional parametrization of these critical points is
given by
((v, v_1,1, v_2,1), ฮผ), where
v = 200t^4-360t^3+62t^2+60t-27,
v_1,1 = t, ย andย
v_2,1 =
-1/6t^3+9/20t^2-31/600t-1/20.
At the final step, we check that the system
ฯ_1 = v = 0, with ฯ_1 =
v'u^2-v_1,1u+v_2,1โ[t,u],
has real solutions. This implies that V_(f) is non-empty.
The output of our algorithm is consistent with the fact that the point
(1,1,1/2,1/2) is in V_(f).
ยง TOPICS FOR FUTURE RESEARCH
Determining topological properties of a real variety V_(f) is an
important algorithmic problem. Here we have presented an efficient
algorithm to determine if V_(f) is empty or not. More generally,
we expect that the ideas presented here may lead to algorithmic
improvements also in more refined questions, like computing one point
per connected component or the Euler characteristic for a real
symmetric variety. Furthermore, while our complexity gains are
significant for symmetric input we conjecture that we can do better in
certain cases. In particular, when the degree of the polynomials is at
most n then we expect we that a combination with the topological
properties of symmetric semi algebraic sets found in <cit.> can reduce the number of orbits considered,
for example, instead of n^d we might only need n^d/2 for fixed
d. Finally, a generalization to general symmetric semi algebraic
sets should be possible.
ACM-Reference-Format
|
http://arxiv.org/abs/2306.10776v1
|
20230619085008
|
Transition between radial and toroidal orders in a trimer-based magnetic metasurface
|
[
"Vladimir R. Tuz",
"Andrey B. Evlyukhin",
"Volodymyr I. Fesenko"
] |
physics.app-ph
|
[
"physics.app-ph"
] |
APS/123-QED
[email protected]
^1State Key Laboratory of Integrated Optoelectronics, College of Electronic Science and Engineering, International Center of Future Science, Jilin University, 2699 Qianjin St., Changchun, 130012, China
^2School of Radiophysics, Biomedical Electronics and Computer Systems, V.ย N.ย Karazin Kharkiv National University, 4, Svobody Sq., Kharkiv 61022, Ukraine
^3Institute of Quantum Optics, Leibniz Universitรคt Hannover, 30167 Hannover, Germany
^4Institute of Radio Astronomy of National Academy of Sciences of Ukraine, 4 Mystetstv St., Kharkiv 61002, Ukraine
Magnetic materials are used in a wide variety of technologies, being key components of microwave and quantum devices. The properties of such materials manifest themselves in an external magnetic field and are mainly determined by the order of intrinsic magnetic moments (spin) associated with their subatomic particles. Metasurfaces, the two-dimensional counterparts of metamaterials, can be a platform for studying magnetic orders due to the possibility of excitation of dynamic magnetic moments in their particles (meta-atoms) by incoming electromagnetic radiation. In this paper, the change in the arrangement of magnetic dipole moments in a magnetic metasurface, due to the influence of an external static magnetic field, is discussed. Each meta-atom of the metasurface is composed of three identical disk-shaped resonators (trimer) made of magnetically saturated ferrite. To provide physical insight, full-wave numerical simulations of the near-fields and transmission characteristics of the metasurface are complemented by the theoretical description based on the multipole decomposition method. With these methods, the study of eigenmodes and scattering conditions of a single magnetic resonator, trimer, and their array forming the metasurface is performed. It is found that the magnetic dipole-based collective hybrid mode of the trimer can be gradually transformed from the radial (pseudomonopole) to azimuthal (toroidal) order by increasing the bias magnetic field strength. This is accompanied by various patterns of localization of the electric field inside the meta-atoms. Due to the unique field configuration of these modes, the proposed metasurface can be considered for designing magnetic field sensors and nonreciprocal devices.
78.20.Ek, 78.20.Fm, 78.67.Pt
Transition between radial and toroidal orders in a trimer-based magnetic metasurface
Volodymyr I. Fesenko^4
July 31, 2023
====================================================================================
ยง INTRODUCTION
The interpretation of magnetic effects in solids has developed from two concepts <cit.>. The first is the examination of microscopic electrical currents, such as orbital currents in atoms, or magnetic dipole moments of subatomic particles related to their intrinsic angular momentum (spin). Induced magnetic moments are produced by an externally applied magnetic field, whereas spontaneous moments are present even in the absence of this field. The second concept involves the consideration of mutual interactions of the magnetic moments through ordinary dipole-dipole and quantum mechanical forces. These so-called exchange forces depend on the separation of the magnetic ions as well as their internal arrangement, leading to a variety of magnetic orders in solids. The important aspects of the study of magnetic orders regarding their nature are (i) the type of arrangement of magnetic moments and (ii) the characteristics of ordering processes associated with the corresponding phase transitions and other critical phenomena. These magnetic phase transitions can occur both from an ordered state to a disordered state and from one magnetic order to another (e.g., the ordered magnetic moments change and become disordered at the Curie temperature; on phase transitions and critical phenomena, see Ref. <cit.>).
A typical example of the influence of order on the magnetic properties of solids is the difference between ferromagnets and antiferromagnets (these materials belong to a class of ferroics; regarding the classification of materials according to their magnetic properties, see tutorials and reference books <cit.>). The order parameter of such materials is related to magnetic dipole moments and is known as magnetization. Thus, ferromagnets exhibit parallel alignment of static magnetic moments which results in the magnetization of the material even in the absence of an external bias magnetic field. In antiferromagnets, there is the presence of equal magnetic moments which are aligned in opposite directions resulting in the antiparallel moments canceling each other. This makes the total magnetic moment of antiferromagnets equal to zero. Nevertheless, the simple classification into ferromagnetic and antiferromagnetic orders cannot do justice to the rich variety of other possible ordered patterns. For instance, there are substances with molecules in which magnetization vectors have azimuthal arrangement forming closed loops <cit.>. They are termed ferrotoroidal materials whose order parameter is called toroidization or the ferrotoroidal moment (in what follows, for brevity, the prefix `ferro' is omitted; see recent reviews on the appearance of the toroidicity in natural materials in Refs. <cit.>).
Various models are used to describe the possible orders in solids considering interactions between systems of particles and providing a framework for the study and classification of critical phenomena including phase transitions <cit.>. In particular, the two-dimensional Ising model describes the interaction of particles that possess a magnetic dipole moment. In this model, it is supposed that the energy of a system of such particles depends on the orientation of different magnetic dipole moments relative to each other as well as to an externally applied magnetic field <cit.>. Recently, it has been proposed to study different magnetic orders with metasurfaces <cit.>. They are arrays of artificially structured subwavelength resonant particles that facilitate to reach of desirable electromagnetic functionality. Metasurfaces can maintain artificial magnetism because any desired configurations of their meta-atoms can be created <cit.>. In such meta-atoms, dynamic magnetic moments can arise from macroscopic currents induced by oscillating electromagnetic fields originating from incoming radiation.
In this regard, the use of metasurfaces for the study of toroidicity has acquired particularly significant attention. It is due to the fact that natural materials bearing the toroidal moment are scarce, and the manifestation of the toroidicity in such materials is usually weak. However, a strong toroidal response can be created in metasurfaces when their meta-atoms are properly designed. This implies the excitation of a toroidal dipole moment from a closed loop of magnetic dipoles inherent in the constituent particles forming a meta-atom. In this manner, the toroidal response is constructed in various metasurfaces whose meta-atoms are composed of either metallic <cit.> or dielectric <cit.> particles (for a review on the application of metasurfaces bearing toroidal moments, see Ref.ย <cit.>).
In our previous research <cit.>, the general conditions for the existence of toroidal dipole moments in metasurfaces composed of dielectric particles are revealed involving the multipole decomposition method and group-theoretical analysis. These conditions are checked against both numerical simulations and microwave experiments. Similar to the ferromagnetic and antiferromagnetic orders in ferroics, the toroidic and antitoroidic orders are found to exist in such metasurfaces. Further development in this field may include utilizing magnetic materials in metasurfaces to implement their tunable configurations <cit.>. Specifically, these can be structures that change their dynamic magnetic order under external influence. As a significant advantage, incorporating magnetic materials into metasurface compositions can enhance magneto-optical effects and open prospects in developing advanced nonreciprocal devices <cit.>.
Therefore, in the present paper, our goal is to demonstrate that in a metasurface composed of magnetic particles, the transition between different orders of magnetic dipole moments can arise under the action of an external magnetic field. These magnetic moments belong to the particles that compose the meta-atoms (here we consider meta-atoms made in the form of trimers) and their order is associated with the hybridization of the eigenmodes of the particles. These orders are the azimuthal (toroidal) and radial (pseudomonopole) arrangements of the magnetic moments (the difference between radial and azimuthal is that radial is arranged like rays that radiate from or converge to a common center while azimuthal is of or pertaining to the azimuth, i.e., in a horizontal closed loop). To describe the transition between these two orders influenced by an external static magnetic field, we utilize the multipole decomposition method developed earlier for the calculation of scattering by magnetic particles <cit.>. The theoretical consideration is supplemented by numerical simulations performed with the commercial COMSOL MULTIPHYSICS solver for the metasurface operating in the microwave range. The simulation assumes that the particles are made from a magnetic material (ferrite), which is widely used in practice when constructing microwave devices.
ยง PROBLEM STATEMENT
In what follows, we consider a metasurface composed of trimer-based meta-atoms adjusted in a two-dimensional array as shown in Fig.ย <ref>. The translation unit cell of this array is a square with the side size p. Each meta-atom consists of three identical cylindrical resonators (disks) located at the vertices of an equilateral triangle lying in the x-y plane with their axes parallel to the z-axis. The distance between adjacent disks in the trimer is d. The radius and height of the disks are a and h, respectively.
We consider that the metasurface is normally irradiated by the field of a plane wave with the wave vector k = {0, 0,-k_z} and wavelength ฮป = 2ฯ c/ฯ, where ฯ is the angular frequency and c is the speed of light in vacuum. We define that the incident plane wave is either linearly or circularly polarized.
As a material for disks, we choose a widely used ferrite <cit.>. An external static magnetic field H_0=z_0H_0 is applied along the z-axis parallel to the magnetization M=z_0M_s, where z_0 is the unit vector in the z-axis direction and M_s is the saturation magnetization. Since the direction of propagation of the incident wave coincides with the bias of the external static magnetic field (kโฅH_0), the problem is stated in the Faraday configuration.
In the chosen coordinate frame and accounting for the time factor exp(-iωt), ferrite is characterized by the scalar permittivity ε and the second-rank permeability tensor written in the form
μ̂ = μ_0 \begin{pmatrix} μ & iμ_a & 0 \\ -iμ_a & μ & 0 \\ 0 & 0 & 1 \end{pmatrix}.
The elements of this tensor are
μ = 1 + χ' + iχ'' = μ' + iμ'',
μ_a = χ_a' + iχ_a'' = μ'_a + iμ''_a,
where χ' = ω_0 ω_m [ω_0^2 - ω^2(1-b^2)] D^{-1}, ω_m = μ_0 γ M_s is the magnetization frequency, ω_0 = μ_0 γ H_0 is the Larmor frequency, χ'' = ω ω_m b [ω_0^2 + ω^2(1+b^2)] D^{-1}, χ_a' = ω ω_m [ω_0^2 - ω^2(1+b^2)] D^{-1}, χ_a'' = 2b ω^2 ω_0 ω_m D^{-1}, D = [ω_0^2 - ω^2(1+b^2)]^2 + 4b^2 ω^2 ω_0^2, γ is the gyromagnetic ratio, b = μ_0 γ Ω/(2ω) is a dimensionless damping constant, and Ω is the linewidth of the ferromagnetic resonance.
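For reference, these tensor elements are easy to evaluate numerically, as in the Python sketch below; it uses the usual practical microwave units (GHz, Oe, G), and the material constants are illustrative placeholders rather than the G-400 datasheet values.

# Sketch: permeability tensor elements of a saturated ferrite versus frequency.
import numpy as np

def mu_elements(f, H0, four_pi_Ms=400.0, dH=50.0):
    """f in GHz (scalar or array), H0 in Oe, 4*pi*Ms in G, linewidth dH in Oe.
    Returns (mu, mu_a) = (1 + chi, chi_a) as complex numbers."""
    f0 = 2.8e-3*H0                       # Larmor frequency, GHz (2.8 MHz/Oe)
    fm = 2.8e-3*four_pi_Ms               # magnetization frequency, GHz
    b  = 2.8e-3*dH/(2.0*f)               # dimensionless damping
    D  = (f0**2 - f**2*(1 + b**2))**2 + 4*b**2*f**2*f0**2
    chi   = f0*fm*(f0**2 - f**2*(1 - b**2))/D + 1j*f*fm*b*(f0**2 + f**2*(1 + b**2))/D
    chi_a = f*fm*(f0**2 - f**2*(1 + b**2))/D + 1j*2*b*f**2*f0*fm/D
    return 1 + chi, chi_a

f = np.linspace(6, 12, 601)              # the frequency band fixed in this study
mu, mu_a = mu_elements(f, H0=3000.0)
print(mu[0], mu_a[0])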
To be specific, the commercially available Ga-doped polycrystalline yttrium iron garnet (YIG) G-400 is chosen as the ferrite material for the disks <cit.>. The properties of this material appear in the microwave range f ∈ [6,12] GHz, f = ω/2π, which we fix for our study. To construct the metasurface, we consider that the disks are embedded into a homogeneous medium (host) made from an air-like material with constitutive parameters ε_h = μ_h = 1 (although this paper is purely theoretical, see our experimental microwave samples of dielectric metasurfaces bearing toroidal resonances and the corresponding measurement techniques in Refs. <cit.>).
ยง MAGNETIC DIPOLE MODE OF Aย SINGLE FERRITE RESONATOR
As was previously revealed <cit.>, a dielectric (nonmagnetic) metasurface composed of trimers arranged in square unit cells indeed demonstrates a dynamic toroidal response, when the structure is excited by a normally incident wave with a proper polarization. This response arises due to the hybridization of a natural mode (eigenmode) of individual resonators that form the meta-atoms. It is the hybrid electromagnetic HE_11โ mode of a cylindrical resonator (the HEM_11โ mode in the nomenclature of Ref. <cit.>). In the subscripts of this mode abbreviation, the first index denotes the azimuthal variation of the fields, the second index denotes the order of variation of the field along the radial direction, and the third index denotes the order of variation of fields along the z-direction. Since we do not fix the exact number of the field variations along the z-axis, we substitute the third index with the letter โ (this notation is based on the mode nomenclature of cylindrical dielectric waveguides introduced in Ref.ย <cit.> where the subscripts nm of the HE_nm mode refer to the nth order and mth rank, and the rank gives the successive solutions of the boundary condition equation involving the Bessel function J_n).
For the HE_11โ mode, the fields are azimuthally dependent and the H_z component is sufficiently smaller than the E_z component. From the theory of dielectric resonators <cit.>, it is known that this mode radiates like a magnetic dipole m oriented in-plane of the disk, and in terms of the Mie-theory, it is considered as a magnetic dipole (MD) mode <cit.> (hereinafter we will use this shorter notation). Within the framework of the present consideration, it is necessary to find out what changes appear in the characteristics of the MD mode when the resonator is made of ferrite influenced by an external static magnetic field. Obviously, these characteristics are determined in part by the properties of the constitutive unbounded (bulk) ferrite and the direction of application of the external static magnetic field with respect to the rotation axis of the disk-shaped resonator and the primary wave incidence.
ยง.ยง Magnetic subsystem conditions
As is well known, the eigenwaves of bulk ferrite that are propagated along the magnetization direction are the left-handed (LCP) and right-handed (RCP) circularly polarized waves, which differ in the propagation constants (the Faraday effect). Near the ferromagnetic resonance, circular dichroism also takes place, which manifests itself in a significant attenuation of the LCP eigenwave as it propagates. From a phenomenological point of view, it is explained by the fact that each of those waves propagates in the medium with a different effective permeability
ฮผ^ยฑ=ฮผยฑฮผ_a=(ฮผ^ยฑ)^'+i(ฮผ^ยฑ)^'',
where ฮผ^+ and ฮผ^- are related to the LCP and RCP waves, respectively.
Moreover, in the ferrite resonator under consideration, the dominating components of the magnetic field H of the MD mode are transverse to the bias magnetic field H_0 (HH_0, H_zโ 0). This feature and the difference in the eigenwaves propagation condition in the bulk ferrite result in the degeneracy lifting for the MD mode in the disk-shaped resonator and the corresponding eigenfrequency splitting <cit.>. In what follows, we distinguish these split modes as MD^+ and MD^-.
The eigenfrequencies of the MD^ยฑ modes of a ferrite disk-shaped resonator can be estimated using the following empirical relation <cit.>
f^± ≈ (6.324c/(2πa√(εμ^± + 2)))[0.27 + 0.36(a/2h) + 0.02(a/2h)^2].
This estimation is valid in the range a/h โ [0.33, 5.5] with an accuracy of about 2% for calculation of the resonant frequency for both MD^+ and MD^- modes <cit.>.
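A short numerical sketch of this estimate is given below; the disk dimensions, permittivity and effective permeabilities are placeholders chosen only to show how the splitting of f^+ and f^- follows from μ^+ ≠ μ^-.

# Sketch: empirical estimate of the split MD resonance frequencies of a ferrite disk.
import numpy as np

c = 3e8                                   # speed of light, m/s

def f_MD(a, h, eps, mu_eff):
    """Empirical resonance frequency (Hz) of the MD mode for effective permeability mu_eff."""
    x = a/(2*h)
    return 6.324*c/(2*np.pi*a*np.sqrt(eps*mu_eff + 2))*(0.27 + 0.36*x + 0.02*x**2)

a, h, eps = 4.0e-3, 2.0e-3, 15.0          # placeholder disk radius/height (m) and permittivity
mu_plus, mu_minus = 0.6, 1.3              # placeholder Re(mu^+/-) above the resonance
print(f_MD(a, h, eps, mu_plus)/1e9, f_MD(a, h, eps, mu_minus)/1e9)   # GHz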
To show the discussed characteristics explicitly, we collect all the parameters related to the bulk ferrite and the MD mode of the ferrite resonator in Fig.ย <ref>. In the chosen frequency range, the ferromagnetic resonance manifests itself in the bulk ferrite when the strength of the static magnetic field H_0 is higher than 2ย kOe. From the eigenfrequencies plotted at the bottom plane in Fig.ย <ref>(a), one can readily see that the MD^+ mode of the ferrite resonator is significantly perturbed by the ferromagnetic resonance and its dispersion curve has a discontinuity, whereas the MD^- mode does not undergo any noticeable change in its dispersion. Due to circular dichroism, there is a significant attenuation of the MD^+ mode of the ferrite resonator nearby the ferromagnetic resonance frequency.
One can also notice that the MD^+ and MD^- modes do not degenerate in the ferrite particle even in the absence of the magnetic bias field (H_0=0). Previously, this peculiarity has been experimentally observed for disk-shaped resonators made of YIG as well as barium hexaferrite <cit.>. This initial mode splitting, which can reach several GHz, is not caused by residual magnetization and strongly depends on specific parameters, such as the saturation magnetization M_s and the magnitude of the anisotropy field (this effect is beyond the scope of our present study; extra details on this issue can be found in Refs.ย <cit.>).
The characteristics of the effective permeabilities ฮผ^ยฑ are also plotted in Fig.ย <ref>(b) as functions of the external magnetic field H_0. A particular strength of this field where the ferromagnetic resonance arises, is denoted as H_res. These curves show typical behaviors, where at the ferromagnetic resonance, the real part of ฮผ^+ undergoes changing from negative to positive values while the imaginary part reaches its maximum. The real and imaginary parts of ฮผ^- have only a slight monotonous decrease with increasing H_0.
For our further consideration, we choose the parameter space of the problem in such a way that the MD mode is in the region where the real part of ฮผ^+ is greater than zero. It is a region above the ferromagnetic resonance (H_res<H_0), where microwave devices based on low-loss ferrite materials are usually operated. A schematic representation of the flow of the electric and magnetic fields as well as the orientation of the magnetic dipole moment for the MD mode of our interest is shown in Fig.ย <ref>(b) just for reference.
ยง.ยง Resonator scattering conditions
To fully reveal the MD mode features of an individual ferrite resonator, we study the scattered fields using the multipole decomposition technique developed earlier <cit.> for the scattering by magnetic particles. In general, the scattering cross-section in the multipole decomposition representation is written as <cit.>
σ_sca = k^4/(6πε_0^2 ε_h^2 |E_in|^2) |p|^2 + k^4/(6πε_0^2 ε_h^2 v^2 |E_in|^2) |m|^2
+ k^6/(80πε_0^2 ε_h^2 μ_h |E_in|^2) ∑_{βγ} |Q_{βγ}|^2 + ⋯
where k is the wave number in the surrounding medium, E_in is the electric field amplitude of the incident plane wave. Note that in Eq. (<ref>) we explicitly indicate only contributions of the scatterer's electric dipole p, magnetic dipole m, and electric quadrupole Qฬ moments. For magnetic particles, all total multipoles moments are the combinations of two parts originating from the electric polarization P( r)=ฮต_0(ฮต-ฮต_h) E( r) and magnetization M( r)=(ฮผฬ-ฮผ_h1ฬ) H( r) induced by incoming radiation inside the scatterer. Here E( r) and H( r) are the total oscillating electric and magnetic fields at the point r inside the scatterer, respectively, and 1ฬ is the 3ร3 unit tensor. In general, the magnetic and electric dipole moments are given by the sums
๐ฆ=๐ฆ^E+๐ฆ^H, ๐ฉ=๐ฉ^E+๐ฉ^H,
where the superscripts E and H are related to the dipole moments determined by the polarization ๐ (electric part) and the magnetization ๐ (magnetic part), respectively. The exact expressions for the magnetic and electric dipole moments can be found in Ref. <cit.>. Here, for the convenience of our discussion, we present the expressions for these dipole moments in the long wavelength approximation (LWA) which is well applicable when the size of scatterers is smaller than the incident wavelength <cit.>.
Thus, for the first-order terms of the LWA, we have
m^E ≈ -(iω/2) ∫_{V_p} [r×P] dr,
m^H ≈ (1/μ_h) ∫_{V_p} M dr + (k^2/(10μ_h)) ∫_{V_p} [r(r·M) - 2r^2 M] dr,
p^E ≈ ∫_{V_p} P dr + (μ_h k^2/10) ∫_{V_p} [(r·P) r - 2r^2 P] dr,
p^H ≈ (ik^2/(2ωμ_h)) ∫_{V_p} [r×M] dr.
Note that the second integral terms in Eqs. (<ref>) and (<ref>) are introduced as the magnetic T^ H and electric T^ E toroidal dipole moments, respectively <cit.>.
To characterize absorption in a scatterer, the absorption cross-section σ_abs is used, which for magnetic particles is determined by the expression
σ_abs = (k/(ε_0 ε_h |E_in|^2)) Im ∫_{V_p} {(P·E^*) + μ_0 (M·H^*)} dr.
Note that for scatterers with real permittivity, as in our case, the contribution of P to the absorption cross-section is zero. Therefore, the absorption is associated only with the magnetic subsystem, so that
σ_abs = (k/(μ_h |H_in|^2)) Im ∫_{V_p} (M·H^*) dr,
where (*) denotes the complex conjugation, H_in is the magnetic field amplitude of the incident plane wave.
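In discretized form these expressions reduce to sums over field samples inside the particle. The following sketch (ours; in practice the sampled arrays r, P, M, H would be exported from the full-wave solver) evaluates the first-order LWA dipole moments and the absorption cross-section from such samples.

# Sketch: LWA dipole moments and absorption cross-section from sampled fields.
import numpy as np

def dipole_moments_lwa(r, P, M, dV, omega, k, mu_h=1.0):
    """r, P, M: (N, 3) arrays of sample positions, polarization and magnetization;
    dV: cell volume. Returns (m_E, m_H, p_E) in the first-order LWA."""
    r2 = np.sum(r*r, axis=1, keepdims=True)
    m_E = -0.5j*omega*np.sum(np.cross(r, P), axis=0)*dV
    m_H = (np.sum(M, axis=0)
           + (k**2/10.0)*np.sum(r*np.sum(r*M, axis=1, keepdims=True) - 2*r2*M, axis=0))*dV/mu_h
    p_E = (np.sum(P, axis=0)*dV
           + mu_h*(k**2/10.0)*np.sum(r*np.sum(r*P, axis=1, keepdims=True) - 2*r2*P, axis=0)*dV)
    return m_E, m_H, p_E

def sigma_abs(M, H, dV, k, H_in, mu_h=1.0):
    """Absorption cross-section from magnetization and total magnetic field samples."""
    return k/(mu_h*abs(H_in)**2)*np.imag(np.sum(M*np.conj(H))*dV)

# tiny synthetic demo (random stand-in for solver output)
rng = np.random.default_rng(0)
r = rng.uniform(-1e-3, 1e-3, (500, 3))
P = rng.normal(size=(500, 3)) + 1j*rng.normal(size=(500, 3))
M = rng.normal(size=(500, 3)) + 1j*rng.normal(size=(500, 3))
print(dipole_moments_lwa(r, P, M, dV=1e-12, omega=2*np.pi*9e9, k=188.5))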
For the calculation of the scattering and absorption cross-sections as well as the introduced dipole moments, the corresponding procedures are implemented in the RF module of the commercial COMSOL MULTIPHYSICS software. The realization of the computational model in this solver for both single particle and metasurface has been described earlier (see Refs. <cit.>), therefore here we omit all the details so as not to overburden the presentation. To reveal the scattering characteristics caused by the excitation of both MD^+ and MD^- modes, we consider the frontal irradiation of the disk-shaped resonator by a wave with linear polarization. For definiteness, we fix that the vector E of the incident wave is oriented along the y-axis. After simulation, we retrieve from the COMSOL MULTIPHYSICS data both scattering and absorption cross-sections as functions of the frequency f and the external static magnetic field strength H_0. The implementation of expressions (<ref>)-(<ref>) for the dipole contributions in the COMSOL MULTIPHYSICS also makes it possible to visualize the distribution of fields and the orientation of the dipole moments inside the resonator and, thus, to reveal the influence of the bias magnetic field on the MD mode behaviors. The obtained results are summarized in Fig.ย <ref>.
In Fig. <ref>(a), one can readily identify the manifestation of both the magnetic subsystem itself and the split MD resonances of the disk-shaped ferrite particle. In particular, the maximal magnitudes in the scattering and absorption cross-sections can be uniquely matched with the MD^± dispersion curves and the ferromagnetic resonance position in the f-H_0 space, respectively, shown above in Fig. <ref>(a). However, since the primary wave has a linear polarization, the areas of existence of both split MD modes experience a discontinuity in the region of the ferromagnetic resonance. In fact, the MD^± resonances disappear completely in the region where the real part of μ^+ is negative, within which the magnetic losses are significant and the field cannot be localized inside the particle <cit.>. We should note that such an effect for axially magnetized ferrite disks has been theoretically predicted <cit.> and experimentally confirmed for doped YIG disks in the microwave frequency range (see also Fig. 3 in Ref. <cit.>). This provides additional verification of our results.
Figure <ref>(b) demonstrates the separate contributions of the electric m^E and magnetic m^H parts of the magnetic dipole moment m to the scattering. One can conclude that the m^H part makes a substantial contribution near the frequency of ferromagnetic resonance, where it affects the behaviors of the MD^+ mode and is responsible for absorption. In the rest of the range of parameters f-H_0, the contribution of this part of the magnetic dipole moment is negligible, and the resonant scattering properties of the ferrite disk are determined mainly by the m^E contribution. The apparent resonant behavior of the scattering cross-section in the range f>10 GHz and H_0>3 kOe arises due to the excitation of an electric dipole resonance, but it is not important for our present consideration.
The specific influence of the magnetic subsystem is that the overall magnetic moment m of the MD^+ mode undergoes rotation as the magnetic field strength H_0 increases, whereas no such changes are observed for the MD^- mode. The representative patterns of the magnetic field magnitude and flow as well as the orientation of the vector m for these split modes are shown in Fig. <ref>(c). In particular, the presented pictures suggest that the rotation of the magnetic dipole moment m for the MD^+ mode is clockwise, with increasing H_0. Further, it is of interest to find out how this rotation manifests itself in a system of several electromagnetically coupled ferrite particles, particularly, in trimers that form a metasurface.
§ MAGNETIC DIPOLE MOMENTS HYBRIDIZATION IN TRIMER
Let us now turn to the study of the spectral behaviors of our metasurface based on trimers. It is known that the electromagnetic response of a metasurface with composite meta-atoms (clusters) containing three or more resonant particles is determined both by the properties of the individual particles and, to a greater extent, by the near-field coupling between these particles within the cluster <cit.>. This electromagnetic coupling gives rise to collective hybrid modes of the cluster. Among the large variety of such collective modes, of particular interest are the so-called radial (pseudomonopole) and azimuthal (toroidal) modes <cit.> that arise from the MD mode hybridization (hereinafter we denote it as the collective MD mode). These modes are not radiative, which is important from the practical viewpoint, making it possible to realize strong field localization inside the system when the metasurface is operated on these modes (nonetheless, for the trimers, the pseudomonopole exhibits a nonzero magnetic octupole moment Q^(m)_yxx, which in some cases may unmask it <cit.>). Although both of these modes of the trimer arise from the MD mode of the individual disk-shaped resonators considered above, there is some difference in the arrangement of the three magnetic dipole moments, each of which belongs to the corresponding particle (recall that these magnetic moments still lie in the horizontal plane of the metasurface). In particular, the toroidal dipole mode appears from a head-to-tail arrangement of the in-plane magnetic dipole moments, so that the toroidal dipole moment of this system is oriented out-of-plane. In contrast, for the pseudomonopole, there is a tail-to-tail (or head-to-head) arrangement (see Table <ref> in Appendix <ref>).
We should note that the choice of a trimer-based meta-atom for the formation of our metasurface is motivated by the fact that such a cluster provides the most straightforward rotational arrangement of particles able to support the toroidal mode <cit.>. Moreover, such a cluster configuration allows one to efficiently separate the toroidal dipole from the other higher-order electric and magnetic multipoles that exist in the meta-atom, compared to clusters composed of an even number of particles, such as a four-rod configuration <cit.>.
Next, we study the manifestation of these modes in the transmission spectra of our metasurface and track the changes in the position of the corresponding resonant frequencies as the strength of the bias magnetic field H_0 increases. For this study, we consider that the metasurface is illuminated by a normally incident plane wave with either left-handed (LCP) or right-handed (RCP) circular polarization. We have chosen circularly polarized waves to irradiate the structure since they are eigenwaves of the longitudinally magnetized ferrite, providing independent excitation of the MD^+ and MD^- modes for the purity of our numerical experiment.
In the framework of the RF module of the COMSOL MULTIPHYSICS software, a single unit cell of the metasurface containing a trimer is implemented, whereas the two-dimensional arrangement is imitated by applying periodic (Floquet) boundary conditions that repeat the unit cell infinitely in both transverse directions. We place the radiating and receiving ports above and below the metasurface, respectively. Just as before, after the simulation, we retrieve values of the transmission coefficient |T|^2 = |S_21|^2 as a function of the frequency f and the strength of the external magnetic field H_0 for the LCP and RCP incident waves.
The results of these calculations are summarized in Fig. <ref>. The collective MD mode of the trimer is effectively excited in the metasurface when it is irradiated with an LCP wave, while under RCP irradiation only a very slight manifestation of this mode is observed in the transmission spectra. When H_0 increases, the corresponding resonant frequency shifts towards higher frequencies and the quality factor of the resonance changes, whereas the manifestation of the resonance in the spectra of the RCP wave becomes smoothed out (for convenience, in the figure we have marked the positions of the collective MD resonance on the frequency scale with grey vertical lines ending with arrows and also indicated its particular resonant frequencies in rectangular frames). The change in the quality factor is explained by the approach of the mode frequency to the ferromagnetic resonance frequency and, thus, an increase in the level of magnetic losses. Moreover, this effect is also caused by a change in the character of the collective MD mode itself. Thus, the appearance of the collective MD mode in the transmission spectra of the RCP wave at low H_0 indicates that the field distribution of this mode is different from that at higher H_0, where this resonance completely disappears. It means that, in this particular H_0 range, components may appear in the field of the collective MD mode that can interact with the magnetic subsystem even when the metasurface is irradiated with the RCP wave. All these features can be revealed from the dipole contributions and electromagnetic field patterns calculated for the metasurface unit cell.
Therefore, to elucidate all conditions of the corresponding resonance, the magnitudes of the toroidal dipole T, electric dipole p, and magnetic dipole m moments of the trimer included in the metasurface unit cell are calculated in the COMSOL MULTIPHYSICS for different values of H_0. The results are depicted in Fig. <ref>. We consider only the case when the metasurface is irradiated with the LCP wave (i.e., we are interested in the hybridization of the MD^+ mode only). These are the contributions to the effective scattering cross-section that would arise if the unit cell scattered into free space (they are calculated with respect to the center of mass of the trimer using Eqs. (<ref>)-(<ref>), where the volume V_s is replaced by the volumes of all disks forming the trimer; in our case, the centers of mass of the trimer and the unit cell coincide). The obtained spectral curves are supplemented by patterns of the magnitude distribution and flow of the magnetic and electric fields within the unit cell. These patterns are plotted in the plane fixed at the half-height of the disks (at z=0).
The obtained spectral curves suggest that in the selected frequency range, only the electric parts of the electric dipole (p^E) and toroidal dipole (T^E) moments make a significant contribution to the studied resonant state (we have not presented here the contribution of the magnetic part of the toroidal dipole moment (T^H) because of its negligible value). The small magnitude of the overall magnetic dipole moment m is explained by the fact that, due to the specific symmetry of the trimer, the magnetic dipole moments of individual resonators are mutually compensated in the cluster. As H_0 increases, the resonant frequency of the collective MD mode experiences a redshift, and the magnitude of the toroidal dipole moment noticeably increases. The most remarkable feature is that with the change in H_0, there is also a rearrangement in the order of the magnetic dipole moments belonging to the collective MD mode.
First of all, one can see from the patterns of the electromagnetic field distributions plotted within the unit cell that the appearance of the fields in each ferrite disk closely resembles that of the MD mode of the single resonator, and for all selected values of the magnetic bias field H_0, there are just different manifestations of the same collective MD mode. The evolution of this collective mode consists in the fact that as H_0 increases, the individual magnetic moment in each disk undergoes a sequential clockwise rotation. At the extreme fixed values of H_0=3.4ย kOe [Fig.ย <ref>(a)] and H_0=5.0ย kOe [Fig.ย <ref>(f)], the arrangement of magnetic dipole moments in the trimer corresponds to the radial and toroidal order, respectively, while in-between [Figs.ย <ref>(b-e)], this arrangement has an intermediate order.
This transformation from the radial to toroidal order resembles a phase transition in solids and is accompanied by a change in the localization of the electric field in the central part of the cluster. Remarkably, at the stage of existence of the radial order, the electric field intensity has a local minimum in the center of the cluster and six bright hotspots on the periphery of the disks. Contrariwise, in the toroidal order, it has a bright hotspot in the center of the cluster and three less bright hotspots on the periphery of the disks. The observed feature makes it possible to realize control over the operating regimes of the trimer-based magnetic metasurface by tuning the strength of the external magnetic field. More precisely, the near-field coupling between the particles forming meta-atoms depends on the strength of the bias magnetic field, which changes the character of electromagnetic excitation of the metasurface from the radial order to the toroidal one. Due to the distinctive field configurations of these orders, the proposed metasurface can be considered for designing magnetic field sensors and magnetic field-controlled and nonreciprocal devices (e.g., see Refs. <cit.>).
§ CONCLUSIONS
In conclusion, we have presented a thorough investigation of the transition from the radial order to the toroidal one in a magnetic metasurface under the influence of an external static magnetic field. The metasurface consists of trimer-based clusters arranged in a two-dimensional array, with the translation unit cell based on identical disk-shaped resonators made of a doped YIG ferrite. The study has been carried out with the use of the COMSOL MULTIPHYSICS finite-element electromagnetic solver. To provide physical insight, the simulations have been supplemented by analytical and numerical modeling of the eigenfrequencies and scattering characteristics of the individual ferrite resonator. The evolution of the collective mode ordering with changing the bias magnetic field has been considered. It was found that the MD-based collective hybrid mode of the trimer can be gradually transformed from the radial to the toroidal order with increasing bias magnetic field strength.
We believe that the proposed design of the metasurface can be useful in practical applications due to the possibility of easily switching the nature of the excitation between different magnetic orders and conditions of the electric field localization inside the system. Also, the results obtained here can be extended to metasurfaces containing particles made of optically active materials with tensor-valued permittivity.
§ ACKNOWLEDGMENTS
V.R.T. is grateful for the hospitality and support from Jilin University, China. A.B.E. thanks funding support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453).
§ SYMMETRY-ADAPTED LINEAR COMBINATION (SALC) APPROACH
The chosen basis for the dipole magnetic moments is shown in Fig. <ref>. The unit vectors m_i with odd and even index i are oriented parallel and perpendicular to the planes of symmetry σ, respectively. One can calculate the SALCs using the projection operators <cit.>:
m_kj^Γ = (1/g) ∑_R D_kj^Γ(R) R m_i,
where the summation is performed over the elements R of the C_3v group. These elements are e, C_3, C_3^-1, σ_1, σ_2, and σ_3 (see Ref. <cit.>). Γ is a representation A or B of the C_3v group, D_kj^Γ(R) is the kj-th component of the irreducible representation (IRREP) Γ of the element R, R m_i is the basis element m_i transformed by the operator R, and g is the order of the group (in our case, g=6).
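As a cross-check of this projection procedure, the following minimal Python sketch builds the six-dimensional representation of C_3v on the in-plane dipole basis. It assumes the three disks sit at the vertices of an equilateral triangle at azimuthal angles 90°, 210°, and 330° (an illustrative choice, not the exact geometry of the figure), applies character-based projectors with the standard C_3v irrep labels, and verifies that the radial (head-to-head) and azimuthal (head-to-tail) arrangements discussed in the main text belong to the two one-dimensional irreducible representations:

import numpy as np

angles = np.deg2rad([90.0, 210.0, 330.0])        # assumed disk positions on a circle

def rot(a):                                      # in-plane rotation by angle a
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def refl(a):                                     # reflection about an axis at angle a
    return np.array([[np.cos(2*a), np.sin(2*a)], [np.sin(2*a), -np.cos(2*a)]])

def perm(mapping):                               # site permutation matrix, j -> mapping[j]
    P = np.zeros((3, 3))
    for j, k in enumerate(mapping):
        P[k, j] = 1.0
    return P

# The six elements of C_3v acting on (site) x (in-plane vector) = 6-dimensional space
elements = [
    np.kron(perm([0, 1, 2]), rot(0.0)),                  # e
    np.kron(perm([1, 2, 0]), rot(2*np.pi/3)),            # C_3
    np.kron(perm([2, 0, 1]), rot(-2*np.pi/3)),           # C_3^-1
    np.kron(perm([0, 2, 1]), refl(angles[0])),           # sigma_1 (plane through disk 1)
    np.kron(perm([2, 1, 0]), refl(angles[1])),           # sigma_2 (plane through disk 2)
    np.kron(perm([1, 0, 2]), refl(angles[2])),           # sigma_3 (plane through disk 3)
]

chars = {"A1": [1, 1, 1, 1, 1, 1], "A2": [1, 1, 1, -1, -1, -1], "E": [2, -1, -1, 0, 0, 0]}
dims = {"A1": 1, "A2": 1, "E": 2}
for irrep, chi in chars.items():
    proj = sum(c * D for c, D in zip(chi, elements)) * dims[irrep] / 6.0
    print(irrep, "multiplicity:", round(float(np.trace(proj)) / dims[irrep], 3))

radial    = np.concatenate([[np.cos(a), np.sin(a)] for a in angles])    # pseudomonopole order
azimuthal = np.concatenate([[-np.sin(a), np.cos(a)] for a in angles])   # toroidal order
proj_A1 = sum(c * D for c, D in zip(chars["A1"], elements)) / 6.0
proj_A2 = sum(c * D for c, D in zip(chars["A2"], elements)) / 6.0
print(np.allclose(proj_A1 @ radial, radial), np.allclose(proj_A2 @ azimuthal, azimuthal))

The decomposition A1 + A2 + 2E of this basis shows that the radial and azimuthal (toroidal) arrangements each span a distinct one-dimensional irrep, consistent with their different symmetry behavior under the reflections.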
The normalized eigenvectors of the trimer are presented in the third column of Table <ref>. The corresponding eigenmodes are also identified by performing the full-wave simulation in Ref. <cit.>.
|
http://arxiv.org/abs/2306.02530v1
|
20230605013622
|
Heavy quark drag and diffusion coefficients in the pre-hydrodynamic QCD plasma
|
[
"Xiaojian Du"
] |
hep-ph
|
[
"hep-ph",
"nucl-th"
] |
[email protected]
Instituto Galego de Fรญsica de Altas Enerxรญas (IGFAE), Universidade de Santiago de Compostela, E-15782 Galicia, Spain
Kinetic and chemical equilibrations play important roles in the formation of the quark-gluon plasma (QGP) in relativistic heavy-ion collisions (HICs).
These processes further influence the production of hard and electromagnetic probes in HICs, in particular, the thermalization of heavy quarks, which are produced at an extremely early time before the formation of the QGP.
We calculate the drag and diffusion coefficients of heavy quarks in the pre-hydrodynamic quantum chromodynamic (QCD) plasma with the state-of-the-art QCD effective kinetic theory (EKT) solver.
We present the time, momentum, and angular dependencies of these coefficients for gluon and quark contributions separately, showing the effects of isotropization and chemical equilibration from the QCD plasma.
We also provide a simple formula to estimate the heavy quark energy loss within the pre-hydrodynamic plasma in both weakly and strongly coupled cases based on the attractor theory.
Heavy quark drag and diffusion coefficients in the pre-hydrodynamic QCD plasma
Xiaojian Du
July 31, 2023
==============================================================================
Introduction.—
Thermalization is omnipresent and there are two main categories of thermalization systems that are being widely studied due to the simplification of degrees of freedom at certain scales.
One is the thermalization of many-body systems, where the microscopic dynamics of single particles can be coarse-grained and the emergent behavior is more interesting.
The other one is the thermalization of open quantum systems, where the objects have distinguished degrees of freedom or scales from the background medium environment, and tracing out the environment leaves a simple dynamical description of the system in the medium.
Relativistic heavy-ion collisions (HICs) are such experiments that both categories are present for us to understand the fundamental strong interaction.
The quark-gluon plasma (QGP) containing free quark and gluon degrees of freedom, as a many-body system, can only be produced in the early universe or HICs nowadays.
The main period of the QGP evolution in HICs is successfully described by a near-equilibrium macroscopic theory, the relativistic hydrodynamicsย <cit.>, in terms of the energy-momentum tensor.
A more involved tool that can describe the non-equilibrium and pre-hydrodynamic QGP is the effective kinetic theory (EKT)ย <cit.>, in terms of particle distributions.
This theory is initially implemented as Yang-Mills kineticsย <cit.> including gluon, and developed into quantum chromodynamic (QCD) kineticsย <cit.> including both gluon and quarks.
Various models as similar approaches or as simplified versions of EKT existย <cit.>.
Although the pre-hydrodynamic QGP in HICs is complicated, as it is anisotropic and chemically out of equilibrium, there are still some universal descriptions of the pre-hydrodynamic QGP based on simple conservation laws, independent of the microscopic physics, such as the attractor theory <cit.>.
On the other hand, heavy quarks have distinguished mass scales from light partons in the QGP, and are produced as open quantum systems due to their large mass thresholds.
Heavy quark thermalization is contributed by energy loss and diffusion.
Most of the heavy quarks are produced within the momentum range p ≲ m_HQ, where the radiative energy loss can be neglected and the collisional energy loss dominates <cit.>.
They are produced at a time scale τ ∼ 1/m_HQ, before the hydrodynamization of the QGP at τ_h ∼ 4πη/(Ts), and relax at a much later time τ_R ∼ (m_HQ/T) τ_h.
Thus, most heavy quark thermalization simulations <cit.> focus on the hydrodynamic stage, when the QGP is nearly thermalized.
There are some efforts in addressing the heavy quark thermalization in the pre-hydrodynamic glasma or QGPย <cit.> taking care of the anisotropy or chemical effects.
The EKT allows us to calculate the heavy quark diffusionย <cit.> as well as jet momentum broadeningย <cit.> dynamically during the pre-hydrodynamic stage from the first principle.
Furthermore, the recent developments of the EKT to a full QCD level will complete this picture.
In this letter, we will show the first-principle calculations of heavy quark drag and diffusion coefficients in the QCD plasma, from the state-of-the-art QCD effective kinetic theory (QCD EKT) solverย <cit.> including both gluon and quark dynamics.
As an add-on, with the attractor theory, we will also provide a simple formula to estimate the heavy quark energy loss within the pre-hydrodynamic QGP in both weakly and strongly coupled plasmas.
Pre-hydrodynamic QCD plasma & attractor.—
The QCD plasma out-of-equilibrium before the formation of the hydrodynamic state can be described by the QCD EKT, with a Bjorken expansion at the early stage of HICs.
Within the EKT, the evolution of gluons and light quarks/antiquarks as constituents of the QCD plasma is formulated as a set of coupled Boltzmann equations <cit.>, with a = g, q, q̄ and flavor number N_f = 3,
∂f_a(p⃗,τ)/∂τ - (p_z/τ) ∂f_a(p⃗,τ)/∂p_z = C_a^{1↔2, 2↔2}[f](p⃗,τ).
The collision integrals C_a[f] contain elastic 2↔2 processes with screening masses fitted to the Hard Thermal Loop (HTL) calculation <cit.>, as well as collinear inelastic 1↔2 processes including the Landau-Pomeranchuk-Migdal (LPM) effect <cit.>.
Both processes are calculated with leading-order (LO) perturbative QCD (pQCD).
Details of the QCD EKT and its numerical implementations can be found in our previous paperย <cit.>.
The expansion term with a factor of 1/τ renders an anisotropization of the plasma in the longitudinal-transverse plane at early times, while the collision terms C_a[f] take over the evolution at later times and drive the plasma towards hydrodynamic equilibrium, in both the kinetic and chemical senses.
The early-time expansion and the later-on hydrodynamization are independent of the details of the microscopic interactions in the kinetic theory, resulting in a universal attractor solution.
This solution connects the energy density of the plasma at any time e(τ) to its initial value e_0 in a simple and universal way <cit.>
τ^4/3 e(τ̃) = (4πη/s)^4/9 (π^2 ν_eff/30)^1/9 (τ_0 e_0)^8/9 C_∞ ℰ(τ̃).
The function ℰ(τ̃) is called the energy attractor, expressed in terms of the universal and dimensionless time scale τ̃ = τ T s/(4πη).
The effective temperature can be evaluated by Landau matching, T = (30 e(τ)/(π^2 ν_eff))^1/4.
The N_f=3 massless QCD degeneracy factor ν_eff = ν_g + (7/4)ν_q N_f = 47.5 and C_∞ = 0.87 are both constants.
The shear viscosity over entropy density ratio η/s directly reflects how strong the interaction coupling is and how fast the equilibration proceeds.
Indeed, at any τ̃, the universal energy attractor ℰ(τ̃) characterizes the degree of thermalization, ranging from ℰ(τ̃→0) = 0 to ℰ(τ̃→∞) = 1.
One can evaluate the corresponding time at any specific τ̃ as [Do not be confused by the different τ-η/s scaling in our previous work <cit.>, where the η/s is the average η/s of the whole QGP period dominated by the hydrodynamic stage (the same as the average η/s we use in Eq. (<ref>)).
Here we consider η/s to be the average η/s in the pre-hydrodynamic stage (where hydrodynamics does not yet apply since the plasma can be highly off-thermal; this η/s reflects how fast the thermalization is, not the degree of thermalization, so it is a valid quantity), which is a short period within the whole QGP evolution. However, this is the period we are interested in if we want to evaluate heavy quark thermalization in the pre-hydrodynamic stage, regardless of whether the plasma is strongly or weakly coupled.]
τ = (4πη/s)_prehydro^4/3 (π^2 ν_eff/30)^1/3 (τ_0 e_0)^-1/3 C_∞^-3/8 ℰ^-3/8(τ̃) τ̃^3/2.
This means that a more strongly coupled plasma with a smaller η/s requires a shorter time to reach a certain degree of thermalization, while a more weakly coupled plasma with a larger η/s requires a longer time.
As a consequence, the universality of the attractor gives the approximate relation in the pre-hydrodynamic stage
τ_strong (η/s)_strong^-4/3 ≈ τ_weak (η/s)_weak^-4/3.
Although the massless QCD effective kinetic theory is conformal, one can fix the scales by matching experimental data at the end of the QGP evolution, assuming entropy conservation in the following thermal hydrodynamic stage.
In equilibrium, ℰ(τ̃ ≫ 1) = 1 and sT = e + p at zero net-baryon density. The equation of state e = 3p always holds in a conformal theory.
One has
(τ s)_eq = (4/3)(τ^4/3 e)_eq/(τ^1/3 T)_eq = (4/3)(τ^4/3 e)_eq^3/4 (π^2 ν_eff/30)^1/4,
and the entropy density can be related to the charged particle multiplicity via dN_ch/dη = (N_ch/S)(τ s)_eq S_⊥, with S/N_ch = 8.36 <cit.> and S_⊥ the transverse area of the collision.
As a consequence, one has the following relation constraining the scales in the initial condition τ_0 e_0:
dN_ch/dη = (4/3)(N_ch/S)(4πη/s)_average^1/3 (π^2 ν_eff/30)^1/3 (τ_0 e_0)^2/3 C_∞^3/4 S_⊥.
Matching the LHC 5.02 TeV Pb+Pb collision data <cit.>, dN_ch/dη = 1942 and S_⊥ = 138 fm^2, and assuming a strongly coupled plasma at the holographic bound η/s = 1/(4π) <cit.> for the hydrodynamic evolution, one can determine the value τ_0 e_0 = 1.961 GeV^3 for this specific collision system. One further has the rescaling formula to evaluate the initial energy density in other circumstances
τ_0 e_0 = 1.961 GeV^3 (dN_ch/dη / 1942)^3/2 (4πη/s)_average^-1/2 (S_⊥/138 fm^2)^-3/2.
The initial energy is mainly deposited by an over-occupied, anisotropic, and gluon-saturated state, described by the CGC-inspired distribution
f_g(p⃗,τ_0) = (10.5/λ_0) 1.8Q_s/√(p_⊥^2+(ξ p_z)^2) exp[-(2/3)(p_⊥^2+(ξ p_z)^2)/(1.8Q_s)^2],
f_q(p⃗,τ_0) = f_q̄(p⃗,τ_0) = 0,
with the momentum decomposition into transverse and longitudinal directions p⃗ = (p⃗_⊥, p_z).
The anisotropy parameter is typically chosen as ξ = 10.
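As a quick illustration, the initial gluon occupancy defined above can be tabulated directly; the minimal sketch below simply restates the parameter choices quoted in the text (Q_s is the value estimated later in this section) and is not part of the EKT solver itself:

import numpy as np

Qs, lam0, xi = 1.496, 10.0, 10.0     # saturation scale [GeV], 'tHooft coupling, anisotropy

def f_g_init(p_perp, p_z):
    """CGC-inspired initial gluon distribution at tau_0 (the expression above)."""
    q = np.sqrt(p_perp**2 + (xi * p_z)**2)
    return 10.5 / lam0 * 1.8 * Qs / q * np.exp(-2.0 / 3.0 * q**2 / (1.8 * Qs)**2)

print(f_g_init(1.0, 0.05))           # sample occupancy at p_perp = 1 GeV, p_z = 0.05 GeV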
A typical value of the 't Hooft coupling λ_0 = g_0^2 N_c chosen in simulating the weakly coupled gauge fields in the CGC effective theory is λ_0 = 10, which is also reasonable at the initial time of our QCD kinetic simulation, when the system has a high temperature.
The general 't Hooft coupling λ = g^2 N_c entering the QCD kinetic simulation controls the thermalization speed, which is reflected in the macroscopic coefficient η/s.
Although a realistic QCD running coupling requires λ to increase (resulting in a decreasing value of η/s) at the lower energy scales reached at the later stage of hydrodynamization, the massless QCD theory is scale-invariant.
We keep the coupling λ = 10 throughout the QCD kinetic simulation and rescale physical quantities to evaluate the strongly coupled plasma, where the validity of both the kinetic theory and the perturbation theory breaks down.
General rescaling can be achieved due to the universality of the attractor solutions, from the basic principles of energy conservation and conformality, regardless of the coupling or modeling.
The initial time for the QCD kinetic evolution is approximately the inverse of the saturation scale of the gauge fields, τ_0 ∼ 1/Q_s, before the formation of quasi-particles in the kinetic theory picture.
Without loss of generality, by choosing τ_0 = Q_s^-1 we can calculate the initial energy density and one gets τ_0 e_0 = 0.5858 Q_s^3 = 1.961 GeV^3.
One can then estimate that Q_s = 1.496 GeV and τ_0 = Q_s^-1 = 0.134 fm, smaller than the typical hydrodynamization time τ_h ∼ 0.2-0.6 fm but of the same order of magnitude.
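The following minimal Python sketch evaluates the entropy-matching relation and the scale estimates numerically, using ħc = 0.19733 GeV·fm and the reference values quoted above; it reproduces τ_0 e_0, Q_s, and τ_0 at the few-percent level, with the small residual differences presumably coming from rounding of the quoted inputs:

import numpy as np

hbarc = 0.19733                 # GeV fm
dNch_deta = 1942.0              # LHC 5.02 TeV Pb+Pb reference value
S_over_Nch = 8.36               # entropy per charged particle
S_perp_fm2 = 138.0              # transverse area [fm^2]
nu_eff = 47.5                   # massless QCD degeneracy factor for N_f = 3
C_inf = 0.87
four_pi_eta_over_s = 1.0        # holographic bound, 4*pi*eta/s = 1

S_perp = S_perp_fm2 / hbarc**2  # transverse area in GeV^-2

# Invert dN_ch/deta = (4/3)(N_ch/S)(4*pi*eta/s)^(1/3)(pi^2 nu_eff/30)^(1/3)(tau0 e0)^(2/3) C_inf^(3/4) S_perp
denom = (4.0 / 3.0) / S_over_Nch * four_pi_eta_over_s**(1.0/3.0) \
        * (np.pi**2 * nu_eff / 30.0)**(1.0/3.0) * C_inf**0.75 * S_perp
tau0_e0 = (dNch_deta / denom) ** 1.5          # approx. 1.9 GeV^3, close to the quoted 1.961 GeV^3

Qs = (tau0_e0 / 0.5858) ** (1.0 / 3.0)        # from tau0 e0 = 0.5858 Qs^3
tau0 = hbarc / Qs                             # initial time in fm
print(tau0_e0, Qs, tau0)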
By solving the QCD kinetic theory, one gets the time evolution of the distributions f_g(p⃗,τ), f_q(p⃗,τ), and f_q̄(p⃗,τ).
There are certain quantities that one can calculate to characterize the equilibration of the QCD plasma, such as the energy-momentum tensor
T^μν = ∫ d^3p/(2π)^3 (p^μ p^ν/p) {ν_g f_g(p⃗) + ν_q N_f [f_q(p⃗) + f_q̄(p⃗)]}.
The longitudinal pressure over energy density ratio p_L/e = T^zz/T^ττ characterizes the isotropization of the plasma, with an equilibrium limit of 1/3.
The quark over gluon energy density ratio e_q/e_g characterizes the chemical equilibration of the plasma, with an equilibrium limit of (7ν_q N_f)/(4ν_g).
We show these characteristic scales in Fig. <ref> in terms of the universal time scale τ̃ for the QCD plasma.
The anisotropy of the plasma approaches the hydrodynamic limit at around τ̃ ≈ 1-2, while its equilibrium limit is reached only after a much longer time.
The chemical equilibration roughly finishes later than τ̃ ≈ 2-3, where the quark over gluon density ratio tends to a plateau.
The non-equilibrium gluon and quark/antiquark distributions in the QCD plasma, from the initial time τ_0 ∼ 1/Q_s to the hydrodynamization time τ_h (τ̃ ≈ 1-2), serve as the background for heavy quarks to lose energy and diffuse, even before the formation of the hydrodynamic plasma.
These non-equilibrium distributions cause the heavy quark transport coefficients to deviate from their thermal values, and open an opportunity to extend heavy quark simulations to the pre-hydrodynamic stage of HICs.
Heavy quark thermalization.—
The heavy quark thermalization via soft collisions with the background QCD plasma can be described by a stochastic differential equation (SDE) in phase space (x⃗,p⃗), the Langevin equation
dx_i = p_i/E(p⃗) dτ,
dp_i = -A_i(p⃗,τ) dτ + σ_ij(p⃗,τ) dW_j,
with a Wiener process dW_j ∼ 𝒩(0,dτ) correlated as ⟨dW_i dW_j⟩ = δ_ij dτ.
Applying Ito's lemma to Eq. (<ref>) up to order 𝒪(dτ), the corresponding Fokker-Planck equation with diffusion coefficients B_ij(p⃗,τ) = (1/2)σ_ik σ_jk reads
∂f_Q(p⃗,τ)/∂τ = ∂[A_i(p⃗,τ) f_Q(p⃗,τ)]/∂p_i + ∂^2[B_ij(p⃗,τ) f_Q(p⃗,τ)]/∂p_i ∂p_j.
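To make the connection to simulations concrete, a minimal Euler-Maruyama update of the Langevin equation above could look like the following sketch. The drag and diffusion functions here are simple placeholders standing in for the tabulated A_i(p⃗,τ) and B_ij(p⃗,τ) obtained from the EKT, and all numerical values are illustrative assumptions:

import numpy as np

def A_drag(p, tau):
    """Placeholder drag vector A_i(p, tau) [GeV/fm]; stand-in for the EKT tables."""
    return 0.2 * p / (1.0 + tau)

def B_diff(p, tau):
    """Placeholder diffusion matrix B_ij(p, tau) [GeV^2/fm]; stand-in for the EKT tables."""
    return 0.05 * np.eye(3) / (1.0 + tau)

def evolve(p0, m_HQ, tau0, tau_end, dtau, rng):
    """Euler-Maruyama integration of dp_i = -A_i dtau + sigma_ij dW_j and dx_i = p_i/E dtau."""
    p = np.array(p0, dtype=float)
    x = np.zeros(3)
    tau = tau0
    while tau < tau_end:
        E = np.sqrt(m_HQ**2 + p @ p)
        sigma = np.linalg.cholesky(2.0 * B_diff(p, tau))   # B_ij = (1/2) sigma_ik sigma_jk
        dW = rng.normal(0.0, np.sqrt(dtau), size=3)        # Wiener increments
        x += p / E * dtau
        p += -A_drag(p, tau) * dtau + sigma @ dW
        tau += dtau
    return x, p

rng = np.random.default_rng(1)
x_f, p_f = evolve([1.5, 0.0, 0.0], m_HQ=1.5, tau0=0.134, tau_end=1.0, dtau=1e-3, rng=rng)
print(p_f)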
Notice that both the drag coefficients A_i(p⃗,τ) and the diffusion coefficients B_ij(p⃗,τ) depend not only on the heavy quark momentum p⃗ but also on the time τ of the evolving pre-hydrodynamic QCD plasma. There are two evolving chemical contributions, from gluon collisions gQ → gQ and from quark/antiquark collisions qQ → qQ (including the factor 2N_f in the quark sector),
A_i(p⃗,τ) = A_g,i(p⃗,τ) + A_q,i(p⃗,τ),
B_ij(p⃗,τ) = B_g,ij(p⃗,τ) + B_q,ij(p⃗,τ).
The drag and diffusion coefficients for a heavy quark Q colliding with a parton a = g, q, q̄ in the QCD plasma can be calculated as <cit.>
A_a,i(p⃗,τ) = 1/(2E(p⃗)) ∫ dΓ |M_aQ→aQ|^2 ν_a f_a(p⃗_a,τ) × (1 ± f_a(p⃗_a',τ))(1 - f_Q(p⃗',τ))(p⃗ - p⃗')_i,
B_a,ij(p⃗,τ) = 1/(4E(p⃗)) ∫ dΓ |M_aQ→aQ|^2 ν_a f_a(p⃗_a,τ) × (1 ± f_a(p⃗_a',τ))(1 - f_Q(p⃗',τ))(p⃗ - p⃗')_i (p⃗ - p⃗')_j,
with dΓ = d^3p_a/((2π)^3 2E_a(p⃗_a)) d^3p_a'/((2π)^3 2E_a'(p⃗_a')) d^3p_Q'/((2π)^3 2E'(p⃗')) × (2π)^4 δ^(4)(P + P_a - P' - P_a').
Due to the small occupation of heavy quarks, f_Q(p⃗',τ) ≪ 1, the Fermi-blocking factor for the heavy quark can be neglected.
We calculate the amplitude squares |M_aQ→aQ|^2 in LO pQCD, and further implement the dynamical screening mass m_D^2 [The dynamical screening mass m_D^2 in the amplitude square is calculated from the QCD EKT, m_D^2(τ) = 4λ/(N_c d_A) ∫ d^3p/(p(2π)^3) [ν_g C_A f_g(p⃗,τ) + ν_q C_F N_f(f_q(p⃗,τ) + f_q̄(p⃗,τ))], and is implemented in the t-channel as t → t[1 + ξ_g^2 m_D^2/(p⃗_Q - p⃗_Q')^2]. The coefficient ξ_g = 2^-3/2 e^5/6 is obtained from fitting to the HTL results, as also described in detail in our previous work <cit.>.].
The relations between A_a,i and B_a,ij are complicated in the anisotropic plasma [In the special case of an isotropic plasma, f_a(p⃗) = f_a(|p⃗|), one has the tensor decomposition of the coefficients A_i(p⃗,τ) = p_i A(|p⃗|,τ) and σ_ij(p⃗,τ) = (δ_ij - p_i p_j/p^2)√(2B_0(|p⃗|,τ)) + (p_i p_j/p^2)√(2B_1(|p⃗|,τ)), so that the diffusion coefficients are B_ij(p⃗,τ) = (1/2)σ_ik σ_jk = (δ_ij - p_i p_j/p^2) B_0(|p⃗|,τ) + (p_i p_j/p^2) B_1(|p⃗|,τ).
Heavy quark thermalization towards the expected thermal Boltzmann distribution f_Q(p⃗) = exp(-βE(p⃗)) requires the Einstein relation A_a,i = ∂B_a,ij/∂p_j - βB_a,ij p_j/E(p⃗). See more discussion in <cit.>.
However, the momentum dependencies of, and the relations between, A_a,i and B_a,ij are more complicated in an anisotropic plasma.].
To calculate the drag and diffusion coefficients, we assume a charm quark mass m_c = 1.5 GeV, and take the coupling in the collisional amplitude squares to be the same as in the background QCD plasma, α_s = g^2/(4π) = λ/(4πN_c) with λ = 10.
Heavy quark drag and diffusion coefficients.—
At mid-rapidity in HICs, one expects a longitudinally boost invariant plasma, and usually cares only about the heavy quark evolution in the transverse plane.
Without loss of generality, we choose the momentum direction of the heavy quark in the transverse plane, to be along the x-axis, the perpendicular direction in the transverse plane to be along the y-axis, and the longitudinal direction along the z-axis.
So we choose p⃗ = (p_x,p_y,p_z) = (p,0,0) in Cartesian coordinates or p⃗ = (p,cos(θ),φ) = (p,0,0) in spherical coordinates if not stated otherwise.
For the default choice p⃗ = (p,0,0), the symmetry of the integration gives vanishing A_y and A_z, as well as vanishing off-diagonal terms of the B_ij matrix, even though the plasma is anisotropic.
We plot the time evolution of the coefficients A_x, B_xx, B_yy, and B_zz in Fig. <ref>, including their gluon and quark components.
The time evolution of the coefficients in terms of τ̃ roughly features a power-law behavior.
One sees the isotropization more clearly in the lower panel of Fig. <ref>: B_yy deviates from B_zz at early times and approaches B_zz at late times.
The increasing trend of the quark contribution is clearly shown in both panels: the drag and diffusion coefficients contributed by quarks become comparable to the gluon contribution at late times.
However, due to the Bose-enhancement and Fermi-blocking factors, the quark contribution to the coefficients is not as significant as it is in the energy density of the QCD plasma, where we saw in Fig. <ref> that e_q ≈ 2e_g at late times.
The momentum dependencies of the coefficients are shown in Fig. <ref> for various times, where we have normalized everything by either A_x(p=m_HQ) or B_xx(p=m_HQ) so that these two coefficients are fixed at the point (p/m_HQ=1, 1).
By performing this normalization, one finds a universality of A_x(p) at different times for small momenta p ≤ m_HQ, and the dependence becomes linear at the late stage.
Isotropization can also be seen in the lower panel, where B_yy(p) gradually approaches B_zz(p) at later times.
B_yy(p) coincides with B_xx(p) for p ≪ m_HQ at all times, for the obvious symmetry reason.
The increasing trend of the quark contribution is also presented.
Now we relax the constraint that the heavy quark momentum lies in the transverse plane.
Still looking at the typical momentum p = m_HQ, but now in the x-z plane, we present the coefficients as functions of cos(θ) = p_z/p in Fig. <ref>. The broken symmetries in the integrals now result in many more nonvanishing coefficients A_x, A_z, B_xx, B_xz, B_yy, B_zz, which nevertheless vanish at certain angles.
For example, at cos(θ) = 0 the heavy quark momentum lies in the transverse plane and A_z vanishes, while at cos(θ) = ±1 the heavy quark momentum is along the longitudinal direction and A_x vanishes.
Phenomenological consequences.—
With the weakly coupled plasma simulated from the kinetic theory and the universality of attractor theory, we can estimate the energy loss of heavy quarks in a strongly coupled pre-hydrodynamic QCD plasma.
Indeed, for any time convolution of a physical quantity 𝒪(τ),
∫_τ_0^τ_h 𝒪(τ) dτ |_strong ≈ [∫_τ_0^τ^* 𝒪(τ) dτ |_weak] (η/s)_strong^4/3/(η/s)_weak^4/3,
where we have to fix the same initial time τ_0, since the early-time plasma is weakly coupled, up to the hydrodynamization time τ_h of the strongly coupled plasma at τ̃ ≈ 1-2.
With the attractor theory, one can estimate the hydrodynamization time τ_h of the strongly coupled plasma from the time τ^* calculated in the weakly coupled plasma at the same τ̃ ≈ 1-2.
Then the time convolution in Eq.ย (<ref>) can be simply evaluated in the weakly coupled plasma where we already have all the information.
For the weakly coupled plasma with λ = 10 that we have calculated, one has (η/s)_weak = 1 and τ^* ≈ 100-275 Q_s^-1 according to the coupling-independent universal time τ̃ ≈ 1-2.
A strongly coupled plasma also gives larger coefficients; the drag and diffusion coefficients roughly scale as 𝒪(λ) ≈ (λ/10)^2 𝒪(λ=10).
Now, one can evaluate the energy loss and diffusion of heavy quarks as
⟨Δp_i/p_0⟩_loss ≈ ∫_τ_0^τ^* [-A_i(p⃗,τ)/p_0] dτ (η/s)^4/3 (λ/10)^2,
⟨Δp_i Δp_j/p_0^2⟩_diff ≈ ∫_τ_0^τ^* [B_ij(p⃗,τ)/p_0^2] δ_ij dτ (η/s)^4/3 (λ/10)^2.
Focusing on the transverse plane and assuming the heavy quark initial momentum p_0 ≤ m_HQ to be in the x-direction, the drag coefficient has a roughly linear momentum dependence, A_x(p⃗,τ) ≈ A_x(m_HQ,τ) p_x/m_HQ, so the fractional energy loss is independent of the initial momentum.
Solving Eq. (<ref>), one has (see the supplemental material)
⟨Δp_x/p_0⟩_loss ≈ 1 - e^{-∫_τ_0^τ^* [A_x(m_HQ,τ)/m_HQ] dτ (η/s)^4/3 (λ/10)^2}.
Numerical evaluations for τ^* corresponding to τ̃ = 1 and τ̃ = 2 give
⟨Δp/p_0⟩_loss ≈ 1 - e^{-0.56 (η/s)^4/3 (λ/10)^2} up to τ̃ = 1, and
⟨Δp/p_0⟩_loss ≈ 1 - e^{-0.89 (η/s)^4/3 (λ/10)^2} up to τ̃ = 2.
Before the plasma reaches the hydrodynamic stage, it may be neither very weakly coupled nor very strongly coupled, and one may simply estimate the value with different η/s and λ accordingly.
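For quick estimates, the two closed-form numbers above can be evaluated directly; the (η/s, λ) pairs in the sketch below are purely illustrative choices for a weakly and a more strongly coupled pre-hydrodynamic plasma, not values determined in this work:

import numpy as np

def frac_loss(eta_over_s, lam, c):
    """<Delta p / p_0>_loss = 1 - exp(-c (eta/s)^(4/3) (lambda/10)^2), with c = 0.56 or 0.89."""
    return 1.0 - np.exp(-c * eta_over_s ** (4.0 / 3.0) * (lam / 10.0) ** 2)

for eta_over_s, lam in [(1.0, 10.0), (1.0 / (4.0 * np.pi), 30.0)]:    # illustrative pairs
    print(eta_over_s, lam,
          frac_loss(eta_over_s, lam, 0.56),    # up to tilde-tau = 1
          frac_loss(eta_over_s, lam, 0.89))    # up to tilde-tau = 2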
Conclusions & Outlook.—
In this letter, we calculate the heavy quark drag and diffusion coefficients in a weakly coupled pre-hydrodynamic QCD plasma from first principles with the state-of-the-art QCD EKT solver.
We present the time, momentum, and angular dependencies of the coefficients A_i, B_ij with all indices.
With arguments from the attractor theory, we provide a simple formula to evaluate the heavy quark energy loss even in a strongly coupled pre-hydrodynamic plasma.
For complete calculations of heavy quark energy loss in the pre-hydrodynamic stage, one needs to solve for the time evolution of the QCD plasma with an evolving coupling in full phase space and calculate all the drag and diffusion coefficients at all momenta and angles as functions of time, A_i(p⃗,τ), B_ij(p⃗,τ). Then one can simulate heavy quarks in this dynamically evolving plasma via the kinetic equations in Eq. (<ref>) or Eq. (<ref>), which is beyond the scope of the current work, and we leave that to a future study.
Acknowledgement.—
The author thanks Kirill Boguslavski, Florian Lindenbauer, Meijian Li, and Bin Wu for helpful discussions.
The author is supported by Xunta de Galicia (Centro singular de investigacion de Galicia accreditation 2019-2022), European Union ERDF, the โMaria de Maeztuโ Units of Excellence program under project CEX2020-001035-M, the Spanish Research State Agency under project PID2020-119632GB-I00, and European Research Council under project ERC-2018-ADG-835105 YoctoLHC.
The author also acknowledges the computational resources supported by LUMI-C supercomputer, under The European High Performance Computing Joint Undertaking grant EHPC-REG-2022R03-192 Non-equilibrium Quark-Gluon Plasma.
§ SUPPLEMENTAL MATERIAL
Below, we show how to estimate the energy loss according to the scaling and derive Eq. (<ref>).
Since A_x(p⃗,τ) ≈ A_x(m_HQ,τ) p_x/m_HQ for p_x ≤ m_HQ, one has
⟨dp_x/p_0⟩_loss = [-A_x(p⃗,τ)/p_0] dτ (η/s)^4/3 (λ/10)^2
≈ [-A_x(m_HQ,τ)/m_HQ] (p_x/p_0) dτ (η/s)^4/3 (λ/10)^2,
and the time convolution gives
-∫_τ_0^τ^* [A_x(m_HQ,τ)/m_HQ] dτ (η/s)^4/3 (λ/10)^2
≈ ∫_p_0^{p_0-Δp_x} ⟨dp_x/p_x⟩_loss = ln(1 - ⟨Δp_x/p_0⟩_loss),
so we arrive at
⟨Δp_x/p_0⟩_loss ≈ 1 - e^{-∫_τ_0^τ^* [A_x(m_HQ,τ)/m_HQ] dτ (η/s)^4/3 (λ/10)^2}.
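A quick numerical check of this derivation, with a toy drag profile standing in for A_x(m_HQ, τ) (an assumption purely for illustration), confirms that forward integration of the momentum equation reproduces the closed-form expression:

import numpy as np

eta_over_s, lam = 1.0, 10.0
scale = eta_over_s ** (4.0 / 3.0) * (lam / 10.0) ** 2
tau0, tau_star = 0.134, 20.0

def A_over_m(tau):
    """Toy profile for A_x(m_HQ, tau)/m_HQ in fm^-1 (illustrative assumption)."""
    return 0.05 / (1.0 + tau)

taus = np.linspace(tau0, tau_star, 20000)
dtau = taus[1] - taus[0]

# Direct integration of d p_x / p_x = -(A_x/m_HQ) dtau * scale
px = 1.0
for tau in taus[:-1]:
    px *= 1.0 - A_over_m(tau) * dtau * scale
loss_numeric = 1.0 - px

# Closed-form expression derived above
integral = np.trapz(A_over_m(taus), taus)
loss_closed = 1.0 - np.exp(-integral * scale)
print(loss_numeric, loss_closed)       # the two agree up to discretization error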
|
http://arxiv.org/abs/2306.02836v1
|
20230605122955
|
Limitations of Noisy Quantum Devices in Computational and Entangling Power
|
[
"Yuxuan Yan",
"Zhenyu Du",
"Junjie Chen",
"Xiongfeng Ma"
] |
quant-ph
|
[
"quant-ph"
] |
Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, P. R. China
Quantum computing devices have developed rapidly in the past decade. Tremendous efforts have been devoted to finding quantum advantages for useful but classically intractable problems via current noisy quantum devices without error correction. It is important to know the fundamental limitations of noisy quantum devices that are assisted by classical computers. For computation with general classical processing, we show that noisy quantum devices with a circuit depth of more than O(log n) provide no advantage in any quantum algorithm. This rigorously rules out the possibility of implementing well-known quantum algorithms, including Shor's, Grover's, Harrow-Hassidim-Lloyd, and linear-depth variational algorithms. Then, we study the maximal entanglement that noisy quantum devices can produce under one- and two-dimensional qubit connections. In particular, for a one-dimensional qubit chain, we show an upper bound of O(log n). This finding highlights the constraints on quantum simulation and on scalability regarding entanglement growth. Additionally, our result sheds light on classical simulatability in practical cases.
Limitations of Noisy Quantum Devices in Computational and Entangling Power
Xiongfeng Ma
Received 21 February, 2023; accepted 5 June, 2023
==========================================================================
In recent years, significant progress has been made in improving the size and quality of quantum computing devices. The most advanced quantum devices have already scaled beyond the capabilities of classical brute-force simulation <cit.>. However, they still lack the ability of scalable quantum error correction, mainly because of the relatively high noise strength. In the near term, with the number of noisy qubits from dozens to a few hundred, the ability of quantum devices is expected to remain in the middle between classical computation and fault-tolerant quantum computing. This class of quantum devices is termed Noisy Intermediate-Scale Quantum (NISQ) devices <cit.>. Although there is active progress in implementing quantum error correction <cit.>, filling the gap between the current NISQ era and the fault-tolerant quantum computing era is still expected to be a substantial challenge.
In the NISQ era, a crucial question is to what extent noisy quantum devices without quantum error correction have advantages over their classical counterparts. By the term "advantages," we refer to the ability of a noisy quantum device to cooperate with classical computers to complete a computational task faster than classical computers alone, with a non-constant ratio of complexity reduction as the problem size increases. In this work, we focus on decision problems, which provide a definitive answer and are particularly sought after for practical applications. The scope of this study does not cover sampling problems. Quantum advantages have been rigorously proven for specific decision problems. With the merits of quantum nonlocality, noisy quantum circuits only require a "shallow" depth, namely one independent of the input size, to solve a class of problems, including the magic square problem, while "shallow" classical circuits are not able to solve them <cit.>. By introducing oracle machines, noisy quantum devices show a super-polynomial separation of oracle complexity in modifications of Simon's problem and the Bernstein-Vazirani problem <cit.>. These results provide theoretical insights but do not correspond to realistic computation scenarios because of the use of oracles.
In practice, it is more important to explore advantages in efficiently solving classically intractable problems that are free of oracles. However, such advantages are severely challenged by the noise existing in quantum devices <cit.>. Many candidate algorithms for quantum advantages have been shown to lack advantages under noise, including the quantum approximate optimization algorithm and quantum annealing <cit.>. As far as we know, not even polynomial computational advantages have been established for noisy quantum devices at the moment. So whether strong advantages exist remains an open theoretical question.
On the flip side, understanding the general limitations of noisy quantum devices is crucial yet complex. When considered alone, a noisy quantum device suffers from rapid information loss under noise <cit.>. However, classical processing components, which are noiseless and have long-lived memory, consistently play a significant role in quantum algorithms. At the very least, quantum devices are controlled by classical input, and measurement outcomes undergo classical processing. In the NISQ era, classical processing is particularly relied upon to mitigate noise and enhance the power of quantum devices <cit.>. The classical processing parts are extremely flexible across different quantum algorithms, which makes it challenging to establish general theories.
In this work, we provide theoretical results on the limitations of the computational power of noisy quantum devices. We adopt a model of noisy quantum devices affected by independent depolarizing noise and lacking the ability of quantum error correction. We consider arbitrary classical processing and show that the noisy quantum devices with a depth greater than O(log n) will not provide any quantum advantages for any polynomial-time quantum algorithms.
To further characterize the power of noisy quantum devices, we study the maximal quantum entanglement production at arbitrary circuit depths. For a one-dimensional qubit chain, we show that quantum entanglement production is upper bounded by O(log n). Our results are significant for quantum simulation. The possibility of efficiently simulating highly-entangled states is excluded, such as highly excited states in general quantum systems and thermalized states in quantum dynamics. Also, our results provide insights into the classical simulatability of noisy quantum devices. The main results of our work are summarized in Fig.ย <ref>.
Model of noisy quantum devices.—In the NISQ era, we describe a noisy quantum device by layers of unitary gates, with independent single-qubit depolarizing noise channels applied after each layer. The noise acts on a single qubit as
𝒩_1(ρ) = (1-p)ρ + pI/2.
After applying all layers of gates and noise, we perform computational-basis measurements at the end of the circuit to obtain the classical output. Our model is formally defined as follows and illustrated in Fig. <ref>. A more detailed discussion can be found in Appendix <ref>.
A noisy quantum device with n qubits produces a quantum state at depth t,
ρ(t) = 𝒩∘𝒰_t∘𝒩∘⋯∘𝒩∘𝒰_2∘𝒩∘𝒰_1(|0⟩⟨0|^⊗n),
where the 𝒰's are layers of gates and 𝒩 = 𝒩_1^⊗n is the noise channel. In each layer, a qubit can be manipulated by at most one quantum gate. The classical output C_n(ρ(t)) from the measurements obeys the distribution
Pr[C_n(ρ(t)) = X] = ⟨X|ρ(t)|X⟩,
where X is an n-bit string and |X⟩ is the corresponding computational basis state.
Importantly, our model does not allow mid-circuit measurements or RESTART operations, i.e., replacing a qubit with a known pure state such as |0⟩, because both of these operations would enable fault-tolerant quantum computing, which goes beyond the scope of the NISQ era <cit.>. As a result, we lose the information of the quantum state layer by layer without the opportunity of retrieval. Regardless of how small the value of p is, we will eventually end up with the "useless" maximally mixed state ρ_0 = I/2^n, where no information remains. The convergence is exponentially fast with respect to the circuit depth t. Quantitatively, the relative entropy between the state ρ(t) and the maximally mixed state decays as
D(ρ(t)‖ρ_0) ≤ n(1-p)^t.
Another interpretation of the relative entropy D(ρ(t)‖ρ_0) is that it represents the remaining information in the state ρ(t). This is due to the relation D(ρ(t)‖ρ_0) = n - S(ρ(t)), where S(ρ(t)) denotes the von Neumann entropy of the state ρ(t). This result has been derived in previous work <cit.>, and we revisit it with a simplified proof in Appendix <ref>.
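A small density-matrix simulation makes this decay easy to visualize. The sketch below uses illustrative values of n, p, and the depth, with Haar-random two-qubit gates as a generic (assumed) choice of gate layers; it computes D(ρ(t)‖ρ_0) = n - S(ρ(t)) in bits and compares it with the bound n(1-p)^t:

import numpy as np

n, p, depth = 4, 0.05, 40
dim = 2 ** n
rng = np.random.default_rng(0)

def haar_unitary(d):
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_gate(rho, u4, i):
    """Apply a two-qubit gate on neighbouring qubits (i, i+1)."""
    U = np.kron(np.kron(np.eye(2 ** i), u4), np.eye(2 ** (n - i - 2)))
    return U @ rho @ U.conj().T

def depolarize(rho, i):
    """Single-qubit depolarizing channel on qubit i: rho -> (1-p) rho + p Tr_i(rho) x I/2."""
    t = rho.reshape(2 ** i, 2, 2 ** (n - i - 1), 2 ** i, 2, 2 ** (n - i - 1))
    reduced = np.einsum('abcdbf->acdf', t)
    replaced = np.einsum('acdf,be->abcdef', reduced, np.eye(2) / 2.0).reshape(dim, dim)
    return (1.0 - p) * rho + p * replaced

def entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

rho = np.zeros((dim, dim), complex)
rho[0, 0] = 1.0                                    # |0...0><0...0|
for t in range(1, depth + 1):
    pairs = range(0, n - 1, 2) if t % 2 else range(1, n - 1, 2)   # brickwork layers
    for i in pairs:
        rho = apply_gate(rho, haar_unitary(4), i)
    for i in range(n):
        rho = depolarize(rho, i)
    if t % 10 == 0:
        print(t, n - entropy(rho), n * (1 - p) ** t)   # D(rho(t)||rho_0) vs the bound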
Entropy analysis for a hybrid algorithm.—As we stressed earlier, we consider quantum algorithms with the assistance of classical computers. In the rest of this work, we use the term "hybrid algorithm" to emphasize the hybrid essence of quantum algorithms containing arbitrary classical processing parts. In such a hybrid algorithm, a classical computer queries noisy quantum devices in q rounds. In each round, the quantum device responds by returning measurement results to the classical computer. The setup is illustrated in Fig. <ref>. We analyze the relation between the entropy of the measurement results from all queries X_1, X_2, ⋯, X_q and the depth of the quantum circuit t. Our results show that the total entropy of the measurement results is exponentially close to the maximal entropy as the circuit depth t increases, which implies that the measurement results are exponentially close to a uniform distribution.
For each query of a noisy quantum device, the outcome X_i is obtained by measuring the state after the final layer of the circuit in the computational basis, and it carries exponentially little information. However, when extending the argument from a single query to all queries in a hybrid algorithm, we must carefully treat the correlations among outcomes from different queries {X_i}_{i=1,2,⋯,q}. Those correlations are introduced by the classical processing, which controls the quantum devices depending on the previous measurement outcomes. By a naive analysis of entropy subadditivity, correlations among different queries could reduce the total amount of entropy of the outcomes. Thus, hybrid algorithms could potentially use correlations to accumulate information and enhance quantum advantages.
We theoretically exclude such a possibility by showing that the joint measurement output (X_1, X_2, ⋯, X_q) still carries only an exponentially small amount of information as the depth t increases, regardless of the correlations resulting from classical processing. Our result is presented in the following lemma, with proof in Appendix <ref>.
Suppose we use quantum devices of depth at least t for q times. Let X_i denote the measurement result of the i-th quantum device, which takes the n_i-qubit state |0⟩^⊗n_i as input. We obtain q random variables X_1, …, X_q. Then
S(X_1,⋯,X_q) ≥ (1-(1-p)^t) ∑_{i=1}^q n_i,
where p is the strength of the depolarizing channel in noisy quantum devices as given in Definition <ref>.
The result is derived by decomposing the overall entropy into a sum of entropies conditioned on the previous queries. We stress that Lemma <ref> goes beyond the isolated analysis of noisy quantum devices that has appeared in existing works. Our analysis differs fundamentally from simply putting the measurement outcomes of each query together, and it can be utilized in studying any kind of hybrid algorithm.
Limitations of depths for computational advantages.—In this part, we analyze the limitations of hybrid algorithms with noisy quantum devices in solving decision problems. A decision problem can be understood as a computational task that requires an output bit L(x) ∈ {0, 1} depending on a given input string x. We expect an algorithm to output L(x) correctly with probability greater than 2/3. Decision problems are fundamental and general because numerous problems are equivalent to a decision problem, such as factoring, solving linear systems, and max-cut problems. Here, equivalent means that the ability to solve the corresponding decision problem in polynomial time implies the ability to solve the original problem in polynomial time. Thus, the limitations on solving decision problems directly lead to limitations on solving various other important problems outside this category.
Our entropy analysis for hybrid algorithms implies that if an algorithm queries noisy quantum devices with super-logarithmic depth, the output of the noisy quantum devices will be highly noisy. In fact, such noisy quantum devices are too noisy to provide any quantum advantage for the computation. If we replace them with random coins, the computation results will not be affected. To clarify this argument, we present the following theorem, with proof in Appendix <ref>.
Suppose a hybrid algorithm with noisy quantum devices computes a decision problem in running time T(n). Then the queries of noisy quantum devices with depth t ≥ O(log T(n)) do not provide any quantum advantage. These queries can be replaced by random coins without input or additional classical computation.
An immediate consequence of Theorem <ref> is that noisy quantum devices with super-logarithmic depth do not provide any quantum advantage, even in general hybrid computing scenarios where each noisy quantum device depends on the data obtained in previous measurements and computations. The noisy quantum devices can be replaced by classical random coins without any additional classical computation and thus cannot provide quantum advantages.
This result helps us explicitly eliminate the advantages of implementing a broad class of quantum algorithms on noisy quantum devices, namely those that only query quantum circuits of super-logarithmic depth. Examples include, but are not limited to, Shor's algorithm <cit.>, Grover's algorithm <cit.>, and the Harrow-Hassidim-Lloyd (HHL) algorithm <cit.>. We summarize these examples in Table <ref>. The required depth in the table corresponds to the best implementation we are aware of, in the sense that the device depth is optimized. For Grover's algorithm, the depth complexity has been theoretically proven to be optimal <cit.>. Recent work suggests the absence of an advantage of Grover's algorithm by analyzing the complexity of implementations of the oracles <cit.>. Our result here further shows that no advantages exist even if the oracles can be efficiently implemented, owing to the limitations of noisy quantum devices.
In the NISQ era, variational quantum algorithms are an important class of quantum algorithms designed to solve optimization problems <cit.>. Our results pose information-theoretic limitations on the depths of variational quantum circuits, which cannot be overcome by error mitigation or other algorithmic efforts. We strengthen previous results on the noise-induced barren plateau problem <cit.>: we obtain bounds on the overall information, which is stronger than the exponentially vanishing gradients for variational updates derived in that work. In this way, we exclude the possibility of advantages in variational circuits of super-logarithmic depth in a more fundamental manner.
Limitations of quantum entanglement production.—After ruling out quantum advantages of noisy quantum devices above O(log n) depth, a question immediately follows: to what extent do quantum advantages exist for arbitrary depths, including those below O(log n)? A direct answer to this question is difficult. Here, we gain insight from the entangling power by deriving limitations on the maximal entanglement production of noisy quantum devices. In this part, our results depend on the topology of the qubit connections. We study one- and two-dimensional connections, which are two typical designs for quantum devices.
For the one-dimensional topology, namely a qubit chain, quantum gates are restricted to nearest-neighbour qubits on the chain. We consider the bipartite entanglement between two contiguous halves of the chain, denoted as A and Ā. The two halves are not required to have the same length. The choice of contiguous regions is important to characterize the quantum entanglement existing globally in the system. Without this requirement, for example, one could trivially assign odd sites to A and even sites to Ā and create n noisy Bell pairs between A and Ā with a single layer of quantum gates. But then quantum entanglement exists only locally between neighbouring sites, while slightly separated parts of the system are separable. In contrast, strong entanglement between contiguous halves implies global correlations.
A key observation is that the interaction of a qubit is localized in a region whose radius grows with the depth t. In other words, the qubits interact within a light cone. We generalize this observation to entanglement spreading and derive an upper bound of the bipartite entanglement monotone E between halves of the chain,
E(A:Ā) ≤ t.
The result implies entanglement production requires sufficient circuit depth. While similar bounds have been found for noiseless dynamics of pure states <cit.>, we generalize the result to the mixed-state case.
On the other hand, noisy quantum devices suffer from an exponential loss of information with increasing depth t, which also leads to exponentially rapid loss of entanglement. The two effects jointly lead to a logarithmic upper bound of entanglement production in noisy quantum devices, as stated in the following theorem, with proof in Appendix <ref>.
For any noisy quantum device with the number of qubits n > 1/(1-p) in the one-dimensional qubit topology, the quantum relative entropy of entanglement between a contiguous half A and the other half Ā is upper bounded by E_R(A:Ā) ≤ log(n)/(-log(1-p)), where p is the noise strength in Eq. (<ref>). The quantum mutual information obeys the same bound, I(A:Ā) ≤ log(n)/(-log(1-p)).
Our result has important implications for the quantum simulation of many-body systems, which is an important application of quantum computing devices <cit.>. It implies that any quantum system with super-logarithmic entanglement scaling cannot be efficiently prepared and simulated on a noisy quantum device. This excludes the possibility of efficient simulation of a broad range of quantum systems on noisy quantum devices, such as highly excited states at the mid-spectrum of local Hamiltonians <cit.> and thermalized quantum states in quantum dynamics <cit.>, including quantum many-body thermalization and black hole dynamics.
Apart from the one-dimensional chain, we also consider two-dimensional lattice and fully connected topologies and summarize the resulting bounds on maximal entanglement in Table <ref>, with detailed results in Appendix <ref>. The results show that the limitations on entanglement production depend on topology. From the viewpoint of entanglement scaling, the simulation of quantum systems is also restricted on a two-dimensional lattice.
We present the limitations on entanglement numerically for different values of the noise strength p in Fig. <ref>. Once the threshold number of qubits is reached, further growth of quantum entanglement in the noisy quantum device is suppressed. In the one-dimensional case, the logarithmic scaling of the upper bound means that scaling up the system's entanglement further requires an exponential cost in qubits; in the two-dimensional case, a polynomial cost is required.
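To make the interplay of the two bounds concrete, the following sketch evaluates max_t min{t, n(1-p)^t} — the largest entanglement value permitted at any depth — and compares it with the logarithmic bound of the theorem above. It is only an illustration; the noise strengths are hypothetical and this is not the data behind Fig. <ref>.

```python
import math

def max_entanglement_bound_1d(n: int, p: float) -> float:
    """Largest value of min{t, n(1-p)^t} over integer depths t.

    E(A:A_bar) <= t (light cone) and E(A:A_bar) <= n(1-p)^t (information decay)
    both hold at depth t, so this maximum caps the entanglement at any depth.
    """
    best, t = 0.0, 0
    while n * (1 - p) ** t >= t:        # while the light-cone bound is the binding one
        best = max(best, float(t))
        t += 1
    return max(best, n * (1 - p) ** t)  # one step past the crossover

for p in (0.01, 0.05, 0.10):            # hypothetical noise strengths
    for n in (10, 100, 1000, 10000):
        bound = max_entanglement_bound_1d(n, p)
        theorem = math.log(n) / (-math.log(1 - p))
        print(f"p={p:.2f}  n={n:>6}  max-depth bound ~ {bound:7.2f}   log(n)/(-log(1-p)) = {theorem:8.2f}")
```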
Discussion.—Both theoretical aspects of the limitations originate from the exponential decay of information due to noise. In the time evolution of one-dimensional qubit chains, classical simulatability is related to entanglement in most cases <cit.>. Thus, the limitations on entanglement provide insight into the difficulty of finding super-polynomial quantum advantages even below logarithmic depth. Overall, our findings are interconnected and collectively indicate the fundamental challenge of achieving significant quantum advantages in the presence of noise in quantum devices.
These limitations are unavoidable consequences of our noisy circuit model. For the future development of quantum computing devices, it is necessary to break the model assumptions and avoid the exponential decay of information. To partly mitigate the problem in the near term, one may introduce fresh qubits later in the noisy circuit or convert the type of noise <cit.>. But the ultimate approach is RESTART operations or mid-circuit measurements, which can purify the system. Active experimental progress is being made along this road <cit.>. In general, either of these approaches could enable quantum error correction, provided there are sufficiently many qubits and the error rate is below the threshold. From the theoretical view, overcoming the limitations derived in our work is also the road from the NISQ era to the fault-tolerant era.
Future work will involve expanding the results of this work to other types of noise and to analog quantum computing based on continuous-time dynamics. Our methods may be helpful in establishing limitations for other properties of noisy quantum devices, including quantum state complexity, quantum chaos, quantum topological entanglement, and quantum magic.
The authors acknowledge Dorit Aharonov and Michael Ben-Or for answering our questions about the convergence of quantum states. The authors thank Chenxu Li, Chendi Yang for their helpful discussion and Xingjian Zhang, Guoding Liu for providing feedback on the manuscript. This work was supported by the National Natural Science Foundation of China Grants No. 12174216.
§ EXPLANATION OF MODEL ASSUMPTIONS
We characterize a noisy quantum device in the NISQ era via the following assumptions, summarised in Box <ref>. The n-qubit device is initialized in |0⟩^{⊗ n}. Qubits are manipulated via two-qubit gates, which are grouped into layers. In each layer, a qubit can be acted on by at most one quantum gate. After each layer of quantum gates, every single qubit of the device suffers independent depolarizing noise. For a single-qubit state ρ under such noise, we completely lose its information with probability p, and the state is replaced by the maximally mixed state. The noise is described by the quantum channel
Λ_1(ρ) = (1-p)ρ + p I/2.
Importantly, our model does not allow mid-circuit measurements or RESTART operations. A RESTART operation is a quantum channel that replaces a qubit with a pure state, such as |0⟩. This means that we cannot retrieve any information once it is lost to noise. The assumption is suitable for the NISQ era because either of these operations enables fault-tolerant quantum computing when p is below a threshold <cit.>. Lastly, computational-basis measurements are performed at the end of the circuit to obtain the classical output. Under these assumptions, our model is formally defined as follows.
A noisy quantum device with n qubits produces a quantum state at depth t,
ρ(t) = Λ ∘ 𝒰_t ∘ Λ ∘ ⋯ ∘ Λ ∘ 𝒰_2 ∘ Λ ∘ 𝒰_1(|0⟩⟨0|^{⊗ n}),
where the 𝒰's are layers of gates and Λ = Λ_1^{⊗ n} is the noise channel. In each layer, a qubit can be acted on by at most one quantum gate. The classical output C_n(ρ(t)) from the measurements obeys the distribution
Pr[C_n(ρ(t)) = X] = ⟨X|ρ(t)|X⟩,
where X is an n-bit string and |X⟩ is the corresponding computational basis state.
Box: Assumptions for noisy quantum devices
* |0⟩^{⊗ n} as the initial state.
* Two-qubit quantum gates.
* Independent single-qubit depolarizing noise channels.
* No mid-circuit measurements or RESTART operations.
* Computational-basis measurements.
We set up our model inspired by previous work on noisy quantum devices <cit.>.
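As a minimal illustration of this model, the following sketch builds ρ(t) for a handful of qubits with numpy and tracks D(ρ(t)‖ρ_0) = n − S(ρ(t)). The brick-wall gate placement and the Haar-random two-qubit gates are illustrative choices, not fixed by the assumptions above.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, qubit, n):
    """Embed a single-qubit operator on `qubit` of an n-qubit register."""
    mats = [I2] * n
    mats[qubit] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def depolarize_qubit(rho, qubit, n, p):
    """Lambda_1 on one qubit: (1-p) rho + p (I/2), via the Pauli-twirl identity."""
    twirl = sum(embed(P, qubit, n) @ rho @ embed(P, qubit, n) for P in (I2, X, Y, Z)) / 4
    return (1 - p) * rho + p * twirl

def random_two_qubit_gate(rng):
    """Haar-ish random 4x4 unitary via QR decomposition (illustrative choice)."""
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def apply_adjacent_gate(rho, gate, q, n):
    """Apply a 4x4 unitary to the adjacent qubits (q, q+1) of the chain."""
    U = np.kron(np.kron(np.eye(2 ** q), gate), np.eye(2 ** (n - q - 2)))
    return U @ rho @ U.conj().T

def noisy_state(n, depth, p, rng):
    """rho(t): brick-wall layers of random gates, each followed by per-qubit noise."""
    rho = np.zeros((2 ** n, 2 ** n), dtype=complex)
    rho[0, 0] = 1.0  # |0...0><0...0|
    for t in range(depth):
        for q in range(t % 2, n - 1, 2):
            rho = apply_adjacent_gate(rho, random_two_qubit_gate(rng), q, n)
        for q in range(n):
            rho = depolarize_qubit(rho, q, n, p)
    return rho

rng = np.random.default_rng(0)
n, p = 4, 0.05
for depth in (1, 3, 10):
    rho = noisy_state(n, depth, p, rng)
    S = -sum(x * np.log2(x) for x in np.linalg.eigvalsh(rho) if x > 1e-12)
    print(f"depth {depth:2d}: D(rho || I/2^n) = {n - S:.4f}   bound n(1-p)^t = {n * (1 - p) ** depth:.4f}")
```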
§ CONVERGENCE OF NOISY QUANTUM DEVICES WITH A SIMPLIFIED PROOF
Lemma <ref> was originally proposed in <cit.>; a generalized version is rigorously proved in <cit.>. Here we focus on the case where depolarizing noise is introduced according to our model assumptions.
In an n-qubit noisy quantum device, after t layers of quantum gates, the relative entropy between the state ρ(t) and the maximally mixed state ρ_0 = I/2^n decays as
D(ρ(t) ‖ ρ_0) ≤ n (1-p)^t.
Similar exponential convergence is not limited to our choice of single-qubit depolarizing channels. Under some other noise models, the exponentially decaying behavior remains, but the decay rate differs from the one in this lemma <cit.>. Thus, the results in this work can easily be extended to those cases, with quantitative results such as Theorem <ref> modified according to the different decay rate.
Here we provide a new proof of the lemma, using the strong subadditivity of quantum entropy. First, we strengthen quantum Shearer's inequality with our new proof. Our result is stronger than the inequality in <cit.> and our proof techniques are also different.
Consider t ∈ ℕ and a family ℱ ⊆ 2^{1,2,…,n} of subsets of {1,2,…,n} such that each i is included in at least t elements of ℱ. For any state ρ ∈ 𝒟(ℂ^{d_1} ⊗ ℂ^{d_2} ⊗ ⋯ ⊗ ℂ^{d_n}) we have
∑_{F ∈ ℱ} S(ρ_F) ≥ t S(ρ),
in which ρ_F is the reduced density matrix of ρ on the subsystems in F.
We will prove the theorem using mathematical induction.
For t = 0, the lemma holds because S(ρ_F) ≥ 0 for any subsystem F.
For t ≥ 1, let S denote {1,2,…,n}. If S ∈ ℱ, then ℱ' := ℱ ∖ {S} is a family in which each i is included in at least t-1 elements. By induction,
∑_{F ∈ ℱ'} S(ρ_F) ≥ (t-1) S(ρ).
Thus,
∑_{F ∈ ℱ} S(ρ_F) = S(ρ) + ∑_{F ∈ ℱ'} S(ρ_F)
≥ t S(ρ).
If S ∉ ℱ, find a maximal set S_1 in ℱ. Because t ≥ 1, there must exist another set S_2 such that S_2 ∖ S_1 ≠ ∅. Let S_1' = S_1 ∪ S_2, S_2' = S_1 ∩ S_2, and ℱ' = (ℱ ∖ {S_1, S_2}) ∪ {S_1', S_2'}. By the strong subadditivity of quantum entropy,
S(ρ_{S_1}) + S(ρ_{S_2}) ≥ S(ρ_{S_1'}) + S(ρ_{S_2'}),
∑_{F ∈ ℱ} S(ρ_F) ≥ ∑_{F ∈ ℱ'} S(ρ_F).
If S_1' = S, then combining equations (<ref>) and (<ref>) gives equation (<ref>). If S_1' ≠ S, we repeat this process on ℱ'. Because n is finite, this process terminates in finitely many steps, finally giving
∑_{F ∈ ℱ} S(ρ_F) ≥ ∑_{F ∈ ℱ'} S(ρ_F) ≥ ⋯ ≥ t S(ρ).
Based on our strengthened quantum Shearer's inequality, the next lemma characterizes the increase of entropy caused by the noise channel.
For any n-qubit state ρ,
S(Λ(ρ)) ≥ (1-p) S(ρ) + p n,
where Λ is the noise channel and p is the strength of the depolarizing channel in noisy quantum devices, as given in Definition <ref>.
By the definition of the noise channel,
Λ(ρ) = ∑_{i=0}^{n} p^{n-i} (1-p)^{i} ∑_{F ⊆ {1,2,…,n}, |F| = i} ρ_F ⊗ I/2^{n-i}.
Taking the von Neumann entropy of both sides and using the concavity of entropy,
S(Λ(ρ)) ≥ ∑_{i=0}^{n} p^{n-i} (1-p)^{i} ∑_{F ⊆ {1,2,…,n}, |F| = i} S(ρ_F ⊗ I/2^{n-i})
= ∑_{i=0}^{n} p^{n-i} (1-p)^{i} ∑_{F ⊆ {1,2,…,n}, |F| = i} [S(ρ_F) + n - i].
Note that ℱ_i = {F ⊆ {1,2,…,n} : |F| = i} is a family in which each element of {1,2,…,n} is included in \binom{n-1}{i-1} sets. Then
S(Λ(ρ))
≥ ∑_{i=0}^{n} p^{n-i} (1-p)^{i} \binom{n}{i} [ (i/n) S(ρ) + n - i ]
= n - (n - S(ρ))/n ∑_{i=0}^{n} p^{n-i} (1-p)^{i} \binom{n}{i} i
= n - (n - S(ρ))(1-p)
= (1-p) S(ρ) + pn.
The first inequality is from Lemma <ref>, and the other equalities follow from direct calculation and a basic combinatorial identity.
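A quick numerical sanity check of this entropy bound on random two-qubit mixed states can be carried out as follows; the state-sampling scheme is an arbitrary choice made for illustration.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = [I2,
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def embed(op, q, n):
    mats = [I2] * n
    mats[q] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def depolarize_all(rho, n, p):
    """Apply the single-qubit depolarizing channel with strength p to every qubit."""
    for q in range(n):
        twirl = sum(embed(P, q, n) @ rho @ embed(P, q, n) for P in PAULIS) / 4
        rho = (1 - p) * rho + p * twirl
    return rho

def entropy(rho):
    return float(-sum(x * np.log2(x) for x in np.linalg.eigvalsh(rho) if x > 1e-12))

rng = np.random.default_rng(1)
n, p = 2, 0.1
for _ in range(5):
    A = rng.normal(size=(2 ** n, 2 ** n)) + 1j * rng.normal(size=(2 ** n, 2 ** n))
    rho = A @ A.conj().T
    rho /= np.trace(rho).real
    lhs = entropy(depolarize_all(rho, n, p))
    rhs = (1 - p) * entropy(rho) + p * n
    print(f"S(Lambda(rho)) = {lhs:.4f}  >=  (1-p)S(rho) + pn = {rhs:.4f}")
```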
Finally, we get back to the proof of Lemma <ref> and derive the exponential information loss.
For any n-qubit state ρ and the maximally mixed state ρ_0 = I/2^n,
D(ρ ‖ ρ_0) = n - S(ρ).
By Lemma <ref>,
D(Λ(ρ) ‖ ρ_0) = n - S(Λ(ρ))
≤ n - (pn + (1-p) S(ρ))
= (1-p)(n - S(ρ))
= (1-p) D(ρ ‖ ρ_0).
Entropy is invariant under the unitary channels {𝒰_i}, hence
D(𝒰_i(ρ) ‖ ρ_0) = D(ρ ‖ ρ_0).
From the definition of ρ(t) in Definition <ref>,
ρ(t) = Λ ∘ 𝒰_t ∘ Λ ∘ ⋯ ∘ Λ ∘ 𝒰_2 ∘ Λ ∘ 𝒰_1(|0⟩⟨0|^{⊗ n}),
we can derive Lemma <ref> by combining equations (<ref>), (<ref>), and (<ref>).
§ PROOFS OF LIMITATIONS OF DEPTHS FOR COMPUTATIONAL ADVANTAGES
§.§ Proof of lower bounds of total entropy in hybrid algorithms
In this section, we provide a formal version of Lemma 1 in the main text with the proof.
Suppose we use noisy quantum devices of depth at least t a total of q times. Let X_i = C_{n_i}(ρ_i) denote the measurement result of the i-th quantum circuit. Here, ρ_i = Φ^{(i)}(|0⟩⟨0|^{⊗ n_i}), where Φ^{(i)} = Λ ∘ 𝒰^{(i)}_{t_i} ∘ Λ ∘ ⋯ ∘ Λ ∘ 𝒰^{(i)}_2 ∘ Λ ∘ 𝒰^{(i)}_1 denotes the quantum channel describing the whole process, combining all gates and noise in sequential order, t_i ≥ t, and each 𝒰^{(i)}_j is an arbitrary quantum channel. We obtain q random variables X_1, …, X_q. Then S(X_1,⋯,X_q) ≥ (∑_{i=1}^{q} n_i)(1-ε), in which ε = (1-p)^t.
By Lemma <ref>, for any quantum channels 𝒰^{(i)}_j,
S(Φ^{(i)}(|0⟩⟨0|^{⊗ n_i})) ≥ n_i (1 - ε).
According to our assumptions, measurements are performed in the computational basis and are noiseless. The resulting distribution of the measurement outcome X_i lies on the diagonal of Δ(ρ_i), where Δ denotes the dephasing channel with respect to the computational basis:
S(X_i) = S(Δ(ρ_i)).
It is well known that S(Δ(σ)) ≥ S(σ) for an arbitrary quantum state σ. Thus,
S(X_i) = S(Δ(ρ_i)) ≥ S(ρ_i) ≥ n_i (1 - ε).
Note that Eq. (<ref>) holds regardless of the channels {𝒰^{(i)}_j}_{j = 1,2,⋯,t_i} implemented in the noisy quantum device. That is, although X_i may depend on X_1, …, X_{i-1}, Eq. (<ref>) holds regardless of the previous measurement outcomes. In other words, for all 1 ≤ i ≤ q, 1 ≤ j < i and x_j ∈ {0,1}^{n_j}, we have
S(X_i | X_1 = x_1, X_2 = x_2, ⋯, X_{i-1} = x_{i-1}) ≥ n_i (1-ε).
Using the definition of conditional entropy, we obtain
S(X_i | X_1, X_2, ⋯, X_{i-1}) ≥ n_i (1-ε).
Thus,
S(X_1,⋯,X_q) = ∑_{i=1}^{q} S(X_i | X_1, X_2, ⋯, X_{i-1}) ≥ ∑_{i=1}^{q} n_i (1-ε).
§.§ Equivalence on decision problems
In this part, we present some problems that are equivalent to a decision problem, thus clarifying our consideration of focusing on the computational ability of decision problems. Here, equivalent means that solving one problem in polynomial time implies being able to solve the other problem in polynomial time.
The following two problems are equivalent:
* Given n, output the smallest non-trivial factor of n.
* Given (n,k), determine whether there exists a non-trivial factor of n that is less than k.
Suppose we can solve problem 1 in polynomial time by some algorithm 𝒜. Then for any input (n,k), we use 𝒜 to factorize n and obtain its minimal non-trivial factor m, and we compare m and k. Thus we solve problem 2 in polynomial time.
Conversely, suppose we can solve problem 2 in polynomial time. Then we can use binary search over k to find the smallest non-trivial factor of n in polynomial time.
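The binary-search direction of the argument can be made concrete with the following sketch, in which a brute-force oracle stands in for whatever polynomial-time algorithm answers the decision problem.

```python
def has_factor_below(n: int, k: int) -> bool:
    """Decision oracle: does n have a non-trivial factor strictly less than k?
    (Brute force here, standing in for whichever algorithm answers problem 2.)"""
    return any(n % d == 0 for d in range(2, min(k, n)))

def smallest_factor(n: int):
    """Recover the smallest non-trivial factor of n with O(log n) oracle calls."""
    if not has_factor_below(n, n):      # n is prime: no non-trivial factor at all
        return None
    lo, hi = 2, n - 1                   # the smallest factor lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor_below(n, mid + 1):   # is there a factor <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo

for n in (91, 97, 65537, 123):
    print(n, smallest_factor(n))        # 7, None, None, 3
```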
Proposition <ref> implies that a noisy quantum device offers no advantage in factoring via Shor's algorithm, as discussed in the main text.
The following two problems are equivalent:
* Given a classical description of an N × N matrix A, a unit vector |b⟩, and a quantum operator M, output ⟨x|M|x⟩, where |x⟩ is the solution of A|x⟩ = |b⟩.
* Given a classical description of an N × N matrix A, a unit vector |b⟩, a quantum operator M, and a real number a, determine whether ⟨x|M|x⟩ is less than a, where |x⟩ is the solution of A|x⟩ = |b⟩.
Similarly, we can use binary search to prove proposition <ref>, which implies that algorithms that query noisy quantum devices of super-logarithmic depth have no advantages in solving linear equations using the HHL algorithm.
The following two problems are equivalent:
* Given a graph (V,E), where V is the set of vertices and E is a set of weighted edges with endpoints in V, output a maximum cut of (V,E).
* Given a graph (V,E), a real number k, and an edge (u,v,w) ∈ E, where u,v are the endpoints of the edge and w is its weight, determine whether there exists a maximum cut with weight at least k that does not contain the edge (u,v,w).
Suppose we can solve problem 1 in polynomial time by some algorithm 𝒜. Then for an input (V,E), k, (u,v,w), we first compute the maximum cut 𝒜((V,E)). If its weight is less than k, we output 0. Otherwise, we modify the edge (u,v,w) to (u,v,w-δ), where δ is a small perturbation, obtaining a new edge set E', and compute 𝒜((V,E')). If the weight of the maximum cut is unchanged, we output 1; if it has decreased by δ, we output 0. Thus we solve problem 2 in polynomial time.
Conversely, suppose we can solve problem 2 in polynomial time by some algorithm ℬ. Then for an input (V,E), we first select two vertices u,v and add a trial edge (u,v,0) to E, obtaining E'. Since it has zero weight, the edge (u,v,0) is never necessary for a maximum cut. Thus, by binary searching over k and querying ℬ with (V,E'), the edge (u,v,0), and weight k, we obtain the weight of the maximum cut. Then, for every edge in E, we ask whether some maximum cut avoids this edge; if so, we remove it. Finally, we obtain a maximum cut of the graph.
§.§ Proofs of the limitations of hybrid algorithms on decision problems
In this section, we analyze the limitations of hybrid algorithms with noisy quantum devices on decision problems. We use the term language to refer to a general decision problem.
A language L is a subset of {0,1}^*. For x ∈ {0,1}^*, L(x) is defined as L(x) = [x ∈ L].
To represent a classical algorithm that computes a language, we utilize the definition of a probabilistic Turing machine (PTM).
For T : ℕ → ℕ and L ⊆ {0,1}^*, we say that a PTM M decides L in time T(n) if for every x ∈ {0,1}^*, M halts in T(|x|) steps regardless of its random choices, and Pr[M(x) = L(x)] ≥ 2/3.
To facilitate our discussion below, we introduce the random variable Y to represent the random choices of the PTM. The variable Y follows a uniform distribution over the 2^{T(|x|)} strings of length T(|x|), denoted as Y ∼ {0,1}^{T(|x|)}. With this definition of Y, we can now make a slight modification to Definition <ref>.
For T : ℕ → ℕ and L ⊆ {0,1}^*, we say that a PTM M decides L in time T(n) if for every x ∈ {0,1}^*, M halts in T(|x|) steps, and Pr_{Y ∼ {0,1}^{T(|x|)}}[M(x,Y) = L(x)] ≥ 2/3, where Y is the random choice of M and M(x,y) is the function determined by the PTM: when the input and the random choices are given, M(x,y) is the output of the Turing machine.
Next, we turn to the definition of algorithms assisted by noisy quantum devices. As mentioned earlier, in a more general computation scenario, a hybrid algorithm may query noisy quantum devices multiple times, as shown in Fig. 3 of the main text.
A hybrid algorithm with noisy quantum devices is a PTM that can query noisy quantum devices, obtain its measurement outcome as a classical string, and perform post-processing and classical computation. The time cost for a query of noisy quantum devices is the length of the output string.
We use the term depth-t hybrid algorithm for a hybrid algorithm that queries noisy quantum devices of depth at least t; here t may depend on the length of the input.
According to Definition <ref>, a depth-t hybrid algorithm that runs in time T(n) can obtain at most T(n) bits of measurement outcomes produced by the noisy quantum devices. By Lemma <ref>, if t is super-logarithmic in the input length, then the noisy quantum devices are expected to provide no advantage.
Let p be the probability parameter of the p-depolarizing channel in noisy quantum devices, and let c = -5/log(1-p). For any depth-t hybrid algorithm M' that decides a language L in time T(n), with t ≥ log(T(n))/|log(1-p)| + c, by replacing the noisy quantum devices with random coins we can decide L with a PTM without noisy quantum devices in time O(T(n)).
Suppose the hybrid algorithm M' queries noisy quantum devices q times. Let X = (X_1, …, X_q) denote the outputs generated by the noisy quantum devices and let n_i be the length of the string X_i. Let l_X denote the length of X, l_X = ∑_{i=1}^{q} n_i. Then l_X ≤ T(n), because the time cost of the queries equals the length of the output produced by the noisy quantum devices. According to Lemma <ref>,
S(X) ≥ l_X (1-ε),
where ε = (1-p)^t. Now, since M' decides the language L in time T(n), append the random choices X' made by M' to X to form Y = (X, X'). Then l_Y = T(n), and
S(Y) = S(X) + S(X')
≥ l_{X'} + l_X (1-ε)
≥ (l_X + l_{X'})(1-ε)
= T(n)(1-ε).
The first equality holds because the random choices X' made by M' are independent of X. By definition, M' deciding L means that
∀ x ∈ {0,1}^*, Pr[M'(x,Y) = L(x)] ≥ 2/3
and M' halts in T(|x|) steps. We then construct a PTM M that operates in the same manner as M', except that where the random string X of M' is generated by noisy quantum devices, the corresponding random string of M is generated by random coin flips and is uniformly distributed; denote M's full random string by Z. According to Lemma <ref>,
D(Y ‖ Z) = T(n) - S(Y) ≤ T(n) ε,
in which ε = (1-p)^t. M and M' operate in the same manner; that is, ∀ x ∈ {0,1}^n, y ∈ {0,1}^{T(n)}, M(x,y) = M'(x,y). Then we have
D(M'(x,Y) ‖ M(x,Z)) ≤ D(Y ‖ Z) ≤ T(n) ε ≤ 1/32.
The first inequality is the data processing inequality, the second inequality is from Eq. (<ref>), and the third follows from the condition t ≥ log(T(n))/|log(1-p)| + c stated in Theorem <ref>.
By Pinsker's inequality,
‖M'(x,Y) - M(x,Z)‖_1 ≤ √(2 D(M'(x,Y) ‖ M(x,Z))) ≤ 1/4.
We can deduce that Pr[M(x,Z) = L(x)] ≥ 2/3 - 1/8 = 13/24. Then we can repeat M a constant number of times and take a majority vote on the outputs to obtain the final decision. By doing so, we can ensure that the probability of the output equaling L(x) is at least 2/3. Therefore, L can be decided by a PTM without noisy quantum devices in O(T(n)) time.
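The final amplification step can be made explicit: repeating a machine that is correct with probability 13/24 and taking a majority vote. The small computation below (a simple binomial tally, purely illustrative) shows how a constant number of repetitions pushes the success probability back above 2/3.

```python
from math import comb

def majority_success(p_single: float, repetitions: int) -> float:
    """Probability that a strict majority of independent runs is correct."""
    assert repetitions % 2 == 1
    return sum(comb(repetitions, k) * p_single ** k * (1 - p_single) ** (repetitions - k)
               for k in range((repetitions + 1) // 2, repetitions + 1))

p = 13 / 24   # per-run success probability from the argument above
for reps in (1, 11, 51, 101, 201):
    print(f"{reps:3d} repetitions: overall success probability {majority_success(p, reps):.4f}")
```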
Let p be the probability parameter of the p-depolarizing channel in the noisy quantum devices, and let c = -5/log(1-p) and t = log(T(n))/|log(1-p)| + c. For any hybrid algorithm that decides a decision problem L in time T(n), by replacing the noisy quantum devices of depth ≥ t with random coins, we can construct a hybrid algorithm M that decides L in time O(T(n)). Thus noisy quantum devices of depth ≥ t do not provide any advantage for the hybrid algorithm.
Let Y denote the output string generated by M', including both the string generated by noisy quantum devices and the random coins. Let W_1 be the substring of Y generated by noisy quantum devices of depth ≥ t, that is, W_1 = Y_𝒮, where 𝒮 is the set of positions of data generated by noisy quantum devices of depth ≥ t. Replace W_1 in Y with a random string W_2 generated by random coins, and denote the new string by Z, i.e., W_2 = Z_𝒮. By Lemma <ref>,
D(W_1 ‖ W_2) ≤ l_{W_1} ε,
in which ε = (1-p)^t. Y and Z can be regarded as generated by passing W_1 and W_2, respectively, through the same quantum channel Γ, that is, Y = Γ(W_1), Z = Γ(W_2), where Γ combines all the remaining parts of M other than the super-logarithmic-depth noisy quantum devices. Then, by the data processing inequality,
D(Y ‖ Z) = D(Γ(W_1) ‖ Γ(W_2)) ≤ D(W_1 ‖ W_2) ≤ T(n) ε.
The first inequality is the data processing inequality, and the second is from Eq. (<ref>). We thus obtain the same bound as Eq. (<ref>), and the rest of the proof follows the same steps as the proof of Theorem <ref>.
If a hybrid algorithm M' using noisy quantum devices of super-logarithmic circuit depth decides a language L in time T(n) = O(poly(n)), then we can decide L with a PTM in time O(T(n)). This means no quantum advantage exists.
For algorithms that use only circuits of super-logarithmic depth, we can use Theorem <ref> to replace all such circuits with random coins, so a PTM can decide the problem in time O(T(n)).
By using this corollary, we can more directly exclude the possibility of quantum advantages of many algorithms, including the examples mentioned in the main text.
§ PROOFS OF LIMITATIONS OF ENTANGLEMENT PRODUCTION
§.§ Entanglement in a one-dimensional chain
A key observation is that gates act on the qubits locally, respecting the one-dimensional chain topology. Consider a qubit in the chain evolved for t layers: its interactions are restricted to within a distance t. This restriction can be understood through the physical picture of a light cone, illustrated in Fig. <ref>.
This will pose limitations on entanglement spreading and the bipartite entanglement between halves of the chain. We develop the intuition into the following lemma.
For a one-dimensional lattice, namely a chain, at a fixed number of layers t, the bipartite entanglement E(ρ(t)) is upper bounded by 2t. If A contains one end of the chain, the upper bound becomes t. These upper bounds hold for any mixed-state entanglement monotone.
The state after t layers is written as
ρ(t) = Λ ∘ 𝒰_t ∘ Λ ∘ ⋯ ∘ Λ ∘ 𝒰_2 ∘ Λ ∘ 𝒰_1(|0⟩⟨0|^{⊗ n}),
where 𝒰(ρ) = U ρ U^† is the superoperator for each layer of gates. Now, we divide the gates in 𝒰_t into two kinds: 𝒰^{(1)}_t contains the gates that act across A and Ā, while 𝒰^{(2)}_t contains the other gates, which act on two qubits in the same subsystem. Importantly, 𝒰^{(2)}_t is a local operation with respect to the partition and makes no contribution to the entanglement. We construct the following state by removing Λ and 𝒰^{(2)}_t,
ρ'(t) = 𝒰^{(1)}_t(ρ(t-1)),
satisfying ρ(t) = Λ ∘ 𝒰^{(2)}_t(ρ'(t)). Since ρ'(t) can be converted to ρ(t) by local operations, E(ρ(t)) ≤ E(ρ'(t)). And by the definition of 𝒰^{(1)}_t, the size of its support is at most two.
Similarly, to remove the (t-1)-th layer, we again divide 𝒰_{t-1} into two kinds. This time the gates and noise to be removed must satisfy an additional condition: they should commute with 𝒰^{(1)}_t, which in general requires that they do not overlap with supp(𝒰^{(1)}_t). Then
ρ'(t-1) = 𝒰^{(1)}_t ∘ Λ_{supp(𝒰^{(1)}_t)} ∘ 𝒰^{(1)}_{t-1}(ρ(t-2)) = 𝒰'_{t-1}(ρ(t-2)),
where the combined channel 𝒰'_{t-1} satisfies |supp(𝒰'_{t-1})| ≤ 4. It can again be converted to ρ'(t) by local operations.
We follow this procedure and iteratively reduce all layers of gates. The final resulting state is ρ'(1) = 𝒰'_1(|0⟩⟨0|^{⊗ n}) with |supp(𝒰'_1)| ≤ 4t. In conclusion, E(ρ(t)) ≤ E(ρ'(1)) ≤ 2t. The whole procedure is illustrated in Fig. <ref>.
In the case where A contains one end of the chain, the support can only grow to one side. Following the same procedure, the size of the support has an upper bound of t. Thus, E(ρ(t)) ≤ t.
Our result is a generalization of previous work that establishes a similar bound only for the entanglement entropy of pure states <cit.>. As far as we know, no such bounds have been derived for the mixed-state case, which is essentially the case for the analysis of noisy quantum devices.
Then, by combining the results with Lemma <ref>, we derive the upper bounds of quantum relative entropy of entanglement and quantum mutual information.
For any noisy quantum device with n qubits in one-dimensional qubit topology, the quantum relative entropy of entanglement or the quantum mutual information E(A:Ā), between a contiguous half A and the other half Ā, is upper bounded by
E(A:Ā) ≤ log(n)/(-log(1-p))  if n > 1/(1-p),   and   E(A:Ā) ≤ 1  if n ≤ 1/(1-p),
where p is the noise strength in Eq. (<ref>). Quantum mutual information I(A:Ā) obeys the same bound.
First, we show that both of the two entanglement monotones are upper bounded by D(ρ ‖ ρ_0). For the quantum relative entropy of entanglement, this is because
E_R(A:Ā) = min_{σ ∈ SEP} D(ρ ‖ σ) ≤ D(ρ ‖ ρ_0),
where SEP denotes the set of states that are separable across the A, Ā partition. For quantum mutual information, this is because
I(A:Ā) = S(A) + S(Ā) - S(ρ) ≤ n - S(ρ) = D(ρ ‖ ρ_0).
Combined with Lemma <ref>,
E(A:Ā) ≤ D(ρ ‖ ρ_0) ≤ n (1-p)^t.
From Lemma <ref>, we have another upper bound,
E(A:Ā) ≤ t.
Combining the two upper bounds, we have
E(A:Ā) ≤ min{ n (1-p)^t, t } ≤ t^⋆,
where t^⋆ is the greatest integer that satisfies n (1-p)^{t^⋆} ≥ t^⋆. We consider the following two cases.
* If n > 1/(1-p), then t^⋆ ≥ 1. So we have t^⋆ ≤ log(n)/(-log(1-p)), and the upper bound follows as
E(A:Ā) ≤ t^⋆ ≤ log(n)/(-log(1-p)).
* Otherwise, t^⋆ = 0, and then E(A:Ā) ≤ 1.
Note that n > 1/(1-p) is the case for usual quantum devices, whose noise level is at most 1/2 and which have more than a few qubits, so we present only this case in the main text.
In one-dimensional topology, bipartite entanglement is known as an important indicator of classical simulatability. The general idea is that the more strongly two half chains are entangled, the more classical computational resources will be required to faithfully describe the state in a data structure in a classical computer. For a quantum state with a limited amount of entanglement, the matrix product density operator (MPDO) ansatz is designed to efficiently express the state, which is a mixed-state generalization of the famous matrix product state (MPS) ansatz. In most practical cases, the ansatz is expected to be capable of simulating mixed-states with bipartite entanglement up to O(log n) with time and memory complexity polynomial in n <cit.>. The validity of MPDO is also numerically verified in simulating random noisy circuits, where up to hundreds of qubits are simulated with noise strength p = 10^-2 <cit.>.
§.§ Entanglement in a two-dimensional lattice
For topologies of general dimension, if the entanglement between subsystem A and its complement Ā scales with the area of the boundary, the system follows area-law scaling. Alternatively, if the entanglement scales with the volume of A, the system follows volume-law scaling.
For two-dimensional lattices, we take A to be a square of size (n^{1/2}, n^{1/2}), with one of A's vertices located at a vertex of the whole lattice.
In a two-dimensional lattice with the number of qubits n > (3/(1-p))^2, for both quantum mutual information and quantum relative entropy of entanglement, we have the upper bound
E(ρ(t)) < ((1/2)log(n) - 1)/(-log(1-p)) · 2n^{1/2} + ( ((1/2)log(n) - 1)/(-log(1-p)) )^2.
In two-dimensional lattices, a light-cone lemma can be established in a similar way. Now the interaction of qubits can expand in two different dimensions, so the size of the support of the reduced channel is ultimately upper bounded by
|supp(𝒰'_1)| ≤ (n^{1/2}+t)(n^{1/2}+t) - n = 2tn^{1/2} + t^2.
For both quantum mutual information and quantum relative entropy of entanglement, denoted E(A:Ā), we have the following two upper bounds,
E(A:Ā) ≤ n (1-p)^t,
E(A:Ā) ≤ 2tn^{1/2} + t^2.
Combining the two bounds, we have
E(A:Ā) ≤ min{ n (1-p)^t, 2tn^{1/2} + t^2 } ≤ 2t^⋆ n^{1/2} + (t^⋆)^2,
where t^⋆ is again the greatest integer that satisfies n (1-p)^{t^⋆} ≥ 2n^{1/2} t^⋆ + (t^⋆)^2. With the requirement n > (3/(1-p))^2, we have
n (1-p) > 3 n^{1/2} ≥ 2 n^{1/2} + 1.
This ensures t^⋆ ≥ 1, and we can further derive
n (1-p)^{t^⋆} ≥ 2n^{1/2} + 1 > 2 n^{1/2}.
This gives t^⋆ an upper bound,
t^⋆ ≤ ((1/2)log(n) - 1)/(-log(1-p)).
Finally, we have the upper bound
E(A:Ā) < ((1/2)log(n) - 1)/(-log(1-p)) · 2n^{1/2} + ( ((1/2)log(n) - 1)/(-log(1-p)) )^2.
When n is sufficiently large, the upper bound scales as n^{1/2} log(n), exhibiting area-law scaling with an additional logarithmic factor. However, unlike the one-dimensional case, this entanglement scaling has no known implications for classical simulatability on the two-dimensional lattice, because there is no known efficient classical simulation algorithm even for area-law systems in two dimensions. Projected Entangled Pair States (PEPS), the two-dimensional extension of MPS, have exponential contraction complexity.
As in the one-dimensional case, the entanglement scaling we propose presents a limitation on the simulation of two-dimensional quantum systems. However, it is feasible to use a two-dimensional noisy quantum device to simulate a one-dimensional quantum system <cit.>.
§ ESTIMATION OF THE DEPOLARIZING NOISE STRENGTH P IN REALISTIC QUANTUM DEVICES
In experimental settings, the relaxation time T_1 or the dephasing time T_2 is commonly used to describe the duration before decoherence, also referred to as the coherence time. In general, the strength of the noise is related to the coherence time through p ∼ T_g/T_1, where T_g is the duration of a two-qubit gate operation.
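As a rough illustration of this estimate, the following snippet converts coherence and gate times into a depolarizing strength p and the corresponding number of layers over which information decays; the numbers are hypothetical placeholders, not measured parameters of any specific device.

```python
import math

# Illustrative placeholder values, not measured parameters of any specific device.
examples = {
    "hypothetical device A": {"T1_us": 100.0, "Tg_ns": 40.0},
    "hypothetical device B": {"T1_us": 300.0, "Tg_ns": 200.0},
}

for name, d in examples.items():
    p = (d["Tg_ns"] * 1e-9) / (d["T1_us"] * 1e-6)      # p ~ T_g / T_1
    decay_layers = -1.0 / math.log(1 - p)               # layers for a 1/e information decay
    print(f"{name}: p ~ {p:.1e}, information decays by a factor e every ~{decay_layers:.0f} layers")
```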
|
http://arxiv.org/abs/2306.12501v2
|
20230621182301
|
Rotation-invariant web bases from hourglass plabic graphs
|
[
"Christian Gaetz",
"Oliver Pechenik",
"Stephan Pfannerer",
"Jessica Striker",
"Joshua P. Swanson"
] |
math.CO
|
[
"math.CO",
"math.QA",
"math.RT",
"05E10, 17B37, 13A50, 18M15"
] |
Webs give a diagrammatic calculus for spaces of tensor invariants. We introduce hourglass plabic graphs as a new avatar of webs, and use these to give the first rotation-invariant U_q(sl_4)-web basis, a long-sought object. The characterization of our basis webs relies on the combinatorics of these new plabic graphs and associated configurations of a symmetrized six-vertex model. We give growth rules, based on a novel crystal-theoretic technique, for generating our basis webs from tableaux and we use skein relations to give an algorithm for expressing arbitrary webs in the basis. We also discuss how previously known rotation-invariant web bases can be unified in our framework of hourglass plabic graphs.
Rotation-invariant web bases from hourglass plabic graphs
July 31, 2023
============================================================================================================================
§ INTRODUCTION
Over the last four decades, the classical theory of spaces of SL_r(ℂ)-invariants has been extended with the aid of diagrams called webs, whose calculus has powerful topological applications. For SL_2, the Temperley–Lieb basis consists of tensor invariants of non-crossing matchings of points around a disk and can be used to compute the Jones polynomial. For SL_3, Kuperberg <cit.> introduced the remarkable non-elliptic web basis with many beautiful properties. However, a rotation-invariant extension of Kuperberg's basis to higher ranks has proven elusive ever since its introduction in 1996. Our main result provides the first such basis for SL_4, and indeed for its quantum deformation U_q(sl_4).
[See <Ref>]
The collection ℬ_q^c of tensor invariants of top fully reduced hourglass plabic graphs of type c is a rotation-invariant web basis for the invariant space Inv_{U_q(sl_4)}(⋀_q^c V_q).
Such a rotation-invariant basis has long been desired by those developing the theory of webs; see, e.g., <cit.> for such remarks. The lack heretofore of suitable generalizations of the _3-web basis to higher rank has also been specifically lamented in applications of webs to cluster algebras <cit.>, enumerative combinatorics <cit.>, representation theory <cit.>, and dimer models <cit.>.
The key insight behind <Ref> is our introduction of a combinatorial framework we call hourglass plabic graphs, so named because they allow certain half-twist multi-edges called hourglasses. These graphs are an extension of Postnikov's plabic graphs; while Postnikov's graphs are governed by a single trip permutation, ours crucially involve a tuple trip_• of such permutations. To establish <Ref>, we describe a map 𝒯 from move-equivalence classes of certain hourglass plabic graphs to 4-row fluctuating tableaux (a class including standard tableaux) and a map 𝒢, based on growth rules, in the other direction (see <Ref> for an example). In <cit.>, we showed that orbits of promotion on fluctuating tableaux are tracked by a tuple prom_• of promotion permutations.
[See <Ref>]
The maps 𝒯 and 𝒢 are mutually inverse bijections between move-equivalence classes of fully reduced hourglass plabic graphs and 4-row rectangular fluctuating tableaux. Furthermore, this bijection satisfies trip_•(G) = prom_•(𝒯(G)) and consequently intertwines promotion of tableaux with rotation of hourglass plabic graphs.
The growth algorithm moreover allows us to deduce that the change of basis between ℬ_q^c from <Ref> and the monomial basis is unitriangular over ℤ[q, q^{-1}] (see <Ref>).
This framework unifies the SL_r-web bases for r ≤ 4, as well as the "2-column" case in arbitrary rank (see <Ref>). It is also remarkably rich from a purely combinatorial perspective. For example, two extreme move-equivalence classes for r=4 are naturally identified with the lattices of alternating sign matrices and plane partitions in a box, respectively (see <Ref>).
§.§ Hourglass plabic graphs
Previous efforts to generalize Kuperberg's work have largely focused on the representation theory of quantum groups (see e.g.ย <cit.>). We instead take a combinatorial approach, which we develop throughout <Ref>; in <Ref> this combinatorial theory straightforwardly yields our rotation-invariant web basis of _U_q(_4)(โ_q^c V_q) from <Ref>. We now describe the combinatorial ingredients of <Ref> before returning to discussion of web invariants.
Plabic graphs were introduced by Postnikov to study the totally nonnegative Grassmannian <cit.>. These are planar graphs embedded in a disk with interior vertices colored black or white. The trip permutation of a plabic graph is the bijection obtained by starting at a boundary vertex and traveling through the graph, taking a left at white vertices and a right at black vertices, until another boundary vertex is reached. Postnikov showed that reduced plabic graphs are classified up to move-equivalence by their trip permutation.
Building on connections between webs and plabic graphs due to FraserโLamโLe <cit.>, HopkinsโRubey <cit.> recently observed that Kuperberg's non-elliptic _3 basis webs can be interpreted as reduced bipartite plabic graphs.
KhovanovโKuperberg <cit.> introduced a bijection between _3 basis webs and sign and state strings, which are well-known to correspond to 3-row rectangular tableaux (see, e.g.,ย <cit.>). PetersenโPylyavskyyโRhoades <cit.> showed that this bijection intertwines rotation with tableau promotion in the standard case; Patrias <cit.> extended this to the (generalized) oscillating case. The recent work of HopkinsโRubey <cit.> encodes the structure of tableau promotion in a promotion permutation and shows that the KhovanovโKuperberg bijection carries this permutation to the trip permutation of an _3 basis web.
Motivated by results such as <cit.>, combinatorialists have long desired such a planar model encoding promotion of general rectangular tableaux as rotation.
In <cit.>, we
introduce fluctuating tableaux as a simultaneous generalization of standard, dual-semistandard, and the (generalized) oscillating tableaux of Patriasย <cit.>. We also
extend the HopkinsโRubey encoding by associating an r-row rectangular fluctuating tableaux T to a sequence of promotion permutations _โ(T) = (_1(T), โฆ, _r-1(T)). <Ref> extends this relation to give such a planar model for 4-row fluctuating tableaux.
The key new objects of our combinatorial theory are hourglass plabic graphs, as illustrated in <Ref>. These are r-valent properly bicolored graphs embedded in a disk. A special feature is the hourglass edges, which are multiple edges twisted so that the clockwise orders of their strands around the two incident vertices are the same. Hourglass plabic graphs come with a tuple trip_•(G) = (trip_1(G), …, trip_{r-1}(G)) of trip permutations, where trip_i is obtained by taking the i-th left at white vertices and the i-th right at black vertices. Here, the twists of the hourglass edges are essential for yielding the desired behavior of trip_•(G), specifically to give the conclusion from <Ref> that, for each 4-row rectangular tableau T, there is a 4-valent hourglass plabic graph G such that prom_•(T) = trip_•(G). Remarkably, the same statement holds for all other known rotation-invariant SL_r web bases: the r < 4 cases and Fraser's <cit.> 2-column web basis for arbitrary SL_r. (See <Ref> for further discussion.) In future work, we hope to extend our approach to all r.
Hourglass plabic graphs G for r = 4 admit a notion of full reducedness, defined by forbidding certain faces in graphs equivalent to G under moves, most notably square moves and benzene moves (see <Ref>). Full reducedness is in many ways analogous to the well-studied notion of reducedness on plabic graphs, while moves on hourglass plabic graphs lift moves on plabic graphs. In particular, <Ref> entails that two fully reduced hourglass plabic graphs are move-equivalent if and only if their trip permutations coincide, mirroring Postnikov's celebrated characterization of move-equivalence for reduced plabic graphs <cit.>. We moreover establish that full reducedness can be characterized by forbidding certain crossings of trip strands, just as Postnikov demonstrated for reduced plabic graphs <cit.>; this characterization is critical to many of our arguments.
The definition of full reducedness naturally extends to the r <4 case, forbidding certain uniformly defined face configurations. This definition directly recovers the TemperleyโLieb noncrossing condition when r=2 and Kuperberg's non-elliptic condition when r=3, giving a uniform description of basis webs in all three settings.
When r=4, we may transform hourglass plabic graphs into directed graphs that we call symmetrized six-vertex configurations (see <Ref>) and we move back and forth between these realizations as convenient. These configurations appeared independently in Hagemeyer's thesis <cit.>. Symmetrized six-vertex configurations are closely related to the usual six-vertex (square ice) model of statistical physics (see e.g. <cit.>) and we thereby obtain intriguing connections to the combinatorics of plane partitions and alternating sign matrices; see <Ref>.
Our proof of <Ref> involves constructing (<Ref>) the map ๐ฏ from fully reduced hourglass plabic graphs G to fluctuating tableaux. This map involves giving an edge-labeling of G that is obtained from information about the crossings of _โ-strands. Similar labelings for _3 appear in <cit.>. In the reverse direction, we give in <Ref> a growth algorithm ๐ข, analogous to that of <cit.> for _3, that produces an _4-basis web given the labeling of the boundary edges. The growth algorithm is the most combinatorially intricate part of our work. The proof of correctness relies critically on a novel Kashiwara crystal-theoretic technique that we introduce.
§.§ Bases of tensor invariants
Consider the group SL_r(ℂ) with its defining representation V = ℂ^r, its fundamental representations V^{ϖ_k} = ⋀^k V, and their duals, which we write as ⋀^{-k} V ≔ (⋀^k V)^*.
Given any tensor product
⋀^c V ≔ ⋀^{c_1} V ⊗ ⋯ ⊗ ⋀^{c_n} V
of such representations, the space of linear functionals on ⋀^c V also carries an SL_r(ℂ)-action. It is a classical problem of invariant theory to describe the subspace Inv(⋀^c V) ≔ Hom_{SL_r(ℂ)}(⋀^c V, ℂ) of invariant multilinear forms. It is typical to consider only tensor products as in (<ref>), since the category of all finite-dimensional representations may be recovered from these through the Karoubi envelope construction (see e.g. <cit.>, <cit.>).
The spaces (โ^c V) of ๐ฆ-invariants connect a range of important objects in algebra and geometry. For example, consider a Grassmannian Gr_r(โ^d) with respect to its Plรผcker embedding into projective space. We may then recover multi-homogeneous parts of the homogeneous coordinate ring of Gr_r(โ^d) as (โ^c V) for appropriate choices of c. Moreover, certain multi-homogeneous parts of this coordinate ring yield concrete constructions of Schur and Specht modules, irreducible representations of _d(โ) and the symmetric group ๐_d, respectively. The homogeneous coordinate ring of Gr_r(โ^d) also carries an important cluster algebra structure <cit.>, as do other rings of invariant polynomials <cit.>.
Replacing ๐ฆ with the corresponding quantum group U_q(_r), we obtain the space _U_q(_r)(โ_q^c V_q) of quantum invariants, deforming (โ^c V). Such quantum invariants and the associated representation theory of quantum groups give rise to many of the most powerful invariants of knots, links, and tangles (see, for example, <cit.>).
For many purposes, one needs to have explicit bases for (โ^c V) or its quantum deformation. Likely the first such basis to be discovered was the standard monomial basis constructed by Hodge <cit.> and greatly developed by Seshadri and collaborators (e.g., <cit.>). While relatively elementary and computable, the standard monomial basis does not enjoy many of the other properties one desires in a basis. As a result, many further authors have developed a panoply of less tractable bases with various assortments of desirable properties. We have, for example, the dual canonical basis, developed independently by Lusztig <cit.> and Kashiwara <cit.>, as well as the dual semicanonical basis of Lusztig <cit.> and the geometric Satake basis (see e.g. <cit.>). More recently, the theory of cluster algebras has engendered further bases, such as the theta basis of Gross, Hacking, Keel, and Kontsevich <cit.>, in the cases (see <cit.>) where (โ^c V) is known to carry a cluster structure. For a recent survey of cluster algebra bases and their relations, see <cit.>.
There is a natural cyclic shift isomorphism
Inv(⋀^{(c_1, …, c_n)} V) ≅ Inv(⋀^{(c_2, …, c_n, c_1)} V),
induced by the natural isomorphism with Hom_{SL_r(ℂ)}(⋀^{(c_2, …, c_n)} V, (⋀^{c_1} V)^*).
Given a choice of basis as above, one might hope that this isomorphism would preserve the basis, mapping basis elements to scalar multiples of basis elements. In such a case, we say the basis is -invariant. Indeed, instances of -invariance for the dual canonical, geometric Satake, and theta bases have been used to great effect in <cit.>, <cit.>, and <cit.>, respectively. Unfortunately, these three bases are notoriously difficult for computation. It is much easier to compute with the standard monomial basis; however, the standard monomial basis is far from -invariant even in very simple cases and is not well-suited to analysis of quantum link invariants.
§.§ Web bases
Web bases are diagrammatic bases for (โ^c V) or its quantum deformation that seek to surmount these challenges. Each basis element [W] is encoded by a planar bipartite graph W embedded in a disk, so that important algebraic operations are realized as simple graph-theoretic manipulations.
When ๐ฆ = _2, the TemperleyโLieb web basis of non-crossing matchings has long been known (see <cit.>). When ๐ฆ = _3, Kuperberg <cit.> introduced a remarkable web basis involving a non-elliptic condition on trivalent graphs and gave similar constructions for the other simple Lie groups of rank 2. The non-crossing and non-elliptic conditions are clearly rotation-invariant, so the web bases for U_q(_2) and U_q(_3) are -invariant. While Kuperberg's _3 web basis was primarily introduced as a tool for efficient computation of quantum link invariants, it has since found application in areas as diverse as the geometric Satake correspondence <cit.>, cluster algebras <cit.>, and cyclic sieving <cit.>. Its transition matrices to the bases discussed in <Ref> have also been of great recent interest (see <cit.>).
Since Kuperberg's introduction of rank 2 web bases, a primary focus has been his statement:
โThe main open problem related to the combinatorial rank 2 spiders is how to generalize them to higher rank.โ โ<cit.>
Kim <cit.> conjectured web generators and relations for the case of U_q(_4) and Morrison <cit.> extended Kim's conjecture to U_q(_r). CautisโKamnitzerโMorrison <cit.> later proved these conjectures (with a different proof subsequently given in <cit.>). While the diagrammatic relations have thus been determined, generalizing Kuperberg's web basis in a rotation-invariant way has proved difficult.
Several web bases for simple Lie groups of higher rank have appeared in the literature, though none is rotation-invariant. In <cit.>, for ๐ฆ = _r with representations restricted to c_i โ{1, r-1}, Westbury gave a recipe to construct a generator with each possible leading term, thereby obtaining a basis of web diagrams. Unfortunately, this basis is not rotation-invariant.[Some brief remarks in <cit.> suggest that Westbury's basis is rotation-invariant. This is not correct. The smallest example of non-invariance occurs for r=4 and c = (1,1). We thank Bruce Westbury for very helpful correspondence on this point. See also <cit.> for related discussion.] Westbury's basis was extended to products of arbitrary fundamental representations by Fontaine <cit.>. Fontaine's basis also is not rotation-invariant; moreover, in general, it involves making arbitrary choices, so is not well-defined in all cases. Elias <cit.> independently obtained what appear to be the quantum group deformations of these bases; again, his bases involve arbitrary choices and are not rotation-invariant. A further obstacle to the use of these non-rotation-invariant bases is that there is no known topological or combinatorial property that enables determining whether a given web is a basis element, let alone an efficient method for expressing invariants in the basis. More recently, in his thesis, Hagemeyer gave another family of web bases for U_q(_4) <cit.>, which, like our basis, are related to the symmetrized six-vertex model. However, he required arbitrary choices and thus did not obtain a rotation-invariant basis.
Previous work has typically focused on trivalent graph models of representation categories of classical or quantum groups (see e.g.ย <cit.>). Intuitively, the trivalent vertices encode multiplication or comultiplication maps in the exterior algebra of V, and webs are formed by composing such maps (along with evaluation and coevaluation morphisms). See <cit.> for an overview of the graphical calculus of such pivotal categories. Our hourglass plabic graphs build instead on <cit.> by using an r-valent model. (Such r-valent models also appear in the topology literature <cit.>; for the connection of that work with <cit.>, see <cit.>.) This r-valent approach is better suited to rotation invariance. For example, under our approach the determinant map corresponds to r simple edges around a single interior vertex, rather than a full binary tree, as in a trivalent model. See <Ref> for details.
Our main algebraic result, <Ref>, gives the first rotation-invariant web basis for _4, and indeed for U_q(_4). This is the first construction of a rotation-invariant web basis for any simple Lie group of rank greater than 2 (although some apparent web bases for algebras of block-triangular matrices appear in <cit.>).
The family of basis webs of <Ref> for r=4 is not preserved under reflection, since the top condition (see <Ref>) requires that benzene rings are oriented as in the upper-left diagram of <Ref>. Reflection interchanges the upper-left and lower-left benzene rings, so one may dually define a bottom condition using the lower-left diagram, together with an attendant bottom basis. In this way, while we focus in this paper on the top basis, we actually introduce two families of rotation-invariant bases of (โ_q^c V_q), one for the top condition and one for the bottom condition. At q=1, these families are interchanged under reflection, up to reversing the type c. In the standard case when q=1, reflection corresponds to the action of the long permutation w_0. By contrast, the sets of r=2 and r=3 basis webs are both rotation- and reflection-invariant (see <cit.> for discussion). From the behavior of our basis under the action of w_0, we suspect that our basis differs from all of the bases discussed in <Ref>. We further suspect the combinatorial top and bottom conditions can be interpreted geometrically in terms of affine A_3 buildings (cf. <cit.>).
§.§ Outline
<Ref> gives preliminaries on webs and their associated quantum invariants and on plabic graphs. We use a less standard presentation of websโdue essentially to <cit.> in the q=1 caseโin which trivalence is not assumed, so even experts may wish to consult this section.
In <Ref>, we develop the theory of hourglass plabic graphs and full reducedness. This is applied in <Ref> to define and study the map ๐ฏ of <Ref>. The growth rules defining the inverse map ๐ข are developed in <Ref>; the growth rules also allow for the leading terms of basis elements to be read off. In <Ref>, these results are combined to obtain the bijection of <Ref> and subsequently the web basis โฌ_q^c of <Ref>.
In <Ref>, we show how arbitrary web invariants may be expressed in the basis โฌ_q^c. These skein relations are of considerable interest in many applications of webs, particularly in relation to quantum link invariants.
<Ref> contains several combinatorial applications of our work to cyclic sieving, alternating sign matrices, and plane partitions.
Finally, in <Ref>, we discuss how the previously-known rotation-invariant web bases fit into our unified framework of fully reduced hourglass plabic graphs with _โ=_โ.
An extended abstract describing part of this work appears in the proceedings of FPSAC 2023 <cit.>.
§ PRELIMINARIES
Let [r] ≔ {1, 2, …, r}, [r̄] ≔ {-1, -2, …, -r}, and ±[r] ≔ {±1, ±2, …, ±r}. We define 𝒜_r as the collection of subsets of ±[r] whose elements are all of the same sign. We often write ī for -i.
§.§ Webs and tensor invariants
The quantum group U_q(sl_r) is a ℂ(q)-algebra that arose from the theory of quantum integrable systems; see <cit.> for a definition by generators and relations. The quantum group U_q(sl_r) deforms the universal enveloping algebra U(sl_r), so the classical theory of sl_r is recovered at q=1. Explicit diagrammatic generators and relations for the category of finite-dimensional U_q(sl_r)-modules were obtained in <cit.> (see also <cit.>), where the generators are drawn as certain trivalent graphs. For our purposes, it is important to use instead an essentially equivalent formulation in terms of tagged r-valent graphs, due essentially to <cit.>.
We are primarily interested in the following U_q(sl_r)-modules and their tensor products.
* The standard U_q(sl_r)-module V_q (deforming the defining representation of sl_r) has standard ℂ(q)-basis v_1, …, v_r.
* The quantum exterior algebra ⋀_q^∙ V_q (see <cit.>) is a U_q(sl_r)-module (deforming the classical exterior algebra). On the generators v_i, the quantum exterior product satisfies the q-commutation relations
v_i ∧_q v_j = (-q) v_j ∧_q v_i  if i < j,
v_i ∧_q v_j = 0  if i = j.
The quantum exterior power ⋀_q^c V_q has ℂ(q)-basis {v_I : I ⊆ [r], |I| = c}, where we write v_I ≔ v_{i_1} ∧_q ⋯ ∧_q v_{i_c} for I = {i_1 > ⋯ > i_c}.
* The linear dual (⋀_q^c V_q)^* is a U_q(sl_r)-module with dual basis {v_I^*}.
As a shorthand, we write ⋀_q^{-c} V_q ≔ (⋀_q^c V_q)^*.
For I ⊆ [r], we write Ī ∈ 𝒜_r for the corresponding subset of negative numbers and set v_Ī ≔ v_I^*.
Given a type c = (c_1, …, c_n), where each c_j ∈ ±[r], let
⋀_q^c V_q ≔ ⋀_q^{c_1} V_q ⊗ ⋯ ⊗ ⋀_q^{c_n} V_q.
In this paper, we limit attention to the U_q(sl_r)-modules described in <Ref>. As the Karoubi envelope of the category of such modules recovers the entire category of finite-dimensional representations, this is not a major restriction.
The natural product map ⋀_q^{c_1} V_q ⊗ ⋀_q^{c_2} V_q → ⋀_q^{c_1+c_2} V_q given by u ⊗ v ↦ u ∧_q v is U_q(sl_r)-equivariant <cit.>. It may be described explicitly by v_I ∧_q v_J ↦ (-q)^{ℓ(I,J)} v_{I∪J} when I ∩ J = ∅, where ℓ(I, J) = |{(i, j) ∈ I × J : i < j}|. The more general n-ary product map is associative.
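A small computational sketch of this product rule is given below; the function name and the tuple-based output are ad hoc choices, not notation from the text.

```python
from itertools import product

def wedge_q(I, J):
    """Return (ell(I, J), I union J) encoding v_I ∧_q v_J = (-q)^ell v_{I∪J},
    or None when I ∩ J ≠ ∅ (the product vanishes)."""
    I, J = set(I), set(J)
    if I & J:
        return None
    ell = sum(1 for i, j in product(I, J) if i < j)
    return ell, tuple(sorted(I | J))

# v_{1,3} ∧_q v_{2,4}: the pairs (1,2), (1,4), (3,4) give ell = 3
print(wedge_q({1, 3}, {2, 4}))   # (3, (1, 2, 3, 4))
print(wedge_q({1, 2}, {2, 3}))   # None
```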
Elements of the morphism space Hom_{U_q(sl_r)}(⋀_q^c V_q, ⋀_q^d V_q) may be obtained by composition and tensoring from the natural product map as well as natural U_q(sl_r)-module maps arising from coproducts, duals, evaluation, coevaluation, and the identity (plus the pivotal isomorphism between a representation and its double dual). Each such morphism has explicit transition coefficients, which are all integral powers of q. Webs as in <cit.> are planar graphs between two horizontal lines which are locally given by the building blocks in <Ref> and which represent elements of this morphism space; we will call these CKM-style webs. The morphism corresponding to a CKM-style web is unchanged under planar isotopies fixing the boundary lines. The domain and codomain types c and d may be read off from the edges incident to the upper and lower horizontal lines, respectively, by reading edge multiplicities from left to right, where upward-oriented edges pick up a negative sign and correspond to dual exterior powers.
The middle two diagrams in <Ref> include a short, undirected edge called a tag, introduced in <cit.>. These middle diagrams may be thought of as degenerate cases of the trivalent vertex diagrams where c_1 + c_2 = r. When c = r, the middle diagrams correspond to the U_q(sl_r)-isomorphism ⋀_q^r V_q ≅ ℂ(q) and its dual <cit.>. Unlike other edges, tags end as leaves in the middle of the diagram.
As we will see, tags may be moved across an edge of multiplicity c at the cost of multiplying the invariant by (-1)^c(r-c). When r is odd, this factor is always 1, and tags may be omitted completely. Tags are hence a comparatively unimportant bookkeeping device and are frequently ignored or de-emphasized. Our web bases for r=4 will come with essentially canonical choices of tags, allowing us to ultimately omit them from our main result.
The conventions in <cit.> and <cit.> differ in their handling of certain negative signs, and other sources such as <cit.> differ in their handling of powers of q or other details. We follow the algebraic conventions of <cit.> throughout. However, for consistency with standard numbering conventions on plabic graphs, we draw CKM-style webs with the domain on top rather than on bottom. That is, our CKM-style webs are obtained by vertically flipping the diagrams from <cit.>.
We now turn to webs as in <cit.> (where they are referred to as "r-weblike subgraphs"). While <cit.> explicitly consider only the case q=1, they note that their diagrammatic construction nonetheless applies over U_q(sl_r) using the framework of <cit.>. For our purposes, it suffices to consider only the (tensor) invariants
[W]_q ∈ Inv_{U_q(sl_r)}(⋀_q^c V_q) ≔ Hom_{U_q(sl_r)}(⋀_q^c V_q, ℂ(q)),
for W a CKM-style web without vertices on the lower horizontal boundary; we draw such webs by closing the upper horizontal boundary to a circle. See <Ref>.
A U_q(sl_r)-web (or just web) is a planar graph W embedded in the disk such that
* W is properly bicolored, with black and white vertices;
* all vertices on the boundary circle have degree 1;
* boundary vertices are labeled b_1, b_2, โฆ, b_n in clockwise order;
* all interior vertices have a special โtagโ edge, which immediately terminates in the interior and not at a vertex;
* non-tag edges have multiplicities from [r];
* all vertices in the interior of the disk have incident edge multiplicities summing to r.
These webs are defined up to planar isotopy fixing the boundary circle.
Given a U_q(sl_r)-web W, the corresponding tensor invariant [W]_q is obtained by converting W to a CKM-style web as in <Ref>. A boundary vertex with edge multiplicity c corresponds to a domain factor ⋀_q^c V_q for a black vertex and ⋀_q^{-c} V_q for a white vertex. The interior vertices of the U_q(sl_r)-web correspond to composites
⋀_q^{c_1} V_q ⊗ ⋯ ⊗ ⋀_q^{c_k} V_q → ⋀_q^r V_q ≅ ℂ(q).
Consider the following CKM-style web and the corresponding U_q(sl_r)-web (graphics omitted).
They represent the same tensor invariant in Hom_{U_q(sl_5)}(V_q ⊗ V_q ⊗ V_q ⊗ ⋀_q^2 V_q, ℂ(q)), which is similar to a determinant. Boundary vertices in the web on the right are implicitly labeled b_1, b_2, b_3, b_4 like the face of a clock. The "point at infinity" separating vertex b_1 from vertex b_4 is marked on the boundary.
As noted above, the positions of tags around an internal vertex only affect the sign of the corresponding tensor invariant. More precisely, we have the following.
The relations in <Ref> hold, as do those obtained by reversing all arrows or switching all vertex colors.
The first relation is <cit.>. The second is a consequence of <cit.>.
§.§ Polynomial invariants and proper labelings
A U_q(sl_r)-web W of type c may be considered as a ℂ(q)-multilinear map
⋀_q^{c_1} V_q × ⋯ × ⋀_q^{c_n} V_q → ℂ(q).
Write elements of ⋀_q^{c} V_q as ∑_{|S| = |c|} x_S v_S for scalars x_S with S ∈ 𝒜_r. Thus we may interpret the invariant [W]_q as a polynomial
[W]_q ∈ ℂ(q)[x_{S,i}],
where, for each i, S ranges over the cardinality-|c_i| elements of 𝒜_r of the same sign as c_i. The polynomial ring is naturally ℤ_{≥0}^n-graded, where deg(x_{S,i}) = 𝐞_i, and the invariants [W]_q lie in the multilinear component (1, …, 1).
The web in <Ref> corresponds to the polynomial
∑_{a, b, c, {d, e}} (-q)^coinv(a, b, c, {d, e}) x_{a, 1} x_{b, 2} x_{c, 3} x_{{d, e}, 4},
where the sum is over all partitions {a} ⊔ {b} ⊔ {c} ⊔ {d, e} = {1, …, 5} and the exponent coinv(a, b, c, {d, e}) is the number of non-inversions of the permutation (a, b, c, d, e) if we choose d > e.
It is difficult to find a precise statement in the literature of a monomial expansion of general U_q(_r)-web invariants; see <cit.>, <cit.>, <cit.> for certain cases. We next give such a formula which is sufficient for our purposes.
A labeling φ of a U_q(𝔰𝔩_r)-web W is an assignment of subsets of [r] to each edge of W, where an edge of multiplicity m is assigned a subset of size m. A labeling is proper if the edges incident to each internal vertex partition the set [r].
The boundary word ∂(φ) of a labeling φ is the word w = w_1 ⋯ w_n in the alphabet ๐_r given by reading the labels of the edges adjacent to b_1, b_2, …, b_n, where the sign is positive if b_i is black and negative if b_i is white.
Given subsets S_1, …, S_m of [r], their coinversion number is
coinv(S_1, …, S_m) = |{(a, b) : a ∈ S_i, b ∈ S_j, a ≤ b, and i < j}|.
Given a U_q(𝔰𝔩_r)-web W with a labeling φ, the coinversion number of an internal vertex v is coinv(S_1, …, S_n), where S_1, …, S_n are the labels from φ around v, read in clockwise order starting from the tag on v. The coinversion number coinv_W(φ) is the sum of the coinversion numbers of all internal vertices.
Finally, suppose w is a boundary word. Create a word ŵ over [r] by replacing each negative letter S̄ (with S ⊆ [r]) by [r] ∖ S and then writing the elements of the subsets in w in increasing order. The coinversion number coinv_W(w) of w is coinv_W(ŵ), plus the sum of coinv(S, [r] ∖ S) over all such replaced letters.
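For concreteness, the coinversion statistic just defined is easy to compute; the following Python sketch (the function name and encoding are ours, not the paper's) also checks the parity identity used in the tag-moving argument below.

```python
from itertools import combinations

def coinv(sets):
    """Coinversion number of a tuple of subsets of [r]: the number of pairs
    (a, b) with a in S_i, b in S_j, a <= b, and i < j."""
    return sum(1
               for i, j in combinations(range(len(sets)), 2)   # i < j
               for a in sets[i] for b in sets[j] if a <= b)

# One term of the expansion above: the partition {1},{2},{3},{4,5} of [5].
print(coinv([{1}, {2}, {3}, {4, 5}]))   # 9

# Moving the last block (of size k) to the front changes the parity of coinv
# by k(r - k), matching the tag-moving sign (-1)^{k(r-k)} discussed below.
S, r = [{1, 3}, {2}, {4, 5}], 5
k = len(S[-1])
assert (-1) ** coinv([S[-1]] + S[:-1]) == (-1) ** (k * (r - k)) * (-1) ** coinv(S)
```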
Let W be a U_q(_r)-web. There are statistics _W(ฯ) โโค on proper labelings ฯ ofย W and _W(w) โ{ยฑ 1} on boundary words w such that
[W]_q
= โ_ฯ (-1)^โ_W(ฯ) q^_W(ฯ) x_โ(ฯ)
= โ_w (โ_ฯ : โ(ฯ) = w q^_W(ฯ)) _W(w) x_w,
where ฯ ranges over the proper labelings of W, w ranges over the boundary words of proper labelings, and x_w x_w_1, 1โฏ x_w_n, n. Furthermore:
* If ฯ and ฯ' are proper labelings of W with boundary words w and w', respectively, then
(-1)^โ_W(ฯ) - โ_W(ฯ') = (-1)^โ_W(w) - โ_W(w').
In particular, _W(w) = (-1)^โ_W(ฯ) for any proper labeling ฯ of W with boundary word w.
* The statistic _W(ฯ) does not depend on the tags of W.
For the first equality in (<ref>), consider the transition coefficients in the v_I and v_I^* bases of morphisms coming from composites of products, duals, evaluations, coevaluations, and pivotal isomorphisms (see p.ย pg:transition_coeff). An edge labeled S corresponds to v_S, and the proper labeling condition precisely ensures the relevant product map around an internal vertex is non-zero. The boundary labeling corresponds to the input variables and hence to the monomial x_โ(ฯ). The transition coefficients of these morphisms are of the form (ยฑ q)^i. The negative signs arise only from the product maps around internal vertices, and the overall sign is (-1)^โ_W(ฯ). Powers of q without accompanying negative signs arise only when the pivotal isomorphism is used. The first equality in (<ref>) now follows.
The second equality in (<ref>) asserts that โ_W(ฯ) depends only on โ(ฯ). This is a consequence of (<ref>), which is equivalent to <cit.>, in the case that W has type (1, โฆ, 1). The general type case follows from this one by โattaching clawsโ and tracking signs.
Finally, let W' be obtained from W by moving a tag past an edge of multiplicity k. By <Ref>,
[W']_q = (-1)^k(r-k) [W]_q.
If S_1 ⊔ ⋯ ⊔ S_n = [r] and |S_n| = k, then
(-1)^coinv(S_n, S_1, …, S_{n-1}) = (-1)^k(r-k) (-1)^coinv(S_1, …, S_n).
Hence
(-1)^coinv_W(φ) = (-1)^k(r-k) (-1)^coinv_{W'}(φ).
Applying (<ref>) to [W]_q and [W']_q now yields _W(ฯ) = _W'(ฯ).
Finally, we introduce a term order on โ(q)[x_i, S] with respect to which our U_q(_4)-web bases will be unitriangular.
Suppose w = w_1 โฏ w_n is a sequence of elements of ยฑ[r]. Let i be a new symbol representing either i or r-i+1. Let w = w_1โฏw_n. Define an order w >_ w' if and only if w is lexicographically greater than w', where 1 < 2 < โฏ < r. Extend โ and >_ to sequences w = w_1 โฏ w_n of elements of ๐_r, writing subsets in increasing โค-order. The total degree (w) isย n, which is also the total degree of x_w.
The grevlex order on โ(q)[x_i, S] is given by
x_w <_ x_w' โ (w) < (w') or ((w) = (w') and w >_w').
The tensor invariant associated to a web has -leading monomial given by proper labelings with the -minimal boundary word. In <Ref>, we will show that there is a unique proper labeling with -minimal boundary word associated to each of our U_q(_4)-basis webs.
ยง.ยง Plabic graphs
A major theme in this work is the view of webs as versions of plabic graphs.
A plabic graph is a planar undirected graph, embedded in a disk, whose boundary vertices have degree 1 and are labeled b_1,b_2,โฆ in clockwise order and whose interior vertices are each colored black or white.
Plabic graphs were introduced by Postnikov <cit.> in his study of the totally positive Grassmannian, and have since proven to be fundamental objects in the theories of cluster algebras <cit.> and KP solitons <cit.> and in the physics of scattering amplitudes <cit.>. See <cit.> for a recent survey.
For each boundary vertex b_i of a plabic graph G, one may obtain another boundary vertex b_π(i) by beginning a walk on the unique edge incident to b_i and turning left at each white vertex and right at each black vertex, until the boundary is reached. We call the sequence of vertices and edges in such a walk a trip strand. The function π is a permutation, called the trip permutation of G and denoted trip(G). The turning rules are called the rules of the road.
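The rules of the road are straightforward to implement once each vertex records its incident edges in clockwise order. The sketch below is our own illustration with a hypothetical adjacency-list encoding; depending on how the clockwise order is stored, the left/right offsets may need to be swapped.

```python
def trip_permutation(nbrs, color, boundary):
    """nbrs[v]: neighbors of v listed in clockwise order;
    color[v]: 'black' or 'white' for interior vertices;
    boundary: boundary vertices b_1, ..., b_n in clockwise order (degree 1).
    Returns the trip permutation as a dict {i: pi(i)} on 1-based indices."""
    index = {b: i + 1 for i, b in enumerate(boundary)}
    pi = {}
    for b in boundary:
        prev, cur = b, nbrs[b][0]          # walk along the unique edge at b
        while cur not in index:
            ring = nbrs[cur]
            k = ring.index(prev)
            if color[cur] == 'white':      # turn (maximally) left
                prev, cur = cur, ring[(k + 1) % len(ring)]
            else:                          # black vertex: turn (maximally) right
                prev, cur = cur, ring[(k - 1) % len(ring)]
        pi[index[b]] = index[cur]
    return pi
```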
Two plabic graphs G and G' are move-equivalent, denoted G โผ G' if they can be connected by a sequence of the moves shown in <Ref>.
It is easily checked that the moves (M1), (M2), (M3) from <Ref> preserve trip permutations, so if G ∼ G', then trip(G) = trip(G').
In addition to moves on plabic graphs, the reduction (R1), shown in <Ref>, will be important. Unlike moves, reductions do not preserve the trip permutation.
A plabic graph is called leafless if it contains no leaves (i.e., vertices of degree 1), other than perhaps leaves incident to the boundary. An isolated component is a connected component that is not connected to the boundary.
A leafless plabic graph G with no isolated components is called reduced if no G' โผ G admits a parallel edge reduction (R1).
Two trip strands โโ โ' in a plabic graph G are said to have a bad double crossing if they traverse some edge e in opposite directions and subsequently (in the forward direction on both strands) traverse another edge e' in opposite directions. A trip strand has an essential self-intersection if it traverses some edge in both directions. Finally, G is said to have a round trip if following the rules of the road (<Ref>) starting from some edge not incident to the boundary produces an infinite walk.
A leafless plabic graph G with no isolated components is reduced if and only if all of the following conditions hold:
(1) G has no round trips,
(2) G has no trip strands with essential self-intersections,
(3) G has no pair of trip strands with a bad double crossing, and
(4) if (G)(i)=i, then G has a leaf attached to the boundary vertex b_i.
The decorated trip permutation '(G) of a reduced plabic graph records the extra data of the color of the leaf attached to b_i for all fixed points (G)(i)=i.
Two reduced plabic graphs G and G' are move-equivalent if and only if '(G)='(G').
ยง HOURGLASS PLABIC GRAPHS AND SIX-VERTEX CONFIGURATIONS
We now specialize U_q(_r) to the case r=4 for <Ref>; see <Ref> for a discussion of how our methods can be adapted to unify the known web bases in other cases.
ยง.ยง Hourglass plabic graphs
An hourglass graph G is an underlying planar embedded graph G, together with a positive integer multiplicity m(e) for each edge e. The hourglass graph G is drawn in the plane by replacing each edge e of G with m(e)>1 with m(e) strands, twisted so that the clockwise orders of these strands around the two incident vertices are the same. For m(e) โฅ 2, we call this twisted edge an m(e)-hourglass, and call an edge with m(e)=1 a simple edge.
The degree (v) of a vertex v โ G is the number of edges incident to v, counted with multiplicity, while its simple degree (v) is its degree in the underlying graph G.
An hourglass plabic graph is a bipartite hourglass graph G, with a fixed proper black-white vertex coloring, embedded in a disk, with all internal vertices of degree 4, and all boundary vertices of simple degree one, labeled clockwise as b_1,b_2,โฆ, b_n. See <Ref> and <Ref> (lower right) for examples. We consider G up to planar isotopy fixing the boundary circle. We write (v)=1 for a black vertex v and (v)=-1 for a white vertex.
In later sections, we will view hourglass plabic graphs as U_q(_4)-webs, with hourglass multiplicities becoming the web edge multiplicities. However in this section we focus on developing the underlying combinatorics.
The type c of an hourglass plabic graph G is the sequence (c_1, …, c_n), where c_i is the degree of b_i, with sign positive if b_i is black and negative if b_i is white, for i = 1, …, n. We say the type is oscillating (so named because graphs of this type will be seen to correspond to (generalized) oscillating tableaux) if |c_i| = 1 for all i. A type denoted o is assumed throughout to be oscillating. For general G, write (G) for the associated hourglass plabic graph of oscillating type obtained from G by replacing each boundary vertex b of degree d > 1 (connected by a d-hourglass to an internal vertex v) with d boundary vertices of the same color, each connected to v by a simple edge. (If b is connected instead to another boundary vertex, first subdivide its hourglass by two new vertices using the move of <Ref>.)
Note that, for any hourglass plabic graph G, the underlying graph G is a bipartite plabic graph (<Ref>) using the vertex colors from G. Unlike plabic graphs, for which a single trip permutation is traditionally considered, we will associate a 3-tuple of trip permutations to each hourglass plabic graph.
Let G be an hourglass plabic graph of oscillating type with boundary vertices b_1, …, b_n. For 1 ≤ a ≤ 3, the a-th trip permutation trip_a(G) is the permutation of [n] obtained as follows: for each i, begin at b_i and walk along the edges of G, taking the a-th leftmost turn at each white vertex and the a-th rightmost turn at each black vertex, until arriving at a boundary vertex b_j. Then trip_a(G)(i) = j. The walk taken is called the trip_a-strand. See <Ref> for an example. Note that trip_1(G)^{-1} = trip_3(G) and trip_2(G) is an involution. We write trip_•(G) for the tuple of these trip permutations. For general G, we define trip_•(G) to be trip_• of its oscillization (G).
Suppose G is an hourglass plabic graph of oscillating type. Then the trip_1-strands of G agree with the trip-strands of the underlying plabic graph, so in particular trip_1(G) equals the trip permutation of the underlying plabic graph.
Observe that the sequence of vertices visited by a trip_1-strand is not affected by the multiplicities of the edges it follows.
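Under the same kind of encoding as before, the a-th turn is just an offset of a in the clockwise order of darts around a degree-4 vertex. The sketch below assumes the strand-level pairing of darts (including the hourglass twist) has already been computed and stored in a hypothetical dictionary twin; all names are ours.

```python
def trip_a(a, twin, color, boundary_darts):
    """a-th trip permutation (a = 1, 2, 3) of an hourglass plabic graph of
    oscillating type.  Every internal vertex has 4 darts 0..3 in clockwise
    order; twin[(v, s)] is the other end of the strand leaving dart (v, s);
    color[v] is 'black' or 'white'; boundary_darts[i] is the unique dart at
    boundary vertex b_{i+1}.  Returns a dict {i: trip_a(i)}."""
    start_of = {d: i + 1 for i, d in enumerate(boundary_darts)}
    perm = {}
    for i, d in enumerate(boundary_darts, start=1):
        v, s = twin[d]                                 # step onto the strand
        while (v, s) not in start_of:
            step = a if color[v] == 'white' else -a    # a-th leftmost / rightmost
            v, s = twin[(v, (s + step) % 4)]
        perm[i] = start_of[(v, s)]
    return perm
```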
As for plabic graphs, hourglass plabic graphs admit certain moves. These are the benzene move and square moves shown in <Ref> and the contraction moves shown in <Ref>. None of the vertices appearing in the moves is allowed to be a boundary vertex; hence, moves do not change the type of the hourglass plabic graph.
Two hourglass plabic graphs G and G' are move-equivalent, written G โผ G', if some sequence of benzene, square, and contraction moves transforms G into G'. An hourglass plabic graph is contracted if contraction moves have been applied to convert all subgraphs from the left side of <Ref> to the corresponding graph from the right side. We write (c) for the set of contracted hourglass plabic graphs of type c. Note that a contracted hourglass plabic graph of oscillating type only contains simple and 2-hourglass edges.
For example, the move equivalence class of the contracted hourglass plabic graph G from <Ref> contains three other graphs, constructed by applying the benzene move, the square move, or both.
Let G โผ G' be move-equivalent hourglass plabic graphs. Then:
(a) _โ(G)=_โ(G'), and
(b) the underlying plabic graphs G and G' are move-equivalent.
Part (a) is easily checked by observing that the benzene, square, and contraction moves all locally preserve _โ. For part (b), notice that each of the moves for hourglass plabic graphs is a composite of the plabic graph moves from <Ref>. For example, after passing to G, the hourglass square moves correspond to expanding the corners of the square with (M2) if necessary, applying the plabic square move (M1), and then contracting the remaining corners with another application of (M2).
As noted in the proof of <Ref>, the hourglass square moves and contraction moves are lifts to G of (composites of) usual plabic moves on G. The benzene move, on the other hand, is trivial on the level of G.
Compositions of benzene moves on an hourglass plabic graph can be seen as a transformation on a dimer cover (perfect matching) of the subgraph of vertices of simple degree 3, where the hourglass edges correspond to the included edges of the dimer cover. For hexagonal lattices, this transformation is well-known to correspond to adding and removing boxes from an associated plane partition (see e.g.ย <cit.>). This is discussed further in <Ref>.
ยง.ยง Fully reduced hourglass plabic graphs
The following definition is central to this work, and should be compared to the notion of reduced plabic graph (<Ref>).
An hourglass plabic graph G with no isolated components is called fully reduced if no G' โผ G has a 4-cycle containing an hourglass (a forbidden 4-cycle). We write (c) for the set of fully reduced hourglass plabic graphs of type c. We write (c) for the set of these which are contracted (as in <Ref>).
<Ref> and <Ref> provide alternative characterizations of fully reduced graphs which avoid the (a priori infinite) move-equivalence class exploration in <Ref>.
<Ref> implies, in particular, that a fully reduced hourglass plabic graph G does not have two or more distinct edges connecting a pair of vertices u,v; otherwise one of the edges could be expanded using a contraction move, creating a forbidden 4-cycle.
The following proposition partially justifies the use of โreduced" in โfully reduced". Note that the converse is not true: G being fully reduced is a stronger condition than G being reduced.
Let G be a fully reduced hourglass plabic graph. Then the underlying plabic graph G is reduced.
<Ref> is key to the proof of
<Ref>, which is an analogue of <Ref> for hourglass plabic graphs, and is a fundamental ingredient in the construction of our web basis.
Two fully reduced hourglass plabic graphs G_1 and G_2 of the same type are move-equivalent if and only if _โ(G_1)=_โ(G_2).
We now transform contracted fully reduced hourglass plabic graphs to another form which will be useful in the proofs of <Ref> and <Ref>. These proofs appear in <Ref>.
ยง.ยง Symmetrized six-vertex configurations
In this subsection, we define certain six-vertex configurations on 4-valent graphs embedded in a disk and show these configurations are in bijection with contracted hourglass plabic graphs. The well oriented configurations (<Ref>) correspond under this bijection to the fully reduced hourglass plabic graphs (<Ref>). In the later sections, we convert freely between these objects, as convenient.
Let G be a planar graph embedded in a disk, with all internal vertices of degree 4 and all boundary vertices of degree 1 labeled clockwise as b_1,โฆ,b_n. A six-vertex configuration D on G is a directed graph with underlying undirected graph G such that the orientation of edges around each vertex is any rotation of:
[graphics: the three allowed local orientations, up to rotation — a source (all four edges directed outward), a sink (all four directed inward), and a transmitting vertex (two adjacent incoming and two adjacent outgoing edges)]
These are sources, sinks, and transmitting vertices, respectively; note that there are six possibilities for the orientation around a single vertex. We write (G) for the set of six-vertex configurations on G.
Symmetrized six-vertex configurations can be seen as a symmetrized version of the well-studied six-vertex model (see e.g. <cit.>),
a model equivalent to ours by reversing the direction of all vertical edges in the allowed vertex configurations.
On a square grid, six-vertex configurations with domain wall boundary conditions (in our convention, corresponding to all boundary edges oriented inward) are well-known to correspond to alternating sign matrices, matrices whose entries are in {0,ยฑ1}, whose rows and columns sum to 1, and whose nonzero entries alternate in sign along each row and column (see e.g. <cit.>). On our graphs, sinks correspond to 1, sources to -1, and transmitting vertices to 0. See <Ref> for further discussion.
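For readers who want to experiment, the alternating sign matrix conditions recalled above can be verified directly; this generic checker is our own sketch, not code from any of the cited sources.

```python
def is_asm(M):
    """Check the ASM conditions: entries in {0, 1, -1}, every row and column
    sums to 1, and the nonzero entries alternate in sign along each line."""
    lines = [list(row) for row in M] + [list(col) for col in zip(*M)]
    for line in lines:
        if any(x not in (-1, 0, 1) for x in line) or sum(line) != 1:
            return False
        nonzero = [x for x in line if x != 0]
        # alternation plus the sum condition forces the pattern 1, -1, ..., 1
        if nonzero != [(-1) ** k for k in range(len(nonzero))]:
            return False
    return True

# Under the dictionary of the text (sink -> 1, source -> -1, transmitting -> 0),
# a domain-wall six-vertex configuration on a square grid yields such a matrix.
print(is_asm([[0, 1, 0], [1, -1, 1], [0, 1, 0]]))   # True
```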
Like (hourglass) plabic graphs, six-vertex configurations have a notion of trip permutation.
Let D โ(G). Define _2(D) as the permutation of [n] obtained as follows: for each i, begin at b_i and walk along the edges of G, going straight across each vertex to the opposite edge, until arriving at a boundary vertex b_j. Then _2(D)(i)=j and each walk constructed by this rule is called a _2-strand.
We define _1(D) as the permutation of [n] obtained as follows: for each i, begin at b_i and walk along the edges of D, turning as indicated in <Ref> until arriving at a boundary vertex b_j. Then _1(D)(i)=j and each such walk is called a _1-strand. Define _3(D) via reversal of the _1-strands. Note that _1(D)^-1=_3(D) and
_2(D) is an involution.
As with (hourglass) plabic graphs, we consider an equivalence relation on six-vertex configurations generated by moves. These are the YangโBaxter and ASM moves shown in <Ref>.
The moves of <Ref> are so named because of their correlation to studied moves on six-vertex configurations and alternating sign matrices. The YangโBaxter move corresponds to the star-triangle relation associated to the YangโBaxter equation <cit.>, while the ASM move may be seen as a fiber toggle in the alternating sign matrix tetrahedral posetย <cit.> or as travelling along an edge of the alternating sign matrix polytope <cit.>. Appropriate compositions of these toggles produce the well-studied gyration action ofย <cit.> on fully-packed loops, objects in bijection with alternating sign matrices. Fully-packed loops on generalized domains (such as ours) were considered by Cantini and Sportiello in their proof of the RazumovโStroganov conjecture <cit.> and relate to chained alternating sign matrices <cit.>.
Two six-vertex configurations D and D' are move-equivalent, written D โผ D', if some sequence of YangโBaxter and ASM moves transforms D into D'.
A six-vertex configuration D with no isolated components is well oriented if for every D' with Dโผ D', the underlying undirected graph of D' is simple (no loops and no 2-cycles) and every 3-cycle of D' is (cyclically) oriented. We write (G) for the set of well oriented configurations D from (G).
Consider the following properties that D โ(G) may have:
(P1) The _2-strands of D do not double-cross or self-intersect.
(P2) The segments between the three intersection points of three pairwise crossing _2-strands are oriented (big triangles are oriented).
(P3) Among any four _2-strands, there is a pair of strands which do not cross.
<Ref> shows that well oriented six-vertex configurations are in bijection with contracted fully reduced hourglass plabic graphs. The conditions of <Ref> parallel those for full-reducedness. However, <Ref> gives an effective way to determine whether a six-vertex configuration is well oriented by checking (P1) and (P2).
If D โ(G) satisfies (P1) and (P2), then it satisfies (P3).
If four _2-strands in D are pairwise crossing, then by (P1) these strands form (4 choose 3) = 4 big triangles, but there is no way to consistently direct the edges of these triangles satisfying (P2).
A configuration D โ(G) is well oriented if and only if (P1) and (P2) hold.
Suppose D โ(G) satisfies (P1) and (P2) and D' โผ D. YangโBaxter and ASM moves do not change the multiplicity of intersection of any pair of _2-strands. Since _2-strands in D do not double-cross or self-intersect, the same is true for D', and in particular D' has no loops or 2-cycles. We now argue that D' has oriented 3-cycles. We may assume without loss of generality that D' differs from D by a single move. An ASM move applied to a face F of D cannot create an unoriented 3-cycle, for such a cycle would need to contain an edge bordering F, in which case D would contain an unoriented big triangle (โ_1 โ_2 โ_4 on the left side of <Ref>), violating (P2). A YangโBaxter move applied to D cannot create an unoriented 3-cycle either, for the new 3-cycles in D' (see the right side of <Ref>) are forced to be oriented by property (P2) of D. Thus D' has oriented 3-cycles and D is well oriented.
Now suppose that D is well oriented; we show that (P1) and (P2) must hold. We begin by proving (P2) in the case that D satisfies (P1). Under this assumption, if โ_1,โ_2,โ_3 are three pairwise crossing _2-strands, let T=โ_1โ_2โ_3 denote the big triangle they form. We prove that the boundary of T is oriented by induction on the number of vertices inside T, inclusive of the boundary. As a base case, if T is a 3-cycle, it must be oriented, since D is well oriented. Otherwise, some strand must cross through T. Let v be any vertex on the boundary of T, but not a corner of T. The strand passing into T at v forms a smaller big triangle T' with the boundary of T. By induction T' is oriented, so v is a transmitting vertex. Thus the edge directions along any side of T all agree (otherwise some v along the side would be a source or sink). Since T' is oriented, the orientations of the sides it crosses, say โ_1 and โ_2, must agree. If T is not oriented, it then follows that the remaining side โ_3 has a single edge in T. In this case, all strands crossing through T must cross โ_1 and โ_2 and not cross each other, forming a nested set of triangles sharing vertex โ_1 โฉโ_2 and sharing a common orientation. By a sequence of YangโBaxter moves at โ_1 โฉโ_2, we may remove these strands from T. Since D is well oriented, the resulting D' has oriented 3-cycles; in particular, โ_3 is oriented compatibly with โ_1 and โ_2. Thus T is oriented.
Finally, we prove (P1). Suppose D has a double-crossing or self-intersection in its _2-strands and let R denote the region of the graph this bounds. Suppose also that R contains the minimal number of vertices among all such regions in all move-equivalent configurations. In particular, there are no double-crossings or self-intersections inside R. If R were bounded by a self-intersection, any other _2-strand crossing into R would create a smaller double-crossing, so we may assume that R is bounded by a double-crossing between _2-strands โ_1 and โ_2. Therefore, viewing the interior of R as a well oriented six-vertex configuration Dโ with โ_1 and โ_2 forming the boundary circle, (P1) is satisfied. By the above argument, so is (P2), and by <Ref>, Dโ satisfies (P3). Since D is well oriented, there must be a pair of strands intersecting inside R. Choose such a pair a and b so that the number of vertices in a b โ_1 is minimized (see <Ref>). All strands cutting through this triangle must cross both a and b, and they may not cross each other by (P3). Thus a sequence of YangโBaxter moves may be applied at a โฉ b to remove these strands from the triangle. Finally, a YangโBaxter move may be applied to abโ_1 to obtain a smaller region bounded by a double-crossing, contradicting the minimality of R. This completes the proof of (P1).
The numbers o=(o_1,โฆ,o_n) where o_i=1 (resp. o_i=-1) if the edge incident to b_i is directed inwards (resp. outwards) form the boundary conditions of a six-vertex configuration. We write (o) (and (o)) for the (well oriented) six-vertex configurations with boundary conditions o.
We now define the correspondence ฯ between contracted hourglass plabic graphs and six-vertex configurations.
Given G โ(o), ฯ(G) is the six-vertex configuration formed from G by orienting all simple edges from black vertex to white, removing the vertex coloring, and contracting each 2-hourglass and incident vertices to a single 4-valent vertex. <Ref> shows an example of an hourglass plabic graph and its image under ฯ.
For G โ(o), we have ฯ(G) โ(o).
Let D =ฯ(G). When a 2-hourglass in G is collapsed to a vertex in D, the resulting vertex is still 4-valent. Every white vertex in G incident to no hourglass becomes a sink in D. Every black vertex in G incident to no hourglass becomes a source in D. Each contracted hourglass results in a transmitting vertex. The map ฯ converts boundary colors to edge orientations, so ฯ(G) has boundary conditions o.
The following notion will be used in <Ref>.
For D โ(o), a proper labeling is a labeling of the edges by elements of {1,2,3,4} such that
the labels adjacent to each source or sink are distinct and each transmitting vertex has its two incoming edges labeled a ≠ b and its two outgoing edges also labeled a and b.
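This local condition can be tested vertex by vertex. The encoding below (separate lists of incoming and outgoing edge labels at a vertex) is our own illustrative choice.

```python
def vertex_is_properly_labeled(in_labels, out_labels):
    """Local test at one vertex of a labeled six-vertex configuration.
    Sources (4 outgoing) and sinks (4 incoming) need four distinct labels;
    a transmitting vertex (2 in, 2 out) needs incoming labels {a, b}, a != b,
    agreeing with its outgoing labels."""
    labels = list(in_labels) + list(out_labels)
    if any(x not in (1, 2, 3, 4) for x in labels):
        return False
    if len(in_labels) == 0 or len(out_labels) == 0:        # source or sink
        return len(labels) == 4 and len(set(labels)) == 4
    if len(in_labels) == 2 and len(out_labels) == 2:       # transmitting vertex
        return len(set(in_labels)) == 2 and set(in_labels) == set(out_labels)
    return False

print(vertex_is_properly_labeled([], [1, 2, 3, 4]))   # source: True
print(vertex_is_properly_labeled([2, 4], [4, 2]))     # transmitting: True
print(vertex_is_properly_labeled([3, 3], [3, 3]))     # transmitting: False
```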
The map ฯ : (o) โ(o) is a bijection intertwining benzene moves with YangโBaxter moves, intertwining square moves with ASM moves, and preserving trip permutations and proper labelings. Moreover, G is fully reduced if and only if ฯ(G) is well oriented.
The map ฯ has an inverse which colors sinks white, sources black, and expands each transmitting vertex to an hourglass from a black to a white vertex in the direction of transmission. The output of ฯ^-1 is contracted, since this procedure does not produce 3- or 4-hourglasses or adjacent pairs of 2-hourglasses.
The YangโBaxter move of <Ref>, is the image under ฯ of the benzene move of <Ref>. The ASM moves of <Ref>, are the images under ฯ of the corresponding square moves of <Ref>. The preservation of trip permutations and proper labelings is by construction. The behavior in <Ref> was chosen to reflect the rules of the road after applying ฯ, while the labelings of <Ref> correspond under ฯ^-1 to proper labelings in the usual sense of hourglass plabic graphs (by labeling hourglass edges with the complement in {1,2,3,4} of the labels that appear on adjacent edges).
Suppose Gโ(o). Then G has no isolated components and no G'โผ G has a 4-cycle containing an hourglass. An unoriented 3-cycle, a 2-cycle, or a loop corresponds under ฯ^-1 to a 4-cycle containing an hourglass. Thus D has no isolated components and no D' โผ D=ฯ(G) contains an unoriented 3-cycle, a 2-cycle, or a loop. By definition, this means D is well oriented. The converse works similarly.
ยง.ยง Matching diagrams
In this subsection, we establish <Ref>, which will be used in the proof of <Ref>.
A matching diagram M is the underlying graph of some well oriented six-vertex configuration D (with edge directions forgotten). The matching of M is _2(D), viewed as a matching of the boundary vertices b_1,โฆ,b_n. By <Ref> and <Ref>, among any four strands in a matching diagram M, there is a pair which does not cross (that is, M has no 4-crossings). We may apply YangโBaxter moves to matching diagrams as to six-vertex configurations, but without remembering the edge orientations.
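Because strands of a matching diagram neither self-intersect nor double cross, two strands cross exactly when their boundary endpoints interleave, so the no-4-crossing condition can be tested directly on the matching. The following sketch uses our own encoding of a matching as a list of index pairs.

```python
from itertools import combinations

def strands_cross(p, q):
    """Endpoints interleave around the boundary circle <=> the strands cross."""
    (a, b), (c, d) = sorted(p), sorted(q)
    return (a < c < b < d) or (c < a < d < b)

def has_four_crossing(matching):
    """matching: list of pairs (i, j) of boundary indices.
    Returns True if some four strands pairwise cross."""
    return any(all(strands_cross(p, q) for p, q in combinations(quad, 2))
               for quad in combinations(matching, 4))

# This matching has four pairwise-crossing strands, so it cannot arise
# from a matching diagram.
print(has_four_crossing([(1, 5), (2, 6), (3, 7), (4, 8)]))   # True
```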
The following is closely related to the TitsโMatsumoto theorem (see, e.g., <cit.>) on reduced words of permutations.
Any matching diagrams for the same matching are connected by YangโBaxter moves.
We will show that all matching diagrams with a fixed matching m are move-equivalent by constructing a canonical matching diagram Mฬ with matching m to which all diagrams are connected.
We denote a strand in a matching diagram with endpoints b_i and b_j with i<j by โ_i. The region of the disk bounded by โ_i and the boundary segments connecting b_i,b_i+1,โฆ,b_j is called the inside of โ_i and the remainder of the disk, bounded by โ_i and the boundary segments connecting b_j, b_j+1,โฆ,b_n, b_1,โฆ,b_i is the outside.
The matching diagram Mฬ is uniquely characterized by the following property (the abc-property): if a<b<c and strands โ_a, โ_b, โ_c all cross each other, then โ_b and โ_c cross on the outside of โ_a. The abc-property is simultaneously achievable for all a<b<c since Mฬ may be drawn according to the following recipe: Cut the boundary of the disk between b_n and b_1 and unfold the boundary onto a line; then, for each i=1,2,โฆ in order, draw โ_i with an initial vertical segment, during which it crosses exactly those previously drawn strands โ_j such that โ_i,โ_j are required to cross in m; finish drawing โ_i with a horizontal segment followed by a vertical segment, during which it does not cross any previously drawn strands. See <Ref> for an example. Furthermore the abc-property uniquely specifies the matching diagram Mฬ, since it determines the relative positions of the crossings in every 3-crossing โ_a,โ_b,โ_c.
Suppose that M is a matching diagram with matching m having some number d>0 of triples a<b<c which violate the abc-property. Among these violations, choose the one with a maximal, b minimal with respect to a, and c maximal with respect to a,b. By assumption, โ_b and โ_c cross on the inside of โ_a. If no strands enter the region bounded by โ_a,โ_b,โ_c, then we may immediately apply a YangโBaxter move to eliminate this violation. Otherwise there is some strand entering this region which crosses exactly two of โ_a,โ_b,โ_c, since a 4-crossing is impossible by <Ref>; see <Ref>. But in any case, the new strand โ_a',โ_b', or โ_c' creates a new violation {โ_a',โ_b,โ_c}, {โ_a,โ_b',โ_c}, or {โ_a,โ_b,โ_c'} of the abc-property which contradicts the minimality and maximality assumptions imposed on a,b,c. Thus we can reduce the number of violations of the abc-property using the YangโBaxter move, and we see by induction that M โผMฬ.
ยง.ยง Well oriented configurations and monotonicity
We now establish a key _โ-theoretic property of well oriented configurations and fully reduced hourglass plabic graphs.
A symmetrized six-vertex configuration D is monotonic if _2-strands do not revisit vertices or double cross, and if for every _1-strand โ_1, passing through vertices U_1, and every _2-strand โ_2, passing through vertices U_2, the vertices in the intersection U_1 โฉ U_2 are consecutive along both โ_1 and โ_2. Note that we could equivalently impose this condition for _3-strands instead of _1-strands.
Let D be a well oriented symmetrized six-vertex configuration, then D is monotonic.
We use the following lemma in the proof of <Ref>.
Let โ be a _1-strand in a symmetrized six-vertex configuration D. Then the following hold.
(a) Suppose โ passes through edges e = uv and e' = vw. Then โ turns left at v if e,e' are oriented toward v, turns right if e,e' are oriented away from v, and goes straight through v otherwise.
(b) The turns of โ alternate between left and right.
A simple inspection of the turning rules for _1 yields part (a). For part (b), note that if โ has just turned right at v, then the orientation of โ agrees with the orientation of the edge it is traversing, and this will continue to be the case as โ passes straight through any vertices. While this condition is satisfied, โ cannot turn right by part (a). Thus โ must take a left turn between any two right turns, and, symmetrically, must take a right turn between any two left turns.
Let D be a well oriented symmetrized six-vertex configuration. By <Ref>, _2-strands do not revisit vertices or double cross. Suppose that D is not monotonic, with some _1-strand โ_1 intersecting a _2-strand โ_2, separating from it after some vertex v, and then re-intersecting at some vertex v' (see <Ref>).
Let v_1,โฆ,v_k be the vertices between v,v' along โ_1 at which โ_1 turns, and write v=v_0, v'=v_k+1 for convenience. By <Ref>, these turns alternate between left and right, with inward pointing edges at left turns and outward pointing edges at right turns, and with orientations not changing along the intervening segments on which โ_1 goes straight.
If k=1, then โ_2 and the _2-strands s,s', passing through {v,v_1} and {v_1,v'}, respectively, form an unoriented triangle, contradicting the fact that D is well oriented.
Thus we have k โฅ 2. The polygon with vertices v,v',v_1,โฆ,v_k has two _2-strands entering the interior of the polygon at v_2; these strands s and s' must recross the boundary of the polygon elsewhere. Let s be the strand passing through v_1 and v_2.
The strand s may not exit the polygon through any of the segments v v_1, v_1 v_2, v_2 v_3, or vv', lest it create an unoriented triangle or double crossing of _2-strands. Thus s exits the polygon at some vertex u, further along โ_1. But then, replacing โ_2 by s, v by v_2, and v' by u, we have a smaller instance of _1- and _2-strands intersecting, separating, and re-intersecting; by induction on k this is impossible.
An hourglass plabic graph G is monotonic if _2-strands do not revisit vertices or double cross, and if for every _1-strand โ_1, passing through vertices U_1, and every _2-strand โ_2, passing through vertices U_2, the vertices in the intersection U_1 โฉ U_2 are consecutive along both โ_1 and โ_2. Note that we could equivalently impose this condition for _3-strands instead of _1-strands.
<Ref> is immediate from the definition of ฯ (<Ref>).
A symmetrized six-vertex configuration D is monotonic if and only if the contracted hourglass plabic graph ฯ^-1(D) is.
An hourglass plabic graph G is fully reduced if and only if (G) is fully reduced.
A sequence of moves on (G) produces a forbidden 4-cycle if and only if the same sequence of moves applied to G does.
An hourglass plabic graph G is fully reduced if and only if it is monotonic.
First, suppose G is fully reduced. We may assume without loss of generality that G is contracted, since contraction moves do not affect monotonicity.
Suppose moreover that G is of oscillating type. Then by <Ref> , D=ฯ(G) is well oriented, and thus by <Ref> D is monotonic. By <Ref>, we conclude that G is monotonic.
If G is of general type, <Ref> implies that (G) is fully reduced, and so by the above argument (G) is monotonic. It is clear then by construction that G must also be monotonic.
For the converse, <Ref> demonstrates that each of the forbidden 4-cycles fails to be monotonic. Furthermore, the moves for hourglass plabic graphs preserve monotonicity. Thus monotonic graphs are fully reduced.
ยง.ยง Move equivalence and trip permutations
We now give some final _โ-theoretic lemmas and the proofs of <Ref> and <Ref>.
Let Gโ(o), and let โ_1 and โ_2 be _1- and _2-strands, respectively. Let v_1,โฆ,v_k be the vertices shared by โ_1 and โ_2, ordered according to the direction of โ_1. For i=1,โฆ,k-1, let e_i be the edge between v_i and v_i+1. Then:
(a) The multiplicities m(e_i) alternate between 1 and 2.
(b) If m(e_1)=2 then k=2.
(c) k is even, so (v_k) โ (v_1); thus if v_1 and v_k are not boundary vertices, โ_1 ends on the opposite side of โ_2 from which it started.
(d) If m(e_i)=1, then (v_i)=(v_1).
Since G is contracted and of oscillating type, it has no edges of multiplicity greater than two, and no pair of consecutive 2-hourglasses. The case m(e_i)=m(e_i+1)=1 is impossible, since โ_1 and โ_2 could not share both edges; this establishes (a).
If m(e_1)=2, then โ_1 and โ_2 must enter v_1 on different simple edges, share e_1, and then leave v_2 on different simple edges, so k=2.
A simple case check shows that if โ_1 and โ_2 share at least one vertex, then they share an edge, so k=0 or k โฅ 2. If k>2, then by parts (a) and (b), we know m(e_1)=m(e_3)=โฏ=1 and m(e_2)=m(e_4)=โฏ=2. If m(e_k-1)=2, then letting โ_3 be the _3-strand obtained by reversing โ_1, the same arguments as above would require k=2. Thus we must have m(e_k-1)=1 and k is even, proving (c). Finally, if m(e_i)=1, then k>2, so m(e_1)=1; then (d) follows from (a) and the bipartiteness of G.
Let G โ(c), let โ be a _1-strand in G, and let v be an interior vertex. Then โ visits v at most once.
By <Ref>, G is monotonic. Suppose โ visits v twice, enclosing a polygon with vertices v=v_0,v_1,โฆ,v_k. Since G is bipartite, this polygon has a vertex, say v_i, at which some _2-strand โ_2 enters the interior of the polygon. Now, โ_2 must also exit the polygon at some vertex v_j. Since G is fully reduced, โ_2 may not have a self intersection, so v_i โ v_j. The only other possibility consistent with the monotonicity of the pair โ_1,โ_2 is v_j=v_i+1. But then G must contain two separate edges between v_i and v_i+1 (the edges traversed by โ_1 and โ_2), impossible in a fully reduced graph.
Let G โ(c) have underlying plabic graph G. We wish to show that G is reduced, which we will do by verifying that the conditions of <Ref> are satisfied.
First, the underlying plabic graph is leafless: any vertex v of simple degree 1 must be incident in G to a 4-hourglass, and since all vertices of G have degree 4, the two endpoints of that hourglass would form an isolated component in G, violating full reducedness. This also implies that the underlying plabic graph has no isolated components.
Assume now that G is of oscillating type. <Ref> and <Ref> ensure that G satisfies conditions (1),(2), and (4) of <Ref>, so it remains to check that G has no bad double crossings.
Suppose that two _1-strands โ_1 and โ_1' cross; then in G, we have one of the local configurations shown in <Ref>. In each case, _2-strands โ_2 and โ_2' are indicated. By <Ref>, the _1-strands diverge from the _2-strands as indicated in the figure. In particular, after the crossing, โ_2 and โ_2' lie between โ_1 and โ_1'. Since G is fully reduced, it is monotonic by <Ref>. Since each of โ_1,โ_1' has already shared vertices with both โ_2 and โ_2', neither can retouch these strands, and thus โ_1,โ_1' cannot recross in the forward direction.
If G is not of oscillating type, the above argument implies that (G) is reduced. It then follows that G is likewise reduced.
Let G_1,G_2 โ(o), and suppose G_1=G_2=P and that the symmetrized six-vertex configurations ฯ(G_1) and ฯ(G_2) have the same underlying matching diagram M. Then G_1=G_2.
We show, by induction on the number of vertices, how to uniquely recover G from P and M.
If M contains some _2-strand that does not intersect any other _2-strand, then both M and P may be split in two pieces, taking that strand as part of the boundary of the pieces, giving smaller pairs of plabic graphs and matching diagrams which, by induction, may each be uniquely lifted to reconstruct G. Thus we may assume that M contains no such strand.
In this case, there exists some consecutive boundary vertices b and b' which are both adjacent in M to the same internal vertex v. The possible neighborhoods of b, b' in P are shown in <Ref>. In the latter two cases, the edge e must come from a 2-hourglass in G, allowing us to uniquely reconstruct this neighborhood of G. After doing so, we can replace the boundary segment surrounding b,b' with the dashed line shown in the figure, to obtain a smaller matching diagram M_1 and plabic graph P_1. By induction, that information allows us to uniquely reconstruct the remainder of G.
Let G_1,G_2 โ(c). If G_1 โผ G_2, then _โ(G_1)=_โ(G_2) by <Ref>(a).
Conversely, suppose _โ(G_1)=_โ(G_2). Since any hourglass plabic graph is move-equivalent to a unique contracted graph, we may also assume without loss of generality that G_1,G_2 are contracted.
Suppose first that G_1,G_2 are of oscillating type.
By <Ref>, the underlying plabic graphs P_1=G_1 and P_2=G_2 are reduced and by <Ref>, we have (P_1)=_1(G_1)=_1(G_2)=(P_2). Thus, by <Ref>, P_1 and P_2 are equivalent under the moves from <Ref>. Consider a sequence of moves transforming P_1 into P_2 that minimizes the number s of times that the square move (M1) is applied, and also such that the only moves occurring before the first square move are those necessary to make the square move applicable. Let F be the face on which the first square move acts. Since G_1 is bipartite, so is P_1, and since G_1 is contracted with all internal vertices of degree 4, P_1 has all internal vertices of degree 3 or 4. Since P_1 is reduced, it has no 2-gonal faces. From this we conclude that F is already a bipartite square face of P_1, so the only moves occurring before the square move at F are the expansions of any degree-4 corners of F into two degree-3 vertices. Let P_1^โ be the plabic graph obtained by applying these expansion moves, then the square move at F, and then contracting with their unique neighbors outside of F all those corners of F that were not expanded. Then P_1^โ =G_1^โ , where G_1^โ is obtained from G_1 by applying the appropriate square move from <Ref>. Furthermore, P_1^โ can be transformed into P_2 by s-1 square moves. Hence, by induction on s, there exists a contracted fully reduced hourglass plabic graph G_1' with G_1 โผ G_1' and G_1'=P_2.
It remains to show that G_1' may be transformed into G_2 by a sequence of benzene moves. Let ฯ be the bijection from <Ref> and define D_1'=ฯ(G_1') and D_2=ฯ(G_2). The symmetrized six-vertex configurations D_1',D_2 are well oriented by <Ref> and satisfy _2(D_1')=_2(D_2). Thus, by <Ref>, the matching diagram M_1' for D_1' can be transformed into the matching diagram M_2 for D_2 by a sequence of YangโBaxter moves. Consider the hourglass plabic graph G_1โ obtained by transporting these YangโBaxter moves to benzene moves on G_1' via ฯ^-1. Since benzene moves do not change the underlying plabic graph, we have G_1โ=G_1'=P_2=G_2, and by construction the matching diagram for ฯ(G_1โ) is equal to the matching diagram M_2 for ฯ(G_2). By <Ref> we conclude that in fact G_1โ=G_2, as desired.
If G_1,G_2 are not of oscillating type, we oscillize to (G_1),(G_2), which are fully reduced by <Ref>. Indeed, these still satisfy _โ((G_1))=_โ((G_2)), so by the previous argument, (G_1) โผ(G_2). Since benzene and square moves cannot be applied to boundary faces, the sequence of moves yielding this move equivalence can also be applied to yield G_1 โผ G_2.
The following useful fact is immediate from the preceding proof.
Suppose that G, G' โ(c) are move-equivalent. Then they may be connected by a sequence of benzene moves followed by a sequence of square moves, or vice versa.
ยง FLUCTUATING TABLEAUX AND SEPARATION LABELINGS
In this section, we recall the notion of fluctuating tableaux of r rows from <cit.>. For r = 4, we establish a map (shown later, in <Ref>, to be a bijection) between move-equivalence classes of fully reduced hourglass plabic graphs and rectangular fluctuating tableaux. See <Ref> for related constructions outside the r=4 setting.
ยง.ยง Fluctuating tableaux and promotion permutations
In this subsection, we recall needed material from the companion paper <cit.>.
A generalized partition with r rows is a tuple λ = (λ_1, …, λ_r) ∈ ℤ^r where λ_1 ≥ ⋯ ≥ λ_r. We visualize generalized partitions as diagrams, which are semi-infinite collections of cells.
We write 𝐞_i for the i-th standard basis vector of ℤ^r. If S ∈ ๐_r is a positive subset of {±1, …, ±r}, we define 𝐞_S = ∑_{i ∈ S} 𝐞_i, while if S is a negative subset, we define 𝐞_S = -∑_{i ∈ S} 𝐞_{-i}. We say two r-row generalized partitions λ, μ differ by a skew column if λ = μ + 𝐞_S for some S ∈ ๐_r. For c ≥ 0, we furthermore write μ →^c λ if λ is obtained from μ by adding a skew column of c boxes and μ →^{-c} λ if λ is obtained from μ by removing a skew column of c boxes.
An r-row fluctuating tableau of length n is a sequence
T = (0 = λ^0 →^{c_1} λ^1 →^{c_2} ⋯ →^{c_n} λ^n)
of r-row generalized partitions such that λ^{i-1} and λ^i differ by a skew column, obtained by adding c_i cells or removing -c_i cells, for all 1 ≤ i ≤ n. The partition λ^n is called the shape of T. The sequence c = (c_1, …, c_n) ∈ {0, ±1, …, ±r}^n is the type of T. Let FT(r, λ, c) be the set of fluctuating tableaux with r rows, shape λ, and type c. We will drop some parameters from FT(r, λ, c) as convenient. In particular, unless otherwise specified, we assume r = 4 in the remainder of the paper.
An r-row fluctuating tableau is rectangular if its shape λ satisfies λ_1 = ⋯ = λ_r. In this case λ is called a generalized rectangle. We write (c) for the set of rectangular fluctuating tableaux of type c, where no c_j = 0. We say a fluctuating tableau is of oscillating type if each c_j ∈ {±1}.
Building on work of Patrias <cit.> in the oscillating case, we visualize fluctuating tableaux by writing i in the added cells of ฮป^i - ฮป^i-1 or i in the removed cells of ฮป^i-1 - ฮป^i; see <Ref> for an example.
Fluctuating tableaux interest us because of the following fact.
For any type c, we have
|(c)| = dim_ℂ Inv_{𝔰𝔩_r}(⋀^c V) = dim_{ℂ(q)} Inv_{U_q(𝔰𝔩_r)}(⋀_q^c V_q).
The lattice word associated to a fluctuating tableau T ∈ (c) is the word L = L(T) = w_1 ⋯ w_n on the alphabet ๐_r, where λ^i = λ^{i-1} + 𝐞_{w_i}. We may recover T from L(T), so we sometimes identify T and L(T).
A word w is a lattice word of a fluctuating tableau if and only if for every prefix w_1 ⋯ w_k and every 1 ≤ a ≤ b ≤ r,
(𝐞_{w_1} + ⋯ + 𝐞_{w_k})_a ≥ (𝐞_{w_1} + ⋯ + 𝐞_{w_k})_b,
where 𝐞_{S̄} ≔ -𝐞_S. More concretely, in each prefix we require the number of a's minus the number of ā's to be weakly greater than the number of b's minus the number of b̄'s. A fluctuating tableau is rectangular if and only if equality holds when k = n, in which case we call L balanced.
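The prefix condition is easy to check mechanically. In the sketch below (our own encoding), a letter is a set of nonzero integers with absolute values in [r], positive for an added column and negative for a removed one.

```python
def is_lattice_word(word, r=4):
    """Check the prefix condition (e_{w_1}+...+e_{w_k})_a >= (...)_b for a <= b."""
    content = [0] * (r + 1)                  # content[a] = net number of a's so far
    for letter in word:
        for i in letter:
            content[abs(i)] += 1 if i > 0 else -1
        if any(content[a] < content[b]
               for a in range(1, r) for b in range(a + 1, r + 1)):
            return False
    return True

def is_balanced(word, r=4):
    """Balanced = lattice word whose final content is the same in every row,
    i.e. the corresponding fluctuating tableau is rectangular."""
    if not is_lattice_word(word, r):
        return False
    content = [0] * (r + 1)
    for letter in word:
        for i in letter:
            content[abs(i)] += 1 if i > 0 else -1
    return len(set(content[1:])) == 1

print(is_lattice_word([{1}, {2}, {3}, {4}]), is_balanced([{1}, {2}, {3}, {4}]))  # True True
print(is_lattice_word([{2}, {1}]))                                               # False
print(is_balanced([{1, 2}, {3, 4}]))                                             # True
```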
Consider a word L = w_1 โฏ w_n in the alphabet ๐_r. Define
* ฯ(L) = w_nโฏw_1;
* ฯ(L) = ฯ(w_1) โฆฯ(w_n), where ฯ(w) replaces each element i in w with -(i)(r-|i|+1); and
* ฮต(L) = ฮต(w_n) โฆฮต(w_1), where ฮต(w) replaces each element i in w with (i)(r-|i|+1).
Consider μ →^c λ, and write λ = μ + 𝐞_{{i_1 < ⋯ < i_{|c|}}}. The oscillization of μ →^c λ is the sequence
μ → μ^1 → ⋯ → μ^{|c|-1} → λ,
where μ^j = μ^{j-1} + 𝐞_{i_j}, with μ^0 ≔ μ and μ^{|c|} = λ.
Given a fluctuating tableau T = (0 = λ^0 →^{c_1} λ^1 →^{c_2} ⋯ →^{c_n} λ^n), its oscillization is the concatenation of the oscillizations of the steps λ^{i-1} →^{c_i} λ^i for 1 ≤ i ≤ n. A fluctuating tableau is in the image of oscillization if and only if it is of oscillating type.
Oscillization may be described easily in terms of lattice words: the lattice word of the oscillization of T is obtained from L(T) by simply erasing the braces, writing the elements of each letter in increasing order, and keeping the same initial and final shape.
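In the same encoding as above, oscillization of a lattice word is just a flattening of its letters; note that we sort by signed value, so a negative letter contributes its most negative entry first — adjust if the intended convention differs.

```python
def oscillize(word):
    """Erase the braces: replace each set-letter by its elements,
    written as singleton letters in increasing (signed) order."""
    flat = []
    for letter in word:
        for i in sorted(letter):
            flat.append({i})
    return flat

print(oscillize([{1, 2}, {-4, -3}]))   # [{1}, {2}, {-4}, {-3}]
```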
We now define promotion on fluctuating tableaux; see <cit.> for detailed examples and further discussion.
Fix 1 โค i โค n-1. Let T โ(c) be a fluctuating tableau whose diagram is labeled by 1 < 2 < โฏ < i-1 < โ < i < โฏ and their negatives.
If c_i ยท c_i+1โค 0, for each row R which does not contain any of i, โ, i, or โ, we identify a cell _R in R and call it open. Let j be the largest absolute value of an entry in R less than i, if any exist.
* If c_i โฅ 0, let _R be the cell immediately right of the cell containing j, or the cell containing j, or if j does not exist then let _R be first cell of row R.
* If c_i โค 0, let _R be the cell containing j, or the cell immediately left of the cell containing j, or otherwise the first negative cell of R.
The jeu de taquin slides for _i-1 are the following. See <cit.> for schematic diagrams.
* โ's first move right by swapping places with i's, then move down as far as possible by swapping places with i's.
* โ's first move left by swapping places with i's, then move up as far as possible by swapping places with i's.
* โi pairs move down as far as possible into open cells, and then they move left one column.
* โi pairs move up as far as possible into open cells, and then move right one column.
All other entries are left unchanged. This results in a fluctuating tableau whose diagram is labeled by 1 < 2 < โฏ < i < โ < i+1 < โฏ and their negatives.
Let T be a length n fluctuating tableau.
* The promotion (T) is the result of replacing ยฑ 1's with ยฑโ's, sequentially applying jeu de taquin slides _1, โฆ, _n-1, replacing ยฑโ's with ยฑ (n+1)'s, and subtracting 1 from each entry's absolute value.
* The evacuation (T) is the result of first replacing ยฑ 1's with ยฑโ_n's, sliding them past ยฑ n's, replacing ยฑ 2's with ยฑโ_n-1's, sliding them past ยฑ n's, etc., and finally replacing ยฑโ_i's with ยฑ i's.
* The dual evacuation (T) is the result of first replacing ยฑ n's with ยฑโ_1's, sliding backward past ยฑ 1's, replacing ยฑ (n-1)'s with ยฑโ_2's, sliding backward past ยฑ 1's, etc., and finally replacing ยฑโ_i's with ยฑ i's.
We define promotion permutations as follows. See <cit.> for a more general construction and further details.
Let T be an r-row fluctuating tableau of type c and let 1 ≤ i ≤ r-1. The i-th promotion permutation prom_i(T) is defined by prom_i(T)(b) ≡ |a| + b - 1 (mod n) if and only if a is the unique value that crosses the boundary between rows i and i+1 when promotion is applied to the (b-1)-st promotion of the oscillization of T.
We write prom_•(T) for the tuple of these promotion permutations.
Let ฯ = (1 2 โฏ N) be the long cycle and let w_0 = (1, N)(2, N-1)โฏ be the longest element in the symmetric group ๐_N. The following collects the key results we need on promotion permutations.
Let Tโ(r,c), where |c_1| + โฏ + |c_n| = N. Then for all 1 โค i โค r-1:
* _i(T) is a permutation in ๐_N,
* _i(T) = _r-i(T)^-1,
* _i((T)) = ฯ^-|c_1|_i(T) ฯ^|c_1|,
* _i((T)) = w_0 _i(T) w_0.
ยง.ยง The separation labeling
To connect to fluctuating tableaux, we now define an intrinsic labeling of contracted fully reduced hourglass plabic graphs.
Recall from <Ref> the notion of a (proper) labeling, which we now apply to hourglass plabic graphs rather than webs. A labeling assigns to each m-hourglass an m-subset of {1,2,3,4}.
Let G โ(c). The base face of G is the face F_0 incident to the boundary segment between b_n and b_1. We define the separation labeling of G as follows:
* For e a simple edge of G, let F(e) be the face incident to e which is on the right when traversing e from the black vertex to the white vertex, and let โ_1,โ_2,โ_3 be, respectively, the _1-,_2-, and _3-strands traversing e in this direction. If exactly a of โ_1,โ_2,โ_3 separate F(e) from F_0, then set (e)={a+1}. We often omit braces when writing singleton sets.
* For an hourglass e, let v be either endpoint, and define
(e) = {1, 2, 3, 4} ∖ ⋃_{e' ∋ v, e' ≠ e} (e').
See <Ref> for an example.
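The second bullet is a simple complementation, as in the following small sketch (our own encoding of labels as Python sets).

```python
def hourglass_label(simple_edge_labels):
    """Given the separation labels of the simple edges at an endpoint of an
    hourglass, the hourglass receives the complementary set in {1, 2, 3, 4}."""
    used = set().union(*simple_edge_labels)
    return {1, 2, 3, 4} - used

print(hourglass_label([{3}, {4}]))   # {1, 2}
```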
Let G โ(c).
Then the separation labeling is well defined and proper.
The separation labeling is clearly well defined on simple edges, so it suffices to show that (<ref>) does not depend on the choice of v.
Consider the four simple edges e_1,e_2,e_3,e_4 adjacent to the two endpoints of an hourglass; see the left side of <Ref> for notation we will use. To compute (e_1), we must determine which of s_2,โ_2,โ_3 separate the face F_1 from the base face F_0. Since G is fully reduced, and hence monotonic by <Ref>, the _2-strands s_1,s_2 do not recross, nor do any _1-strands retouch either _2-strand. Thus we see that s_2,โ_3 separate F_1 from F_0, and that โ_2 separates the two faces if and only if it reaches the boundary to the right of F_0. Similarly, computing (e_2), we see that s_1,โ_4 separate F_2 from F_0, and that โ_2 does so if and only if it ends to the left of F_0. Thus we conclude that {(e_1),(e_2)}={3,4}. A similar analysis shows {(e_3),(e_4)}={3,4}, so that the hourglass may be consistently labeled by {1,2}; moreover, the resulting labeling is proper around these vertices. The cases differing from <Ref> by a color reversal or the relative location of F_0 are analogous.
It remains to verify that gives a proper labeling around each simple vertex; see the right side of <Ref>. It is easy to see that F_2 and F_0 are separated by s_2,โ_2 and that F_4 and F_0 are separated by s_1,โ_3. Furthermore, exactly one of F_2,F_4 is separated from F_0 by โ_4. Similarly, exactly one of F_1,F_3 is separated from F_0 by โ_1. Thus {(e_1),โฆ,(e_4)}={1,2,3,4} and is a proper labeling.
ยง.ยง From labelings to lattice words
We now show that the separation labeling of a contracted fully reduced hourglass plabic graph gives rise to a rectangular fluctuating tableau.
Given G โ(c), with boundary vertices b_1,โฆ,b_n, incident to edges e_1,โฆ,e_n, respectively, the separation word (G)=w_1 โฆ w_n is the word of type c given by setting w_i=(b_i)(e_i) for i=1,โฆ,n. That is, (G)=โ().
For a permutation σ ∈ 𝔖_n, let aexc(σ) ≔ {i : σ^{-1}(i) > i} denote the set of antiexcedances of σ.
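In computational terms (our own sketch):

```python
def antiexcedances(perm):
    """perm: dict giving sigma as a map on {1, ..., n} (1-indexed).
    Returns the set {i : sigma^{-1}(i) > i}."""
    inv = {v: k for k, v in perm.items()}
    return {i for i in perm if inv[i] > i}

# sigma = (1 3 2) in cycle notation, i.e. 1 -> 3, 3 -> 2, 2 -> 1:
print(antiexcedances({1: 3, 2: 1, 3: 2}))   # {1, 2}
```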
Let G โ(o). Then:
(e_i) = 1+| { a : i โ(_a(G)) }|, if b_i is black;
1+| { a : i โ(_a(G)) }|, if b_i is white.
If b_i is black, then the _4-a(G)(i)-strand separates F(e_i) and F_0 when _4-a(G)(i)<i. This is equivalent to i โ(_a(G)). The other case is similar.
Let G โ(o). If b_i is black (resp. white), then b_i, _1(b_i), _2(b_i), and _3(b_i) appear in clockwise (resp. counterclockwise) order. Some of these vertices may coincide.
This claim follows by using <Ref> to track the colors of the vertices at which the other strands diverge from the _2(b_i)-strand.
Let G โ(c). Then (G) is the lattice word of a rectangular fluctuating tableau ๐ฏ(G) โ(c).
We may assume without loss of generality that c is oscillating, since ((G)) being a lattice word implies the same for (G). By <Ref>, the separation labeling is proper, so (G) is balanced.
We proceed by induction on the number v of internal vertices in G, the theorem being true if v = 0, since in that case (G) is an appropriately nested collection of pairs of 1 and 1̄ and of 4̄ and 4, by <Ref>.
If v>0, let โ denote the _2-strand incident to b_1, and suppose (b_1)=1; otherwise we may reverse all colors, which has the effect of applying the involution ฯ to (G), preserving lattice words. We divide G into smaller hourglass plabic graphs C_1,C_2,C_3 by drawing contours c_1, c_2, c_3, which form the boundaries of the C_i, as follows (see <Ref>):
(1) c_1 starts in the boundary face between b_1 and b_2 and includes all internal vertices to the right of โ;
(2) c_2 starts in the base face and includes the internal vertices of G which lie on โ;
(3) c_3 starts in the base face and includes the internal vertices of G which lie to the left of โ.
Each C_i has boundary vertices on the cut edges of c_i, whose color is determined by bipartiteness. These boundary vertices are numbered so that the starting point of c_i is taken as the base face of C_i.
If all internal vertices of G lie on โ, we call G a backbone; this case is treated separately in <Ref>. Otherwise, by induction, we know that (C_i) is a balanced lattice word for i=1,2,3.
Write '(C_i) for the balanced separation word obtained by using the separation edge labels of G, rather than the intrinsic separation labels used to compute (C_i). Since G is fully reduced and therefore monotonic by <Ref>, and since C_2 and C_3 have the same base face as G, it follows by <Ref> and <Ref> that '(C_2)=(C_2) and '(C_3)=(C_3), the key point being that _i-strands do not reenter any of the contours after having left. Write '(C_1)=u'_1โฆ u'_m and (C_1)=u_1โฆ u_m. In the same way, we see that u'_i=u_i except when i=j,k where j=_1(G)(b_1) and k is the boundary edge of C_1 through which the _1(G)(b_1)-strand passes.
By <Ref> and <Ref>, we have u'_j = 2 or 1̄ (according to the color of b_j), while u_j = 1 or 2̄, and u'_k = 2̄ while u_k = 1̄. Note that we have w_1 = 1, since (b_1) = 1. Consider the word
z = 1 ⋅ '(C_1) ⋅ 1̄ ⋅ (C_2) ⋅ (C_3),
where ⋅ denotes concatenation. In the example of <Ref>, we have
z = 1(1232121341̄42̄)1̄(124̄14̄34441̄)(14̄4̄4̄444234).
Since (C_1) is a balanced lattice word by induction, so is 1 ⋅ '(C_1) ⋅ 1̄. Thus, since concatenation preserves lattice words, z is also a balanced lattice word. By construction, we may re-parenthesize z as
z = 112321213(41̄42̄1̄124̄14̄)3(4441̄14̄4̄4̄)444234,
grouping together the reversed and sign-reversed substrings coming from the overlaps (with reversed orientations) of c_1 and c_2 and of c_2 and c_3. We may remove such consecutive balanced substrings from a lattice word and obtain another lattice word z' = 1123212133444234. By construction, we have z' = (G), so (G) is a balanced lattice word, as desired.
Suppose G โ(o) is a backbone (as in the proof of <Ref>). Then (G) is a lattice word.
Suppose that (b_1)=1; otherwise apply ฯ. Let i_a = _a(G)(1) for a=1,2,3. By <Ref>, we have (b_i_a)=1 for a=1,2,3 and by <Ref> and <Ref> we have (b_i_a)=a+1 for a=1,2,3.
Let w' be the word obtained from (G) by deleting w_1, w_{i_1}, w_{i_2}, and w_{i_3} (equal to 1, 2, 3, and 4, respectively). Then w' is a sequence of 1's and 4̄'s followed by a sequence of 1̄'s and 4's, and is balanced; any such word is a lattice word. It follows that (G) is also a lattice word.
ยง.ยง Separation words, promotion, and evacuation
We are now able to relate promotion and evacuation of fluctuating tableaux to rotation and reflection of hourglass plabic graphs.
Given an hourglass plabic graph G, let (G) be the graph obtained by rotating G one step counterclockwise with respect to the labeling b_1,โฆ,b_n of the boundary. Let (G) denote the graph obtained by reflecting G with respect to the diameter of the boundary circle passing between b_n and b_1.
Recall that we often identify a fluctuating tableau T with its lattice word L(T). In particular, we write (L(T)) for L((T)), and likewise for .
Let G ∈ (c). Then the separation word of the rotation of G is the promotion of (G), and the separation word of the reflection of G is the evacuation of (G).
Let (G) = w_1 … w_n be the separation word of G. The claim for reflection is straightforward: evacuation <cit.> and reflection both have the effect of applying the involution ε to (G).
For a=1,2,3, let i_a = _a(G)(1). Since _i-strands rotate along with the graph, it is clear by <Ref> that ((G))_j-1 = (G)_j (with indices taken modulo n) unless j โ{1,i_1,i_2,i_3}. Thus, by the first balance point characterization of promotion <cit.>, we need to show that i_a is the smallest index weakly greater than i_a-1 such that w_1 โฆ w_i_a has balanced a's and a+1's.
We prove the case a=1, the others being similar. Let C_0 denote the hourglass plabic graph inside the contour c_0 encircling all interior vertices of G that lie to the left of the _1-strand ℓ starting at b_1 (with respect to its orientation), with base face taken to be the boundary face between b_1 and b_2. Let (C_0) = u_1 … u_m; we wish to compare w_2 … w_{i_1-1} with u_1 … u_{i_1-2}.
First, notice that for j > i_1-2, we have |u_j| ∈ {3,4}, since these are obtained from the separation labeling of the boundary edges e' of C_0 that are incident to ℓ. The corresponding boundary vertices b' of C_0 are black, as otherwise ℓ would have turned left to follow e'. Furthermore, since G (and so C_0) is fully reduced, the _2-strand passing through e' must end at one of the genuine boundary vertices b_2, …, b_{i_1-1}, which have a smaller index in C_0. Thus, by <Ref> and <Ref>, we have _C_0(e') ≥ 3. This implies that, since (C_0) is balanced, the prefix u_1 … u_{i_1-2} must have balanced 1's and 2's.
Now, let b_j be among b_2,โฆ,b_i_1-1 with incident edge e_j. Since G is fully reduced and therefore monotonic by <Ref>, the _2-strand through b_j may not exit and reenter the subgraph C_0. This implies _C_0(b_j) โ{1,2} if and only if _G(b_j) โ{1,2}. Together with the fact that u_1โฆ u_i_1-2 has balanced 1's and 2's, this implies that w_2 โฆ w_i_1-1, and therefore also w_1 โฆ w_i_1, has balanced 1's and 2's. If (G) had balanced 1's
and 2's for some shorter prefix, this would contradict the fact that (C_1), analyzed in the proof of <Ref>, is a lattice word.
In <Ref>, we have developed the theory of fully reduced hourglass plabic graphs in order to define the map ๐ฏ:(c)/โผโ(c) (<Ref>) and show (<Ref>) that it intertwines rotation with promotion. Our web basis โฌ_q^c for _U_q(_4)(โ_q^c V_q) will consist of a special representative W from each equivalence class in (c)/โผ. In <Ref>, we develop growth rules in order to show that the invariants [W]_q are linearly independent and to define a map ๐ข:(c) โ(c)/โผ. In <Ref>, we show that ๐ฏ and ๐ข are mutually inverse bijections and therefore (by <Ref>) that โฌ_q^c is the desired basis.
§ GROWTH RULES FROM CRYSTALS
§.§ The growth algorithm
We now give an algorithm which takes as input the lattice word of a rectangular fluctuating tableau with r=4 and constructs a corresponding fully reduced hourglass plabic graph; see <Ref>. While there are 14 growth rules in the r=3 case in <cit.>, our r=4 algorithm involves 88 short growth rules of length 2 or 3 (falling into 10 families), with 2 of those families extending to long growth rules of arbitrary length; see <Ref>. Our proof primarily involves carefully tracking trip and promotion permutations simultaneously by using a novel crystal-theoretic algorithm, which we introduce; see <Ref>.
For G=๐ข(T), the growth labeling ฮ(G) is the proper labeling of G obtained by applying the growth rules and ignoring bars on the labels. Note that โ(ฮ(G)) = w.
At each application of (G2), reading the remaining dangling strands from left to right gives a balanced lattice word w. The algorithm proceeds by selecting certain subwords w_{j+1} ⋯ w_{j+p} of w and creating either a new oriented crossing or a "∪" (an end cap) by, respectively, crossing or joining two adjacent strands; all other strands are extended further down without other modification. This process results in a new dangling lattice word w' obtained from w by substituting a subword w'_{j+1} ⋯ w'_{j+q} for w_{j+1} ⋯ w_{j+p}.
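The outer loop of the growth algorithm is mechanically simple even though the rule table is large. The Python sketch below illustrates only the substitution mechanics of step (G2); the actual table of 88 short rules and two long-rule families, the choice of orientation, and the witness bookkeeping live in the referenced figures and are not reproduced here, and all names are ours.

def growth_step(word, rules):
    # One pass of (G2): find an applicable rule lhs -> rhs and substitute it.
    # `word` is the current dangling lattice word as a tuple of letters and
    # `rules` maps a left-hand subword to its replacement.
    for lhs, rhs in rules.items():
        p = len(lhs)
        for j in range(len(word) - p + 1):
            if word[j:j + p] == lhs:
                return word[:j] + rhs + word[j + p:], (j, lhs, rhs)
    return None  # no rule applies

# toy call using the one short rule quoted in the example below (321 -> 411);
# the input word is arbitrary and only demonstrates the substitution itself
print(growth_step((1, 3, 2, 1, 4), {(3, 2, 1): (4, 1, 1)}))
# -> ((1, 4, 1, 1, 4), (1, (3, 2, 1), (4, 1, 1)))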
Let w = 114213{4,3,2,1}311{2,1}4. Begin with dangling strands labeled by (w). First apply growth rule 321 → 411, then apply 44 → ∅ and 11 → ∅, etc. See <Ref> for the resulting six-vertex configuration and the final fully reduced hourglass plabic graph G, with its growth labeling.
The following is the main result of this section.
Applying the growth algorithm (<Ref>) to T โ(c) always terminates in finitely many steps at an hourglass plabic graph G = ๐ข(T) โ(c). Moreover,
* G is fully reduced,
* _โ(G) = _โ(T),
* ฮ(G) is the unique proper labeling of G with boundary w=L(T),
* all other proper labelings ฯ of G have โ(ฯ) >_ w, and
* i โ(w) (see <Ref>) if and only if boundary vertices b_i and b_i+1 of G are connected to the same vertex.
While any sequence of choices in <Ref> eventually results in some hourglass plabic graph G, in contrast to the growth rules in <cit.> for the case r=3, the resulting G is not independent of the choices made. However, by <Ref> and <Ref>(i)โ(ii), all choices produce hourglass plabic graphs in the same move-equivalence class. We will later see in <Ref> that every move-equivalence class of (c) has an element produced by the growth algorithm. Furthermore, every element of this move-equivalence class has a unique -minimal proper labeling with boundary w. The rules presented here do not in general produce all elements of all move-equivalence classes, though they always produce at least one.
The proof of <Ref> occupies the remainder of <Ref>. It may be skipped by a reader willing to accept <Ref> as a black box; however, we believe the crystal-theoretic techniques developed here may be of independent interest.
§.§ Plumbings, promotion appliances, and good degenerations
We use the notions in <Ref> to track partially defined trip and promotion permutations through each step of the growth algorithm. See <Ref> for an example.
A plumbing from set B' to set B is a tuple ฯ_โ = (ฯ_1, โฆ, ฯ_r-1) of bijections ฯ_i B' โ B โ B' โ B, such that ฯ_s^-1 = ฯ_r-s for all s โ [r-1]. The transpose ฯ_โ^โค of ฯ_โ is the plumbing defined by ฯ_s^โค = ฯ_r-s. The composite of plumbings ฯ_โ from Bโ to B' and ฯ_โ from B' to B is the plumbing ฯ_โโฯ_โ from Bโ to B where (ฯ_โโฯ_โ)_s Bโโ B โ Bโโ B sends each i โ B to the first value of โฏโฯ_s โฯ_s โฯ_s(i) that is in Bโโ B and sends each i โ Bโ to the first value of โฏโฯ_s โฯ_s โฯ_s(i) that is in Bโโ B.
An appliance g_โ = (g_1, โฆ, g_r-1) on A โ B โ C is a tuple of functions g_i B โ A โ B โ C, such that g_s(i) = j โ g_r-s(j) = i for all s โ [r-1] and i, j โ B. The transpose g_โ^โค of g_โ is the appliance defined by g_s^โค = g_r-s. A plumbing ฯ_โ from B' to B acts on an appliance g_โ on A โ B โ C on the right and results in an appliance g_โยทฯ_โ on A โ B' โ C where (g_โยทฯ_โ)_s sends each i โ B' to the first value of โฏโฯ_s โ g_s โฯ_s(i) that is in A โ B' โ C.
Finally, if h A โ C โ A' โ C' is any function, then h ยท g_โ is the appliance on A' โ B โ C' where (h ยท g_โ)_s = ศโ g_s where ศ A โ B โ C โ A' โ B โ C' is the extension of h which fixes B.
We visualize a plumbing ฯ_โ from B' to B as a network of connections from a line of nodes B' at the top to nodes B at the bottom as in <Ref>. We think of inputs i to ฯ_s as inputs to the network, initially headed downward for i โ B' and upward for i โ B. Outputs i of ฯ_s are outputs of the network, headed upward for i โ B' and downward for i โ B. Plumbings are composed by concatenating networks. An appliance g_โ on A โ B โ C is visualized as a line of nodes from A, B, C. Inputs i โ B to g_s initially head downward through a network and finish by heading upward into A, B, or C. Plumbings act on on appliances also by concatenation. The actions satisfy g_โยท (ฯ_โโฯ_โ) = (g_โยทฯ_โ) ยทฯ_โ and (h ยท g_โ) ยทฯ_โ = h ยท (g_โยทฯ_โ).
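The composition rule — follow the wires, alternating between the two plumbings each time the trajectory lands on the interface, and stop at the first value lying on the outer boundary — can be made concrete as follows. The dict encoding, the function name, and the example are ours; the sketch treats a single component s and ignores the transpose conventions.

def compose_component(phi, psi, B2, B1, B0):
    # phi: bijection on B2 | B1 (a plumbing from B2 to B1), as a dict;
    # psi: bijection on B1 | B0 (a plumbing from B1 to B0), as a dict.
    # A point of B2 enters heading down and a point of B0 enters heading up;
    # the maps are applied alternately until the trajectory first returns
    # to B2 | B0, which is the recorded value of the composite.
    out = {}
    for start in list(B2) + list(B0):
        use_phi = start in B2
        x = start
        while True:
            x = phi[x] if use_phi else psi[x]
            if x in B2 or x in B0:
                break
            use_phi = not use_phi  # landed on the interface B1: switch maps
        out[start] = x
    return out

# two wires crossing at the interface
phi = {'a': 'n', 'b': 'm', 'm': 'b', 'n': 'a'}
psi = {'m': 'x', 'n': 'y', 'x': 'm', 'y': 'n'}
print(compose_component(phi, psi, {'a', 'b'}, {'m', 'n'}, {'x', 'y'}))
# -> {'a': 'y', 'b': 'x', 'x': 'b', 'y': 'a'}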
A plumbing that arises from the trip permutations of a portion of an hourglass plabic graph (or a six-vertex configuration) is realizable. A plumbing without such a realization is called virtual. There are natural notions of the identity plumbing ๐ and of the product of two plumbings, which is given by horizontal concatenation and denoted by ร. For a rectangular fluctuating tableau T or an hourglass plabic graph G, _โ(T) and _โ(G) can be thought of as appliances from [n] to ∅ ⊔ [n] ⊔ ∅.
The substitutions allowed in the growth algorithm are of the following sort. The growth rules in <Ref> will be our main examples.
A good degeneration is a substitution v_1 โฏ v_p โ v_1' โฏ v_q' where v_i, v_i' โ๐_r, together with a plumbing ฯ_โ from [p] to [q], such that the following hold for every balanced lattice word w = w_1 โฏ w_n:
* Replacing a consecutive substring v_1 โฏ v_p in w with v_1' โฏ v_q' results in a new balanced lattice word w' = w_1 โฏ w_j v_1' โฏ v_q' w_j+p+1โฏ w_n.
* We have
_โ(w) = _โ(w') ยทฯฬ_โ,
where the plumbing ฯฬ_โ = ๐^j รฯ_โร๐^n-j-p corresponds to the replacement w โ w'.
Good degenerations have a natural 4-fold symmetry. Recall the operations on words ฯ,ฯ,ฯต from <Ref>. These operations are applied to hourglass plabic graphs or a realizable plumbings as follows: ฯ acts by inverting the color of all vertices; ฮต acts by reflection through the vertical axis; and ฯ acts by both inversion and reflection. The following is a simple consequence of the interaction between these involutions and _โ(T) using <cit.>.
Suppose v โ v' is a good degeneration with (realizable) plumbing ฯ_โ. Then ฮน(v) โฮน(v') is a good degeneration with (realizable) plumbing ฮน(ฯ_โ) for all ฮนโ{ฯ, ฯ, ฮต}.
The 10 families of growth rules in <Ref> are orbits under this 4-fold symmetry, where the left/right symmetry is ฯ, the top/bottom symmetry is ฮต, and the diagonal symmetry is ฯ. The 14 growth rules in <cit.> can similarly be grouped into 5 families, two of size 4 and three of size 2.
§.§ A crystal-theoretic good degeneration algorithm
We now provide an algorithm for establishing that a given substitution satisfies (<ref>). Our proof relies on Kashiwara's theory of crystals <cit.>. See <cit.> for background and conventions; our conventions mostly match those of the textbook <cit.>, except that we follow <cit.> in using the original tensor product convention, which is opposite to that of <cit.>.
Suppose that v is a word on ๐_r and that uvw is a balanced lattice word. The promotion appliance associated to the inclusion v โ uvw is the appliance ฯ_โ(u, v, w) on A โ B โ C = {1, โฆ, |u|}โ{|u|+1, โฆ, |u|+|v|}โ{|u|+|v|+1, โฆ, |u|+|v|+|w|} defined by
ฯ_s(u,v,w)(i) = _s(uvw)(i) for i โ B.
The appliance ฯ_โ(u, v, w) encodes the rows of the promotion matrix (uvw) (defined as in <cit.>) corresponding to entries from v. We will see that the following crystal-theoretic construction is well-defined and corresponds to the columns of (uvw) with entries from v. Let โฌ(โ^c V) be the U_q(_r)-crystal of words associated to the representation โ^c V. The highest-weight elements of โฌ(โ^c V) are the lattice words of type c and the balanced lattice words are the highest-weight elements of weight zero. All crystals we consider will consist of unions of connected components of such crystals.
Let v be a word in a connected crystal with highest-weight element v^โ, lowest-weight element v^โ, and |v| = b. Fix an increasing and a decreasing path
(v) = v^0 e_s_1 v^1 e_s_2โฏe_s_a v^a = (v^โ),
(v) = u^c f_t_c u^c-1f_t_c-1โฏf_t_1 u^0 = (v^โ).
Let eโ = (s_1, โฆ, s_a), fโ = (t_c, โฆ, t_1). The crystal appliance associated to these paths is the appliance ฯ_โ(eโ, v, fโ) on {1, โฆ, a}โ{a+1, โฆ, a+b}โ{a+b+1, โฆ, a+b+c} defined as follows:
* If v^i = e_s(v^i-1) is applied at the j-th letter, set ฯ_s(a+j) = i.
* If u^i-1 = f_s(u^i) is applied at the j-th letter, set ฯ_s(a+j) = a+b+i.
* If _s(v^โฮต(v^โ))(j) = i where 1 โค i, j โค b, set ฯ_s(a+j) = a+i.
Let uvw be a balanced lattice word. Then there exist eโ, fโ, and h such that
ฯ_โ(u, v, w)^โค = h ยทฯ_โ(eโ, v, fโ).
Here |u|=a, |v|=b, |w|=c, eโโ [r-1]^a', fโโ [r-1]^c', and h {1, โฆ, a'}โ{a'+b+1, โฆ, a'+b+c'}โ{1, โฆ, a}โ{a+b+1, โฆ, a+b+c} is the disjoint union of two weakly increasing functions.
Since _s(uvw) = _r-s(uvw)^-1, ฯ_โ(u, v, w)^โค corresponds to the columns of v in (uvw). These columns may be computed crystal-theoretically as follows. The crystal raising algorithm (see <cit.>) computes ((uvw)) by cutting off the first letter, applying raising operators to reach the highest weight element, and appending the unique letter which yields the original weight. By <cit.>, (uvw)_i,i+j contains s if and only if the raising operator e_s is applied at index j when applying to ^i-1((uvw)), where i+j is taken modulo |uvw|. Equivalently, _s(uvw)(i) = i+j. We illustrate phases of this operation in <Ref>.
During the raising algorithm, the subword corresponding to (v) shifts leftward as initial letters are repeatedly cut off. After the initial phase I consisting of a promotion steps, this subword has shifted to the beginning and has become a highest-weight word. Each promotion step computes one row of (uvw). Raising operators applied outside of the subword do not appear in the columns of v in (uvw) and may be ignored. If a' raising operators are applied to the subword during phase I, we have a path (v) โ(v^โ) in the connected crystal of (v) with a' steps, by the associativity of the crystal tensor product. If e_s is applied at the j-th letter of the subword during step i' of the crystal path, then e_s is applied at the (a+j)-th letter of the original word when computing some row i of (uvw). In this case, ฯ_s(a+j) = i' and _s(uvw)(i) = a+j. The function i' โฆ i is weakly increasing and fulfills (<ref>) for these values.
After phase I, the next phase II consists of b promotion steps. It computes the upper triangle of the rows and columns of v in (uvw). This calculation depends only on the initial b letters, namely (v^โ). Hence _s(uvw)(a+i) = a+j if and only if _s(v^โ v')(i) = j for 1 โค i < j โค b, where v' is any word such that v^โ v' is balanced, so in particular for v' = ฮต(v^โ). Thus (<ref>) also holds for these values. Note that when any particular letter of (v) reaches the beginning of the lattice word in phase II, it has been transformed into 1 or r.
Phases III and IV, as illustrated in <Ref>, determine the other half of the columns of v in (uvw). We may imagine the promotion algorithm is applied in reverse from the bottom up since ^n(uvw) = uvw, in which case phase III produces a decreasing path (v) โ(v^โ) of length c'. During phase IV, the relevant lower triangle of (v^โฮต(v^โ)) agrees with that of (uvw) by <cit.>. Thus (<ref>) holds.
Note that if we track the letters in a particular column of v in (uvw) through this process from phase II to I to III to IV, we have 1 โโฏโ r or rโโฏโ1, so ฯ_โ(eโ, v, fโ) is indeed well-defined. The same argument applies even if eโ and fโ have not necessarily come from some uvw, so crystal appliances are well-defined in general.
The following result is our main tool for proving that a particular substitution is a good degeneration and, in turn, for proving the validity of the growth rules. Our setup applies for general r and in particular can be used to easily re-verify the r=3 growth rules of KhovanovโKuperberg <cit.>. In this paper we focus on the case r=4, but we aim in future work to apply these techniques for higher r (and in other Lie types).
Let v, v' be words on ๐_r with B โ B' ={1,โฆ,|v|}โ{|v|+1,โฆ,|v|+|v'|}. Suppose there is an U_q(_r)-crystal isomorphism sending v to v'. If there is a plumbing ฯ_โ from B to B' such that
ฯ_โ(eโ, v, fโ)^โค = ฯ_โ(eโ, v', fโ)^โคยทฯ_โ
for all eโ, fโ, then v โ v' with plumbing ฯ_โ is a good degeneration.
Suppose uvw is a balanced lattice word. Since v and v' are corresponding vertices in isomorphic crystal graphs, uv'w is a balanced lattice word. Moreover, from the proof of <Ref> and <Ref>, the portions of (uvw) and (uv'w) corresponding to rows or columns from u or w are the same. Hence (<ref>) holds here directly. By symmetry, it now suffices to consider only the rows corresponding to v.
Let _โ(uvw)|_v denote the appliance restricting the domain to entries corresponding to v. By <Ref> and (<ref>),
_โ(uv'w)|_v'ยทฯ_โ = ฯ_โ(u, v', w) ยทฯ_โ = h ยทฯ_โ(eโ, v', fโ)^โคยทฯ_โ
= h ยทฯ_โ(eโ, v, fโ)^โค = ฯ_โ(u, v, w)
= _โ(uvw)|_v.
Here the fact that there is a crystal isomorphism sending v to v' implies that eโ, fโ, and h from <Ref> are the same. We now have (<ref>).
For any proposed substitution v → v', <Ref> reduces the a priori infinite list of conditions in (<ref>) to a finite calculation. We summarize part of this calculation for the good degeneration 324 → 313 with the following realizable plumbing:
[graphics omitted]
First, we compute the portions of the crystals above and below 324 and 313 and pick corresponding paths to the highest and lowest weight elements:
[graphics omitted]
Each path yields a possible column in (uvw). We then compute the crystal appliances and verify that applying ฯ_โ to ฯ_โ(eโ, v', fโ)^โค yields ฯ_โ(eโ, v, fโ)^โค:
[graphics omitted]
The 88 short rules of length 2 or 3 in <Ref> are good degenerations.
In each case, the result follows from <Ref> exactly as in <Ref>. The calculations are lengthy, routine, and omitted. Using <Ref> reduces the labor considerably.
§.§ Long good degenerations
We now prove that the two families of long growth rules in <Ref> are also good degenerations. Our argument will involve the plumbings in <Ref>.
The following are good degenerations, which are their own inverses.
* For plumbing : 32 โ 23, 22 โ 33, 23 โ 32, 31โ 24, 21โ 34, 32โ 14, 42โ 13, 43โ 12.
* For plumbing ^+-_-+: 12โ21, 21โ12, 22โ11.
* For plumbing ^-+_+-: 34 โ 43, 43 โ 34, 33 โ 44.
This is a direct verification using <Ref>.
Strictly speaking, the plumbings in <Ref>(ii) and (iii) are only realizable in one direction. For instance, the substitution 21 → 12 would normally be expected to be associated to the plumbing ^-+_+- in order for the six-vertex edge directions to be consistent with the label signs, though this is not in fact a good degeneration. Good degenerations are typically not invertible, since plumbings do not in general possess inverses; however, these good degenerations are invertible.
The following are good degenerations, for all k โโค_โฅ 0.
* For plumbing ^โ_++ร๐^k+1: 322^k4 โ 412^k4.
* For plumbing ^++_++ร๐^k+1: 142^k4 โ 412^k4.
* For plumbing ^++_โร๐^k+1: 232^k4 โ142^k4.
* For plumbing ฮถ^(k): 412^k4 โ322^k4.
Moreover, the composite ฮถ^(k)โ (^โ_++ร๐^k+1) acts as the identity on the appliance ฯ_โ(eโ, 322^k4, fโ)^โค.
Consider (i). In <Ref>, we depict finite state machines representing all elements reachable from 322^k4 by e_i's or by f_i's, together with information equivalent to _โ(v^โฮต(v^โ)). One may check the correctness of <Ref> directly, by using the bracketing rule for the action of crystal raising and lowering operators. Likewise, <Ref> depicts equivalent information for the word 412^k4. We may read off promotion appliances from this data as in <Ref>. Thus (<ref>) holds in this case. The claim about the composite may also be verified directly from <Ref>.
The other verifications are virtually identical and the details are omitted. In each case the finite state machines from <Ref> are the same as for 142^k4, 232^k4, and 142^k4 aside from the first two and final letters, which may be read off from the k=0 case.
The growth rules in <Ref> are all good degenerations.
The short rules were handled in <Ref>. It remains to consider the two families of long rules in <Ref>. For these, we induct on length; the length-3 base cases are contained among the short rules. We focus on 14wx where w โ{3, 2}^* and x โ{2, 3, 4, 1}; the argument for 23wx is identical. First suppose x=2.
* If w=โฏ2, apply 14โฏ22 โ 14โฏ33โ 41โฏ33โ 41โฏ22. The first and third steps use the plumbings in <Ref>(i) and the second step uses induction. The first and third plumbings cancel each other.
* If w=โฏ3, apply 14โฏ32 โ 14โฏ23โ 41โฏ23โ 41โฏ32 likewise.
The x=3 and x=1 cases are extremely similar and are omitted.
Now suppose x=4. If w=โฏ3, the above argument again applies, so take w=โฏ2. If w contains no 3, the result is <Ref>(ii). Otherwise, w = โฏ322^k for some k โโค_โฅ 0. Now apply the substitutions 14โฏ322^k4 โ 14โฏ412^k4 โ 41โฏ412^k4 โ 41โฏ322^k4, where the first and third steps use <Ref> and the second step uses induction. While the plumbings from the two applications of <Ref> do not cancel, nonetheless their composite acts as the identity on the promotion appliances of 322^k4. Hence we may remove their composite from the structure entirely, completing the induction and proof.
§.§ Completeness of the growth rules
We now show that any balanced lattice word w of oscillating type with r=4 has at least one applicable growth rule. Recall the notation i from <Ref>, which represents either i or r-i+1. Note that the involutions ฯ and ฮต act on w by reversing and performing substitutions 1ฬโ4ฬ, 2ฬโ3ฬ. The composite ฯ fixes w. By inspecting <Ref>, we have the following.
At least one of the short growth rules in <Ref> applies whenever any of the following patterns appears in w:
1̃1̃4̃, 1̃2̃3̃, 1̃2̃4̃, 1̃3̃2̃, 1̃3̃3̃, 1̃3̃4̃, 1̃4̃4̃, 2̃2̃4̃, 2̃3̃4̃, 3̃2̃4̃.
At least one of the growth rules in <Ref> applies whenever any of the following patterns appears in w:
1̃4̃x4̃, 1̃x1̃4̃, 2̃3̃x4̃, 1̃x2̃3̃,
where x is a word consisting entirely of 2̃'s and 3̃'s.
The first two patterns and last two patterns are equivalent under ฯ. For 1̃4̃x4̃, apply ฯ to consider 14̃x4̃. A rule applies to 11, so we have 14x4̃. If x contains a 2 or 3, a long growth rule applies to the prefix ending at the leftmost 2 or 3 in x. Otherwise, a long growth rule still applies. The argument for 2̃3̃x4̃ is essentially identical.
At least one of the growth rules in <Ref> applies to any non-empty balanced lattice word w.
Consider w, which must begin with 1̃ and which must end with 4̃. Consider the rightmost 1̃. It is followed by a letter other than 1̃, so we consider the following three cases.
* ⋯1̃2̃⋯: this must be ⋯1̃2̃2̃^k3̃⋯ or ⋯1̃2̃2̃^k4̃⋯. In the first case, apply <Ref>. In the second case, we have ⋯1̃2̃4̃⋯ or ⋯2̃2̃4̃, so apply <Ref>.
* ⋯1̃3̃⋯: this must be ⋯1̃3̃2̃⋯, ⋯1̃3̃3̃⋯, or 1̃3̃4̃⋯, so apply <Ref>.
* ⋯1̃4̃⋯: since w is reverse lattice, w cannot end in 1̃4̃, so we have ⋯1̃4̃w4̃⋯ where w consists entirely of 2̃'s and 3̃'s, so apply <Ref>.
§.§ Linearized diagrams, nice labelings, and unitriangularity
Recall the -order from <Ref>. Our next goal is to show that the growth labeling ฮ(G) yields the unique grevlex-maximal monomial in [G]_q. We will work in somewhat more generality so that our results will apply to all G โ(c), rather than just those produced by the growth algorithm. We begin by considering diagrams similar to those appearing at the top of <Ref>.
A linearized diagram is a six-vertex configuration obtained by beginning with a row of directed dangling strands and at each step either (i) combining an adjacent pair of strands in a crossing, or (ii) combining an adjacent pair in a ∪, with all other strands extended directly down in either case, until there are no more dangling strands.
A signed proper labeling ฯ of a linearized diagram is a labeling of the edges by ±{1, …, 4} whose absolute value is a proper labeling (see <Ref>) and where the sign is positive for downward-pointing arrows and negative for upward-pointing arrows. The boundary โ(ฯ) of ฯ is the word of oscillating type obtained by reading off the topmost labels from left to right.
A nice labeling of a linearized diagram is a signed proper labeling where the labels on the crossings and ∪'s are those that appear in the crossings and ∪'s of the growth rules in <Ref>, plus those appearing in <Ref>.
The output G of the growth algorithm can be seen as a linearized diagram. The growth labeling ฮ(G) is a nice labeling without the configurations appearing in <Ref>. Strictly speaking, G also includes witness information, though linearized diagrams are more flexible and do not insist upon having witnesses. The additional configurations in <Ref> do not correspond to good degenerations, though we will find that they arise when applying Yang–Baxter moves to such G. Despite ignoring witnesses, linearized diagrams with nice labelings track more information than circular diagrams (such as in the lower-left of <Ref>), since they give a consistent set of orientations for each crossing and ∪.
The collection of possible nice labelings of crossings possesses the following properties.
Suppose ab → cd is the portion of a good degeneration from the growth rules in <Ref> corresponding to the crossing, or similarly one of the additional labelings in <Ref>. Then the following hold:
* Either ã=b̃=c̃=d̃, or ã < c̃ and cd >_ ab.
* Fixing labels ab at the top of the crossing and assigning labels cd at the bottom is the -maximal way to create a signed proper labeling at these vertices.
* Given a crossing properly labeled by pq at the top and ts at the bottom, if ts ≥_ cd, then pq ≥_ ab.
* If u, v are words of oscillating type where ucdv is a lattice word, then uabv is a lattice word with the same weight.
All properties may be checked case-by-case. For example, consider ab = 22 → 11 = cd. For (i), ã = 2̃ < 4̃ = c̃. For (ii), expanding the crossing in this case yields two internal vertices connected horizontally by an hourglass. Labeling the top strands with 2's forces any proper labeling to label both bottom strands with the same number. The -maximal way of doing so uses 1's at the bottom, as in the growth rule. For (iii), we must consider t=1, s=1, 2, 3, 4. Using s=1, proper labelings can have pq=22, 33, 44. Each of these satisfies pq ≥_ 22. All other cases are similar. For (iv), the mixed-sign cases like 22 → 11 in fact involve a pair of vertices that correspond under a crystal isomorphism. For the remaining cases like 12 → 21, the direction in (iv) matters.
Equality in <Ref>(i) occurs precisely for the additional labelings in <Ref>, which do not occur in growth labelings. The following important fact is then immediate.
If w โ w' is obtained by applying a growth rule, then either w' is shorter than w, or w' >_ w. In particular, the growth algorithm cannot enter an infinite loop.
Linearized diagrams with nice labelings have the following key property. Note that it applies to the output ๐ข(T) of the growth algorithm using the growth labeling.
Let G be a linearized diagram with a given nice labeling ฯ and suppose ฯ is a (signed) proper labeling of G. Then:
* โ(ฯ) โฅ_โ(ฯ),
* if โ(ฯ) = โ(ฯ), then ฯ = ฯ, and
* โ(ฯ) is a balanced lattice word.
In particular, linearized diagrams have at most one nice labeling.
We induct on the number of steps in the linearized diagram. The base case is trivial. If an end cap is the topmost step in G, the result is obvious by induction. If instead a crossing is the topmost step in G, let G' be the subgraph of G without this crossing and let ฯ', ฯ' be the proper sub-labelings of ฯ and ฯ restricted to G'. Since ฯ' is a nice labeling, induction gives โ(ฯ') ≥_ โ(ฯ'). By <Ref>(a), โ(ฯ') ≥_ โ(ฯ), hence โ(ฯ') ≥_ โ(ฯ). Note that โ(ฯ), โ(ฯ') and โ(ฯ), โ(ฯ') differ only at the positions of the crossing. Suppose that the crossing in ฯ is labeled by ab at the top and cd at the bottom. Further suppose that the crossing in ฯ is labeled by pq at the top and ts at the bottom, so that the corresponding portion of โ(ฯ') is ts and โ(ฯ) is pq. If โ(ฯ') does not agree with โ(ฯ) up until the crossing, then โ(ฯ') ≥_ โ(ฯ) implies โ(ฯ) >_ โ(ฯ) and we are done. So, suppose โ(ฯ') agrees with โ(ฯ) at least until the crossing. Since โ(ฯ') ≥_ โ(ฯ'), we have ts ≥_ cd. By <Ref>(iii), we have pq ≥_ ab. If pq ≠ ab, then โ(ฯ) >_ โ(ฯ), and (i) holds. If pq = ab, then by <Ref>(ii), cd ≥_ ts, so cd = ts. Now โ(ฯ') and โ(ฯ') agree through at least the crossing, so โ(ฯ') ≥_ โ(ฯ') implies โ(ฯ) ≥_ โ(ฯ), giving (i).
Now suppose โ(ฯ) = โ(ฯ). In the notation above, this gives ab=pq and so, as before, cd=ts. Thus โ(ฯ') = โ(ฯ'), so ฯ' = ฯ' by induction, and hence ฯ = ฯ, giving (ii).
For (iii), โ(ฯ) is a balanced lattice word by <Ref>(iv) and induction.
Our next goal is to show that one may apply Yang–Baxter moves to linearized diagrams in such a way that nice labelings are preserved.
The possible oriented triangles in linearized diagrams are those configurations depicted in <Ref>.
The argument involves a straightforward though tedious case-by-case check. The details are omitted.
Suppose one of the oriented triangles in <Ref> appears in a nice labeling of a linearized diagram. The complete list of possible boundary conditions is in <Ref>.
Consider configuration (a) in <Ref>, with boundary conditions uvwx → pq. To be a fragment of a nice labeling, the ∪ must be labeled 1. Examining the allowed nice labelings, we find u=1=x, the uppermost vertex is vw → pq, and at least one of p or q is 4. If p=q=4, we must have v=w=3, so 1331 → 44 is a valid nice labeling boundary condition for this counterclockwise-oriented type (a) triangle. All other cases are similar and are omitted. The fundamental involutions provide symmetries which simplify the argument. Specifically, ε, ฯ interchange clockwise and counterclockwise orientations by reflecting or reversing arrows, respectively, and their composite ฯ fixes clockwise and counterclockwise orientations.
The counterclockwise boundary condition 1421โ 24 involves only labelings coming from growth rules and not the additional nice labelings from <Ref>. The clockwise version of the same boundary condition does however involve the additional nice labelings, which is conceptually where the additional nice labelings arise.
Suppose G and G' are six-vertex configurations related by a Yang–Baxter move. If G can be written as a linearized diagram with a nice labeling, then the same is true of G'. Moreover, the nice labelings of G and G' are the same except inside the triangle at which the move was applied.
By <Ref>, every oriented triangle in a linearized diagram looks locally like one of the configurations in <Ref>. It suffices to show that every such local configuration can be replaced by one with the same boundary conditions and opposite orientation. Most of the possible boundary conditions listed in <Ref> are symmetric in this way. A sample exception is the clockwise-oriented configuration 142241 → ∅ of type (d). This may be replaced by the counterclockwise-oriented configuration on the left in <Ref>, which arises from appending end caps to a configuration of type (c). Exceptions of type (a)/(b) and (f) may be handled similarly using this type (c) configuration but using only the right-most end cap or the left and right end caps, respectively. Exceptions of type (g) may be handled with the configuration on the right in <Ref>.
§.§ Growth rules and descents
Here, we define descents of fluctuating tableaux and show how they appear in the growth algorithm.
Let w be a lattice word on ๐_r. Say (w) = w_1 ⋯ w_n. The descent set of w is
(w) = {i ∈ [n-1] : 0 < w_i < w_{i+1} or w_i < w_{i+1} < 0}.
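With barred letters encoded as negative integers (our convention from the earlier sketches, not the paper's), the definition can be evaluated directly:

def descent_set(w):
    # position i (1-indexed) is a descent when 0 < w_i < w_{i+1} or w_i < w_{i+1} < 0
    return {i + 1 for i in range(len(w) - 1)
            if 0 < w[i] < w[i + 1] or w[i] < w[i + 1] < 0}

print(descent_set([1, 2, 3, 4]))  # {1, 2, 3}: the single-column word 1234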
Let G be an hourglass plabic graph obtained by repeatedly applying growth rules starting from a balanced lattice word w = w_1 โฏ w_n of oscillating type. If i โ(w), then the boundary nodes associated to w_i and w_i+1 are connected by an edge in G.
Consider a descent w_i w_i+1 = 14. Growth rules with 14 โ 41 have the required property. None of the growth rules can change only the 4 or only the 1, so 14 โ 41 must eventually be applied.
Now consider a descent 13. Growth rules with 13 โ 31 or 13 โ24 have the required property. Other growth rules may only change the 3, namely 31โ13, 34 โ12, 34 โ 43. The first two will result in 13โฏโ 11โฏ, and 11 must eventually result in an end cap. The third would require 134 โ 143, but that rule requires witnesses 3 or 4 rather than 1, so it does not apply.
The remaining descents may be handled similarly. The details are omitted.
§.§ Growth rules and fully reduced graphs
We now show that the output ๐ข(T) of the growth algorithm is fully reduced. By <Ref>, we may equivalently show that ๐ข(T) is monotonic (see <Ref>). Our argument is inductive and verifies, by further analyzing promotion appliances, that growth rules preserve monotonicity. We begin with the significantly simpler _2 conditions.
Let {a, c} and {b, d} be two disjoint pairs of numbers in [n]. By relabeling if necessary, we may suppose a < c, b < d, and a < b. We say they form a crossing if a < b < c < d.
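Concretely, the crossing condition is a four-term chain of inequalities after the relabeling step; a minimal sketch (names are ours):

def is_crossing(pair1, pair2):
    # normalize as in the definition: sort each pair, then order the pairs
    # so that a < c, b < d and a < b; they cross exactly when a < b < c < d
    (a, c), (b, d) = sorted([tuple(sorted(pair1)), tuple(sorted(pair2))])
    return a < b < c < d

print(is_crossing({1, 5}, {3, 8}), is_crossing({1, 3}, {5, 8}))  # True False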
Let T' be a tableau of oscillating type with w'=L(T'). Suppose that G' is an hourglass plabic graph of the same boundary type as w' and with _โ(G') = _โ(w'). Suppose further that G is obtained from G' by applying one of growth rules in <Ref>. If the _2-strands in G' have no self-intersections or double crossings, then the same is true of G.
First, consider self-intersections of _2-strands in G. Since G' has no _2 self-intersections, the only way to introduce a self-intersection by attaching an is for a _2-strand to leave and re-enter the through the bottom two edges. That is, _2(G')(i) = i+1 where i, i+1 are the positions corresponding to the in w'. Since _โ(G') = _โ(w'), we have _2(w')(i) = i+1. By <Ref>, ฯ_2(eโ, v', fโ)(i') = i'+1 where i', i'+1 are the positions corresponding to the in v' in w'. Graphically, the second promotion appliance of v' would need a โcupโ below where the is placed. We see in <Ref> that this does not occur, and one may check directly that the same is true for all short rules, so G does not contain _2-strand self-intersections in these cases.
Now, consider double intersections of _2-strands in G. Since G' has no _2-strand double intersections, they arise in G only from the two _2-strands leaving the bottom two edges of the and crossing in G'. This configuration occurs if and only if {i, _2(G')(i)}, {i+1, _2(G')(i+1)} form a crossing. As before, such a crossing would be apparent as a crossing in the promotion appliance of v'; we see this does not occur in <Ref>, and we may check it does not occur for any short rule. Hence, G does not contain _2-strand double crossings in these cases. The _2 conditions may be summarized as requiring that ฯ_2(eโ, v', fโ)(i') โ i' + 1 and that {i', ฯ_2(eโ, v', fโ)(i')}, {i'+1, ฯ_2(eโ, v', fโ)(i'+1)} do not form a crossing.
Finally, consider the long rules. The preceding _2 conditions may be directly checked for the promotion appliances of the targets of <Ref>(i)-(ii), e.g.ย by inspecting the second row of <Ref>. For the full long rules, we may repeat the inductive argument in <Ref> with the additional hypothesis that the promotion appliances of the targets of the long rules satisfy the _2 conditions. The base cases are short rules and <Ref>(i)-(ii). We describe how to modify a representative inductive step. Recall that we used the substitutions 14โฏ22 โ 14โฏ33โ 41โฏ33โ 41โฏ22, where the first and third steps use the plumbings in <Ref>(i) and the second step uses a shorter long rule. By induction, ฯ_2(eโ, 41โฏ33, fโ) satisfies the _2 conditions. Now ฯ_2(eโ, 41โฏ33, fโ) = ฯ_2(eโ, 41โฏ22, fโ) โ (๐^k+1ร๐ง_2) and ๐ง_2 is the identity, so ฯ_2(eโ, 41โฏ22, fโ) satisfies the _2 conditions. The other cases use the s=2 plumbings in <Ref> in place of ๐ง_2, which likewise do not interfere with the _2 conditions.
We give a similar but more technical argument for the _1, _2 monotonicity conditions. First, we show that promotion appliances can detect the touching of _1- and _2-strands.
Let T be a tableau of oscillating type with w = L(T), where w has a fixed consecutive subword v. Let G be an hourglass plabic graph of the same boundary type as w, with _โ(G) = _โ(w). Suppose i โ j are indices in w corresponding to v, while _1(G)(i) = _2(G)(j) is an index in w not corresponding to v. Then {i, ฯ_3(eโ, v, fโ)(i)}, {j, ฯ_2(eโ, v, fโ)(j)} form a crossing, where eโ, fโ are as in <Ref> applied to v in w.
Let k = _1(G)(i) = _2(G)(j). By construction of promotion appliances, ฯ_3(eโ, v, fโ)(i) โ ฯ_2(eโ, v, fโ)(j), and they are distinct from i, j. However, the function h from <Ref> is non-injective here and gives no information about the necessary inequalities.
Instead, let y be obtained from w by replacing the letter at index k with its standardized complement, e.g.ย y โ w could be obtained by 123 โ4. Correspondingly, let H be obtained from G by attaching a โclawโ at the boundary vertex k. By <Ref>(c), _1(H)(i) โ _2(H)(j) and the two strands cross. Attaching a claw preserves monotonicity by inspection, so {i, _1(H)(i)}, {j, _2(H)(j)} form a crossing. Since _โ(G) = _โ(w), we see _โ(H) = _โ(v). By <Ref> applied to v in y, we see that {i, ฯ_3(eโ_0, v, fโ_0)(i)}, {j, ฯ_2(eโ_0, v, fโ_0)(j)} form a crossing.
We claim that eโ_0 = eโ and fโ_0 = fโ. Indeed, they are each obtained by tracking the subword v in w or y through the promotion algorithm. The crystal operators applied to the positions of v are the same in either case since y and w differ by, e.g.,ย 123 โ4, which is applied at indices disjoint from the positions of v and which comes from a crystal isomorphism.
Let v be a subword of a word w of oscillating type, suppose ฯ_โ = ฯ_โ(eโ, v, fโ) from <Ref> applied to v in w, and let a โ b be indices in w corresponding to v. We say that ฯ_i(a) intersects ฯ_j(b) if
* ฯ_i(a) = b; or
* ฯ_j(b) = a; or
* ฯ_i(a) = ฯ_j(b); or
* a, b, ฯ_i(a), ฯ_j(b) are all distinct and {a, ฯ_i(a)}, {b, ฯ_j(b)} form a crossing.
Under the hypotheses of <Ref>, if G' is monotonic, then G is monotonic.
By <Ref>, we need only show that G satisfies the _1, _2 condition from <Ref>. As before, a violation of this condition in G must involve one of the _1-strands in the additional plumbing through the attached ๐ท. Let ฯ_โ = ฯ_โ(eโ, v', fโ) from <Ref> applied to v' in w' and pick a โ b corresponding to positions of v' in w'. Combining _โ(G') = _โ(w'), monotonicity in G', and <Ref>, we see that if the _i(G')(a)- and _j(G')(b)-strands cross or touch, then ฯ_r-i(a) and ฯ_r-j(b) intersect.
One may check that requiring the following conditions on promotion appliances of v' (in addition to the _2 conditions verified above) ensures that G is monotonic. We may examine only a single ฯ, ฯ, ฯต representative plumbing ฯ_โ by <Ref>. Let a, b be the consecutive indices of the ๐ท of v' in w'.
* ฯ_โ = ๐ท^+-_-+ร๐^k: {ฯ_3(a), ฯ_2(b)}; {ฯ_1(a), ฯ_2(b)}; {ฯ_1(b), ฯ_2(a)}; and {ฯ_3(b), ฯ_2(a)} do not intersect.
* ฯ_โ = ๐ท^โ_++ร๐^k+1 or ฯ_โ = ๐ท^++_++ร๐^k+1: {ฯ_3(a), ฯ_2(b)} as well as {ฯ_1(b), ฯ_2(a)} do not intersect. Further, ฯ_1(c) = b for some c > b in v', and for all k โ [b, c], {ฯ_2(k), ฯ_1(a)} do not intersect.
These conditions may be checked directly for the promotion appliances of the short rules in <Ref>. They may also be checked for the promotion appliances of the targets of <Ref>(i)-(ii), e.g.ย by inspecting the second row of <Ref>.
It again remains to consider the long rules. It suffices to consider those which apply ๐ท^++_++ or ๐ท^โ_++ with target of the form 41wx where w is a word consisting of 2, 3's and x โ{4, 3, 2, 1}. Hence we must check condition (ii) for these targets. We may slightly strengthen the inductive proof of <Ref> to see that ฯ_3(b) = c where c corresponds to the position of the 1, 2, 3, 4 in the long rule. We also note that ฯ_s(a) < a for s=1, 2, 3, since v' = 41โฏ begins with 4, which must be turned into a 1 by the first phase of the crystal raising algorithm. In particular, ฯ_s(a) โ b, c, c+1. Similarly, ฯ_s(b) โฅ a. Hence {ฯ_3(a), ฯ_2(b)} and {ฯ_1(b), ฯ_2(a)} do not intersect.
We also now see that all conditions hold for {ฯ_2(k), ฯ_1(a)} to not intersect for k โ [b, c], except perhaps the crossing condition. This is significantly more difficult to rule out. Equivalently, we must show that when applying raising operators to 41wx, after applying e_1 to the first letter we never apply e_2 to any other letter. This condition may be read off directly from <Ref> where x โ{4, 3, 2}. The x=1 case is similar and is omitted.
All that remains is to prove the correctness of <Ref>. Here we use notation like ⟨2^α, 3^β⟩ to denote any of the (α+β choose α) words with α copies of 2 and β copies of 3.
The portion of the crystal above any 41⟨2^α, 3^β⟩4 is a subset of the diagram in <Ref>.
Beginning at the bottom of the diagram in <Ref>, one may verify its correctness through routine applications of the bracketing rule, except for 41⟨2^α, 3^β, 4^γ⟩(5)2, which is more complex. The additional constraint is as follows.
(5) For clarity, we write 2 = ) and 3 = (. Begin with an unmatched initial sequence )โฏ)(โฏ(. Add (possibly empty) complete matchings of ( and )'s on either side of each of the letters of the initial sequence. Finally, add 4's arbitrarily after the rightmost matched (.
Note that (5) implies that the suffix of the word starting after the rightmost matched ( is of the form )โฏ) (all matched) followed by )โฏ)(โฏ( (all unmatched), with 4's placed arbitrarily. All 4's are in the suffix.
As an example, suppose (5) and (6) each hold, so all ('s are paired with 4's to their left. Then all ('s are in the suffix and are hence unmatched. Thus the complete matchings from (5) are all empty, and every ) is left of every (, which is (7).
We verify the correctness of (5). The initial conditions from (1) and (2) give all ('s left of all 4's, and all ('s are matched with an ), so (5) holds. We may restate the action of (3) as replacing the rightmost unmatched ) with (. This does not affect the matching and so preserves (5).
Finally, we may restate the action of (4) as pairing 4's and ('s and replacing the rightmost ( not paired with a 4 to its left with a 4. First suppose the replaced ( is in the suffix, so it is unmatched, and replacing it with 4 preserves (5). Now suppose the replaced ( is in the prefix. The prefix contains no 4's, so the replaced ( must be the final letter of the prefix. This operation does, however, affect the matching. In the original suffix, in the )โฏ) (all matched), the rightmost matched ) will now be searching for a partner. There are two cases.
* The rightmost unmatched parenthesis in the prefix was (. This will now match with the aforementioned rightmost matched ).
* The rightmost unmatched parenthesis in the prefix was ) or did not exist. The aforementioned rightmost matched ) becomes unmatched.
In each case, (5) continues to hold.
§.§ Proving the growth algorithm
We now piece together the preceding results to establish the growth algorithm theorem.
We begin with the oscillating case. The growth algorithm terminates in an hourglass plabic graph by <Ref> and <Ref>, using induction on lattice words of oscillating type under length and then -order. The same induction using <Ref> gives condition (ii), _โ(G) = _โ(T) for G = ๐ข(T) with w = L(T) of oscillating type. Condition (i) now follows from <Ref> and <Ref>. Conditions (iii) and (iv) are <Ref>. The forward implication of (v) is <Ref>. Conversely, suppose b_i and b_i+1 of G are connected to the same vertex, where i โ [n-1]. Their boundary labels must have the same sign, and they must increase by the -minimality condition (iv), which completes the argument for (v). Each property extends immediately to the general fluctuating case.
§ THE HOURGLASS WEB BASIS
§.§ The main bijection
<Ref> shows that the separation labeling determines a map ๐ฏ from contracted fully reduced hourglass plabic graphs to rectangular fluctuating tableaux of the same type. Since move-equivalent graphs have the same trip permutations (<Ref>), and thus the same separation labels on the boundary edges (<Ref>), we can in fact consider this function as a map
๐ฏ: (c)/โผโ(c).
<Ref> shows that growth rules define a map ๐ข: (c)โ(c), which induces a map
๐ข:(c) โ(c)/โผ.
<Ref> below shows that ๐ฏ and ๐ข are mutually inverse bijections; this is our main combinatorial result, enabling the construction of our web basis.
The maps ๐ฏ:(c)/โผโ(c) and ๐ข:(c) โ(c)/โผ are mutually inverse bijections. Furthermore, this bijection satisfies _โ(G)=_โ(๐ฏ(G)). Consequently, it intertwines promotion and evacuation of tableaux with rotation and reflection of hourglass plabic graphs.
Let T โ(c) and G=๐ข(T). By <Ref>, we have G โ(c) and _โ(G)=_โ(T). By <Ref>, ๐ฏ(G) โ(c). Thus by <Ref> and <cit.>, we have ๐ฏ(G)=T, so ๐ฏโ๐ข is the identity on (c).
Now let G โ(c) and let T=๐ฏ(G). By <Ref>, T โ(c). By <Ref>, we therefore have that ๐ฏ(^i(G))=๐ซ^i(T) for all i. Let ฯ=(1 2 โฆ n) โ๐_n. It is easy to see that if (ฯ^i ฯฯ^-i) = (ฯ^i ฯ' ฯ^-i) for all i, then ฯ=ฯ'. Thus by <cit.> and <Ref> we have _โ(T)=_โ(G). Let G'=๐ข(T); by <Ref>, we have _โ(T)=_โ(G'). Thus by <Ref>, we have G' โผ G, so that ๐ขโ๐ฏ is the identity on (c)/โผ.
We have now shown that ๐ฏ and ๐ข are mutually inverse bijections satisfying _โ=_โ. The rest of the theorem then follows from <Ref>.
§.§ Unitriangularity from separation words
We now make precise the conversion from fully reduced hourglass plabic graphs to webs. Note that the following tagging convention only affects [W]_q by a sign.
Let W โ(c). We view W as a web of type c (see <Ref>) by considering each m-hourglass as an edge of multiplicity m, and by tagging the vertices v of W as follows:
* If v is incident to a 2-hourglass and two simple edges, place the tag between the simple edges.
* If v is incident to four simple edges, then the two _2-strands passing through v divide the disk into four sectors (they do not double-cross since W is fully reduced). Place the tag in the sector containing the base face of W.
* If v is incident to a boundary 3-hourglass and a simple edge, place the tag on the side of the _2 strand through the simple edge which contains the base face.
* If v is incident to two boundary 2-hourglasses, place the tag on the side of the base face.
Note that there is no choice when v is incident to a boundary 4-hourglass.
The following unitriangularity result is analogous to <cit.> (see also <cit.>).
Let G โ(c). Then the separation word w = (G) is the unique -minimal boundary word among all proper labelings of G. Thus
[G]_q = ± q^a x_w + ∑_{v >_ w} d_w^v(q) x_v
for some a ∈ ℤ and d_w^v(q) ∈ ℤ[q, q^{-1}], where v ranges over words of type c.
If G = ๐ข(T) arises from the growth algorithm, the result follows from (<ref>) and <Ref>(iii)โ(iv). (The tagging only results in a global sign change.) If G' is benzene-equivalent to such G, the result holds for G' by <Ref> and <Ref>. Finally, if Gโ is square-move-equivalent to such G', then [G']_q = [Gโ]_q. By <Ref>, we may apply benzene moves before square moves; hence, by <Ref>, this covers all G โ(c).
§.§ Distinguished representatives and the web basis
The bijection from <Ref> implies that |(c)/โผ| = |(c)|. By <Ref>, this quantity is also equal to _โ(โ^c V)=_โ(q)_U_q(_4)(โ_q^c V_q). In this section, we obtain our U_q(_4)-web basis โฌ_q^c by producing a distinguished web invariant from each move-equivalence class of (c).
A benzene face in an hourglass plabic graph is a face that admits a benzene move. We say a benzene face is clockwise (resp. counterclockwise) if in each hourglass edge the white vertex precedes the black vertex in clockwise (resp. counterclockwise) order; equivalently, if the corresponding triangle in the six-vertex configuration is oriented clockwise (resp. counterclockwise). Let โฌ be a benzene-move-equivalence class of contracted fully reduced hourglass plabic graphs. For x, y โโฌ, write x y and say y covers x when y is obtained by applying a benzene move to a clockwise benzene face in x, resulting in a counterclockwise benzene face in y. Let โผ be the reflexive, transitive closure of this relation onย โฌ.
The relation (โฌ, โผ) is a partially ordered set with a unique maximum, which may be reached by starting at an arbitrary element and applying covering relations arbitrarily until no more apply.
Consider a sequence of covers in (โฌ,โผ), corresponding to benzene moves on faces F_1,F_2,โฆ. Between any two benzene moves at the same face F, a benzene move must be applied to all faces F' sharing an edge with F. Since no moves can be applied to boundary faces, this implies that F appears finitely many times in the sequence. This guarantees antisymmetry, so that โผ is a partial order.
Now, โฌ possesses the diamond property: if x y_0 and x y_1, then there is some z โโฌ with y_0 z and y_1 z. Indeed, y_0 and y_1 are obtained from x by benzene moves on two distinct faces which, by the orientation condition, must be non-adjacent, so the two benzene moves commute. The diamond property implies that โฌ contains upper bounds, and thus has a unique maximal element, by Newman's Lemma <cit.>.
Standard techniques (see e.g. <cit.>) can be used to prove that โฌ is in fact a distributive lattice, but we will not use this fact.
A contracted fully reduced hourglass plabic graph G that is maximal in its benzene-move-equivalence class (in the sense of <Ref>) is called top fully reduced. Equivalently, G โ(c) is top fully reduced if and only if it has no clockwise benzene faces.
The collection
โฌ_q^c = {[W]_q : W a top fully reduced hourglass plabic graph of type c}
is a rotation-invariant web basis for _U_q(_4)(โ_q^c V_q).
Let ๐ be a move-equivalence class of graphs from (c). By <Ref>, square moves and benzene moves on fully reduced graphs commute: any G,G' โ๐ may be connected by first applying a sequence of benzene moves and then a sequence of square moves. By <cit.>, square moves do not affect the web invariant; thus any top fully reduced hourglass plabic graphs W,W' โ๐ satisfy [W]_q=[W']_q. We write [๐]_q for this common value.
By <Ref> and <Ref>, the invariants [๐]_q for ๐โ(c)/โผ are linearly independent, since they have distinct leading terms. Since
|(c)/โผ| =|(c)|=_โ(q)_U_q(_4)(โ_q^c V_q)
by <Ref> and <Ref>, the invariants [๐]_q form a web basis.
Rotation invariance follows by observing that the rotation of a top fully reduced graph is again top fully reduced.
For any complete collection of representatives โ for (c)/โผ, the corresponding invariants [G]_q for G โโ form a โ(q)-basis for _U_q(_4)(โ_q^c V_q).
They are linearly independent by <Ref>, and there are the right number by (<ref>) and <Ref>.
We thank Greg Kuperberg for suggesting <Ref> towards making connections with the dual canonical basis.
Let E = ([x_v] [G_w]_q) be a matrix whose rows are indexed by words v of type c listed in decreasing order and whose columns are indexed by w ∈ (c) listed in decreasing order, where G_w ∈ (w) with (G_w) = w. Then E is in row echelon form, and each column has a pivot of the form ± q^a for a ∈ ℤ.
This follows by inspecting the matrix E and using <Ref>.
§ UNCROSSING AND REDUCTION RELATIONS
Using the braiding of the representation category for U_q(_r) (see <cit.>), one may assign invariants [T]_q to tensor diagrams T, variants of webs in which edges may cross over one another. When T is the projection of a (colored) knot, link, or tangle, [T]_q yields quantum knot invariants (e.g., <cit.>) of great interest. The reader may take the relations in <Ref> (adapted to our conventions from <cit.>) as defining [T]_q in the case r=4.
In <Ref> below, we describe how the invariant [T]_q of an arbitrary U_q(_4)-tensor diagram may be expanded in the web basis โฌ_q^c of <Ref>. The relations appearing in this section are drawn from <cit.> and adapted to our conventions. As noted by Kuperberg <cit.>, such reduction algorithms to a web basis allow for the efficient computation of quantum link and tangle invariants. In the case c=(1,โฆ,1), where the representation of ๐_n on (โ^c V) via permutation of tensor factors is a rectangular Specht module, the uncrossing relations also allow for the computation of the actions of the simple reflections in the basis โฌ^c.
Let [k]_q = (q^k - q^{-k})/(q - q^{-1}).
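For concreteness, the first few of these balanced quantum integers, which follow directly from the definition, are [1]_q = 1, [2]_q = q + q^{-1}, and [3]_q = q^2 + 1 + q^{-2}; in general [k]_q = q^{k-1} + q^{k-3} + ⋯ + q^{1-k}.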
For T a tensor diagram of type c, <Ref> expresses [T]_q in the basis โฌ_q^c.
If T is a tensor diagram, then the uncrossing relations unambiguously express [T]_q as a combination of web invariants. By <Ref>, each web that is not fully reduced is modified by one of the Steps (2โ4). If a web is modified only by a benzene move in Step (4), then it will also be modified by Step (2) on the subsequent iteration. Since the loop deletion and 4-cycle relations (<Ref>) all decrease the number of faces, the algorithm terminates in a fully reduced web. Finally, by <Ref>, there is a sequence of benzene moves converting any fully reduced web into a basis web (and in fact this sequence may be chosen arbitrarily).
§ COMBINATORIAL APPLICATIONS OF THE WEB BASIS
In this section, we collect results related to our web basis โฌ_q^c that are of particular interest in combinatorics.
We first re-prove a cyclic sieving result. We then summarize two particular extreme move-equivalence classes relating to alternating sign matrices and plane partitions, respectively.
§.§ Cyclic sieving and basis webs
<Ref> can be used to easily recover the 4-row case of a celebrated cyclic sieving result of Rhoades <cit.>, originally proved using KazhdanโLusztig theory. The 3-row case was re-proven in <cit.>, using Kuperberg's U_q(_3)-web basis. For the definition of the q-hook length polynomial below, see <cit.> or <cit.>.
Let n = 4k, λ = (k,k,k,k), and ζ = e^{2π i/n}. The number of standard tableaux of shape λ fixed by ๐ซ^d is f^λ(ζ^d), where f^λ(q) is the q-hook length polynomial.
Let σ = (1 2 ⋯ n) ∈ ๐_n. By Springer's theory of regular elements <cit.>, we have χ^λ(σ^d) = (-1)^d f^λ(ζ^d), where χ^λ is the character of the Specht module ๐ฒ^λ ≅ (V^{⊗ n}). On the other hand, since the basis โฌ^(1^n) is permuted up to sign by the action of σ and is in ๐ซ-equivariant bijection with standard tableaux of shape λ, this character value is a signed count of the number of ๐ซ^d-fixed tableaux.
Moreover, each fixed point contributes a sign of precisely (-1)^d as follows. Suppose W โ((1^n)). Using our tagging conventions, [(W)]_q = (-1)^k ยท([W]_q) where k is the number of vertices of simple degree 4 on the _2(1)-strand of W, and is the cyclic shift isomorphism from <Ref>. For a six-vertex configuration ฯ(W), this strand has two incoming boundary edges. There must be an odd number of direction changes of the arrows along the strand, which occur precisely at sources or sinks, so k is odd. The result follows.
§.§ Alternating sign matrices and square moves
The set _n of alternating sign matrices consists of n × n matrices with entries from {-1, 0, 1} where the nonzero entries of each row and column begin and end with 1 and alternate between 1 and -1. Alternating sign matrices have long been of interest in enumerative combinatorics (see, e.g., <cit.>). Recently, they have obtained further significance through connections <cit.> to back stable Schubert calculus <cit.>.
Permutation matrices are instances of alternating sign matrices. Indeed, _n is the MacNeille completion of the strong Bruhat order on ๐_n <cit.>, i.e., _n is the smallest lattice containing Bruhat order. More strongly, _n is a distributive lattice with covering relations given by adding [ -1 1; 1 -1 ] where possible. The sub-poset of join-irreducible elements can be realized as a certain tetrahedral poset embedded in three-dimensional space (see <cit.> and <cit.>). Hence _n can be viewed as the order ideals of
this tetrahedral poset.
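Both the alternation condition and the covering move just described are mechanical to check; the Python sketch below uses our own list-of-rows encoding and hypothetical helper names.

def is_asm(M):
    # ASM condition: entries in {-1,0,1}; in every row and column the nonzero
    # entries alternate in sign and begin and end with 1 (so each line sums to 1)
    for line in list(M) + [list(col) for col in zip(*M)]:
        if any(x not in (-1, 0, 1) for x in line):
            return False
        nz = [x for x in line if x != 0]
        if not nz or nz[0] != 1 or nz[-1] != 1:
            return False
        if any(nz[i] == nz[i + 1] for i in range(len(nz) - 1)):
            return False
    return True

def apply_cover(M, i, j):
    # add the block [-1 1; 1 -1] at rows i, i+1 and columns j, j+1
    # (the covering relation mentioned above); legality is not checked here
    N = [row[:] for row in M]
    N[i][j] -= 1; N[i][j + 1] += 1; N[i + 1][j] += 1; N[i + 1][j + 1] -= 1
    return N

P = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
A = apply_cover(P, 1, 1)       # [[0,1,0],[1,-1,1],[0,1,0]], the smallest ASM with a -1
print(is_asm(P), is_asm(A))    # True True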
Let T be the "superstandard" 4-row standard tableau with lattice word 1^n 2^n 3^n 4^n. The move-equivalence class of ๐ข(T) is in bijection with _n, with square moves corresponding to covering relations. No benzene moves apply to elements of this class.
Consider the six-vertex configurations obtained by placing 4n incoming strands around a square grid and assigning orientations to the middle in all possible ways. Such configurations are in bijection with configurations of the classical six-vertex model (with domain wall boundary conditions) by reversing all horizontal arrows, and these in turn are in bijection with _n by <cit.>. The composite bijection from the six-vertex configurations to _n may be realized by replacing sinks with 1, sources with -1, and transmitting vertices with 0. See <Ref> for examples.
These six-vertex configurations are clearly well-oriented by <Ref> and admit no YangโBaxter moves, since they contain no big triangles. It is easy to check that the corresponding fully reduced hourglass plabic graphs (see <Ref>) have separation word 1^n 2^n 3^n 4^n.
See <Ref> for further connections between square moves and alternating sign matrix dynamics.
§.§ Plane partitions and benzene moves
The set (a × b × c) of plane partitions in the a × b × c box consists of the a × b matrices of nonnegative integers that weakly decrease along rows and columns and have maximum entry at most c. Like alternating sign matrices, plane partitions have long been prominent in enumerative combinatorics (see, e.g., <cit.>) and representation theory (see, e.g., <cit.>). They are now one of the central objects of study in the burgeoning area of dynamical algebraic combinatorics (e.g., <cit.>).
The plane partitions (a × b × c) may be equivalently viewed as the configurations of stacked unit cubes in the prism [0, a] × [0, b] × [0, c] where gravity acts parallel to (-1, -1, -1). Such plane partitions form a distributive lattice and may be seen as order ideals of the product of chains poset [a] × [b] × [c]. They have further alternative interpretations as rhombus tilings or perfect matchings on particular subsets of the hexagonal lattice.
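For very small boxes the definition can be checked by brute force; the sketch below (our own encoding and function names) counts plane partitions in a box and can be compared with MacMahon's classical product formula, quoted here only as a sanity check (for the 2 x 2 x 2 box both give 20).

from itertools import product

def is_plane_partition(M, c):
    # an a x b matrix with entries in {0,...,c}, weakly decreasing along rows and columns
    a, b = len(M), len(M[0])
    for i in range(a):
        for j in range(b):
            if not 0 <= M[i][j] <= c:
                return False
            if j + 1 < b and M[i][j] < M[i][j + 1]:
                return False
            if i + 1 < a and M[i][j] < M[i + 1][j]:
                return False
    return True

def count_plane_partitions(a, b, c):
    # brute force over all fillings; only sensible for tiny boxes
    return sum(
        is_plane_partition([list(flat[i * b:(i + 1) * b]) for i in range(a)], c)
        for flat in product(range(c + 1), repeat=a * b)
    )

print(count_plane_partitions(2, 2, 2))  # 20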
Let a, b, c โโค_โฅ 1. If a โฅ c, let
L = 1^a 4^b 2^c 1^a-c 2^c 4^b 1^c,
and if a โค c, let L = 1^a 4^b 2^a 1^c-a 2^a 4^b 1^c. Let T be the tableau with lattice word L. The move-equivalence class of ๐ข(T) is in bijection with (a ร b ร c), with benzene moves corresponding to covering relations. No square moves apply to elements of this class.
In the hexagonal lattice, begin with a โlarge hexagonโ with a, b, c, a, b, c hexagons, respectively, on each side in cyclic order; see <Ref>. The perfect matchings on this large hexagon can be converted to rhombus tilings by considering the dual graph and omitting edges of the dual graph that cross matched edges of the large hexagon. Alternatively, perfect matchings on the large hexagon may be converted to hourglass plabic graphs by replacing matched edges with hourglass edges. It is not difficult to see that the resulting hourglass plabic graphs are fully reduced, and one may check that the separation labeling gives L. There are no square faces, so no square moves apply. See <Ref> for an example.
ยง HOURGLASS PLABIC GRAPHS RECOVER KNOWN WEB BASES
In this section, we discuss how all known rotation-invariant web bases fit within our framework of hourglass plabic graphs, forbidden 4-cycles, and the _โ=_โ property.
ยง.ยง A uniform characterization for small r
We may extend <Ref> by defining an U_q(_r)-hourglass plabic graph exactly as before, but with all internal vertices of degree r. We define the trip permutations _โ(G)=(_1(G),โฆ, _r-1(G)) of such a graph G in the natural way, extending <Ref>.
In general, U_q(_r)-hourglass plabic graphs admit contraction moves as in <Ref>, which preserve _โ. As before, we say a U_q(_r)-hourglass plabic graph G is contracted if all possible contraction moves have been applied. For r=2 or 3, the contraction moves are the only moves on hourglass plabic graphs.
A 4-cycle of an U_q(_r)-hourglass plabic graph G is called forbidden if the sum of its edge multiplicities exceeds r. For r=2,3, or 4, we write G โผ G' for the move-equivalence relation. We say such a graph G with no isolated components is fully reduced if no G' โผ G contains a forbidden 4-cycle. Top fully reduced graphs are distinguished representatives in each move-equivalence class of fully reduced graphs (as in <Ref>); when r=2 or 3, every fully reduced graph is top, since [G]_q=[G']_q for G โผ G'.
<Ref> below shows how our hourglass plabic graph perspective with forbidden 4-cycles and _โ=_โ unifies the TemperleyโLieb U_q(_2)-basis, Kuperberg's U_q(_3)-basis, and our U_q(_4)-basis from <Ref>.
Let r=2,3, or 4. Then (c;r)/โผ is in bijection with (c;r) via _โ=_โ and this bijection intertwines rotation and promotion. Furthermore,
โฌ_q^c,r := {[W]_q : W a top fully reduced U_q(_r)-hourglass plabic graph of type c},
is a rotation-invariant web basis of _U_q(_r)(โ_q^c V_q).
When r=2, all hourglass plabic graphs consist of a non-crossing matching of the boundary vertices, with some internal vertices along each arc. Such a graph is contracted if there are 0 or 1 internal vertices on each arc; all such graphs are in the basis and this is exactly the TemperleyโLieb basis. The _โ=_โ condition is an easy verification, and this property implies that rotation corresponds to promotion.
When r=3, the fully reduced graphs are those whose contraction contains no 2-cycles or 4-cycles, since a 2-cycle may be uncontracted into a 4-cycle with edge multiplicities 1,1,1,2. Thus the fully reduced graphs are exactly Kuperberg's non-elliptic web basis. HopkinsโRubey <cit.> showed that the standard bijection between these webs and 3-row standard Young tableaux (for example, via the growth rules of <cit.>) is characterized by _โ=_โ. Finally, the r=4 case is <Ref>.
ยง.ยง The 2-column case
In <cit.>, Fraser found a rotation-invariant _r-web basis for (V^(1^2r)) with r arbitrary, the โ2-column" case. He describes a bijection ฯ_r from rectangular standard tableaux of shape r ร 2 to a basis ๐ฒ_r of _r-webs, which are only characterized as the outputs of the map ฯ_r (see <Ref> for an example). For G = ฯ_r(T) โ๐ฒ_r, all internal faces are 4-cycles. The map ฯ_r in fact requires certain choices to be made, but all possible outputs are connected by the square relation of <cit.>, so that the invariant [G] is well-defined.
boxsize=0.6cm
In forthcoming work, we extend the perspective of this paper to show that the bijection ฯ_r satisfies _โ(ฯ_r(T))=_โ(T) and that after lifting to hourglass plabic graphs, the general square relation preserves trip permutations. Furthermore, the web basis elements [W]_q for W โ๐ฒ_r are characterized as the web invariants of suitably defined fully reduced U_q(_r)-hourglass plabic graphs; full reducedness again includes the prohibition of the forbidden 4-cycles of <Ref> in move-equivalent graphs.
ยง ACKNOWLEDGEMENTS
This project began during the 2021 BIRS Dynamical Algebraic Combinatorics program hosted at UBC Okanagan, and we are very grateful for the excellent research environment provided there. At that conference, Sam Hopkins and Martin Rubey introduced us to their perspective from <cit.> of webs-as-plabic graphs which was foundational in this work. We completed the main result at NDSU (partially supported by NSF DMS-2000592), for whose hospitality we are very thankful. Finally, we are grateful for the resources provided at ICERM, where this paper was mostly written.
We also thank the following people for their helpful comments: Ben Elias, Sergey Fomin, Chris Fraser, Pavel Galashin, Joel Kamnitzer, Mikhail Khovanov, Allen Knutson, Greg Kuperberg, Thomas Lam, Rebecca Patrias, Alex Postnikov, Kevin Purbhoo, Pavlo Pylyavskyy, Brendon Rhoades, Anne Schilling, Travis Scrimshaw, David Speyer, Hugh Thomas, Julianna Tymoczko, Bruce Westbury, Lauren Williams, Haihan Wu, and Paul Zinn-Justin.
|
http://arxiv.org/abs/2306.02179v1
|
20230603192039
|
Buying Time: Latency Racing vs. Bidding in Fair Transaction Ordering
|
[
"Akaki Mamageishvili",
"Mahimna Kelkar",
"Jan Christoph Schlegel",
"Edward W. Felten"
] |
cs.GT
|
[
"cs.GT",
"cs.CR",
"econ.TH"
] |
Buying Time: Latency Racing vs. Bidding in Fair Transaction Ordering
Akaki Mamageishvili, Mahimna Kelkar, Jan Christoph Schlegel, Edward W. Felten
=====================================================================
We design a practical algorithm for transaction ordering that takes into account both transaction timestamps and bids. The algorithm guarantees that users get their transactions published with bounded delay against a bid, while it extracts a fair value from sophisticated users that have an edge in latency, by moving expenditure from investment in latency improvement technology to bidding.
The algorithm creates a score from timestamps and bids, and orders transactions based on the score. We first show that a scoring rule is the only type of rule that satisfies the independence of latency races. We provide an economic analysis of the protocol in an environment of private information, where investment in latency is made at the ex-ante or interim stage, while bidding happens at the interim stage, after private signals have been observed.
The algorithm is useful for transaction sequencing in rollups or in other environments where the sequencer has privileged access to order flows.
ยง INTRODUCTION
How to design transaction ordering policies is one of the fundamental research questions for decentralized applications, as well as in traditional finance applications, such as order books for currency trading and stock exchanges. The sequencer receives transactions from users and publishes an ordered sequence, which serves as input to the execution stage of the algorithm or the protocol. A transaction ordering policy specifies how the resulting sequence depends on the contents and arrival times of transactions at the sequencer.
One natural policy is a first-come, first-serve (FCFS) rule. To the best of our knowledge, all order book solutions and (optimistic) rollup protocolsย [Rollup chains serve the role of scaling the base layer chains.] use FCFS. There are many advantages to FCFS. First, it is simple and easy to explain to anyone. Second, it also seems to be intuitively fair. It minimizes latency because it allows each transaction to be appended to the sequence immediately upon arrival. The biggest disadvantage of FCFS is that it creates latency competition. This competition is well known in traditional finance, to the extent that a whole industry is formed around it.
Layer one blockchain solutions, however, use more complicated ordering policies. First, transactions usually contain bids (sometimes called tips): some value that a transaction sender is willing to pay to the sequencer. Second, for example, Ethereum groups all transactions that are pending (received but not yet sequenced) in the โmempool,โ and block writers choose both the set of transactions for the next block to execute and the order inside the block as well. This gives a lot of power to the next block writer and to other users that may see arbitrage opportunities by observing transactions in the mempool.
High-frequency trading that takes advantage of lower latency in reaching the market maker constitutes more than half of the total exchange volume in traditional equity tradingย <cit.>.
In decentralized finance, front-running and back-running of other transactions have been acknowledged and well-documented only more recentlyย <cit.>ย [Front-(back-)running refers to the activity of sending a transaction after observing another transaction in the mempool and putting it before (after) the original transaction in the block to be executed.]. The generic term used to describe gains from these activities is miner (or maximal) extractable value, often dubbed MEV. Unlike front-running, back-running is not necessarily a negative activity. In fact, it can often be seen as a positive outcome of the system, as it corrects price movements quickly.
The phenomenon of latency competition also occurs in the rollup context. For example, parties spend resources to try to get closer to the sequencer, in the hope of delivering back-running transactions to the sequencer before others can do soย [Another approach is to create many copies of nodes to which the rollup sequencer feeds the information (in a uniform random order) about the sequence, increasing the chances that some of the nodes get the feed faster.].
Parties with more resources have an advantage in this competition because they can invest in targeting specific data centers to reduce latency.
This is not only unfair but also inefficient, as the party with the highest valuation often does not get its transaction in first.
It would be better if some of these resources could be captured by the protocol. Inspired by these considerations, in this paper we present a hybrid policy that maintains the desirable properties of FCFS (such as low latency and transparency) while also reducing the prevalence of โlatency racingโ among transaction submitters. Our algorithm creates a score from timestamps and bids and orders transactions based on the score.
The approach balances fairness and efficiency in transaction execution, as described below.
The ordering policy aims to extract a fair value from users that have an edge in latency, by moving expenditure from investment in latency improvement into bidding.
More concretely, we develop a policy with the following properties:
* Secret mempool: Submitted transactions are not visible to anyone other than the sequencer until they are included in the published sequence. This prevents parties from front-running or sandwiching others' transactions. The sequencer is trusted not to engage in such tactics.
* Low latency: Every transaction that arrives at the sequencer is emitted into the sequence within some time-bound.
* Independence of irrelevant transactions: different latency races should not interact and affect each other.
* Stable after decentralization: the goals of the policy can still be satisfied, after the sequencer is decentralized, assuming a suitable supermajority of sequencers apply the policy honestly.
Over short time intervals, transactions with a higher bid (per-gas priority fee) and a lower timestamp should be ordered first. This is intended to induce parties who are contending for fast placement to do so by increasing their bid rather than expending resources to reduce their latency in delivering transactions to the sequencer.
By design, it is guaranteed that no matter how much late-arriving senders bid on their transactions, they cannot outbid any transaction that arrived g time (or more) earlier.
The proposed algorithm is simple and fair.
The simplicity stems from the fact that the proposal is almost the same as the first-price all-pay auction, and it satisfies all economic properties of those auctions. Bidding more is incentive compatible, as the more one bids, the earlier one's transaction is scheduled. The scheme also motivates senders to submit transactions earlier, but if the price of bidding is not too high, one prefers to trade money for time. It is fair in that it gives a chance to users with a lower budget or no opportunity to decrease latency. The algorithm rules out situations where a transaction with a low bid is executed earlier than another transaction that reached the sequencer right after it but bids a very high value. It also ensures that most (unsophisticated) transaction senders can safely bid zero and get their transaction included with at most g time delay.
With the proposed mechanism, players spend the same amount in total as they would if latency improvement were the only available technology. Such a shift in expenditure creates less waste from the protocol perspective. Investment shifted into bids can, for example, be used to lower regular user fees and to fund protocol development.
ยง.ยง Related Literature
Fair transaction ordering has recently attracted a lot of attention, seeย <cit.>,ย <cit.>,ย <cit.>, with a focus on how to aggregate the order of transactions seen by multiple different parties/sequencers fairly and efficiently.
The cost of latency in the context of high-frequency trading has been studied inย <cit.>. Decentralized sequencing of the protocol can be implemented using tools from Byzantine Fault Tolerant systems, for the original reference seeย <cit.>. Our approach is closest toย <cit.>,ย <cit.>, as we are interested in an asynchronous network setting. For the implementation version of the latter protocols, seeย <cit.>. For an improved asynchronous BFT implementation and a more recent review of the literature, seeย <cit.>. Latency reduction techniques for the players in the decentralized system is a topic ofย <cit.> andย <cit.>. Avoiding front-running attacks in the total order broadcasts is a topic ofย <cit.>. Cross-domain MEV extraction is discussed and documented inย <cit.>. Inย <cit.> the authors discuss transaction ordering rules that avoid sandwich attacks in the constant function automated markets.
ยง.ยง Organization of the paper
In Sectionย <ref>, we present the algorithm formally. In Sectionย <ref>, we analyze the algorithm from the economic perspective, using the tools from auction and contest theory. In Sectionย <ref>, we compare our proposal to a frequent batch auction approach. In Sectionย <ref>, we discuss the details of the sequencer committee. All proofs are deferred to appendixย <ref> unless stated otherwise.
ยง ALGORITHM
ย
Suppose there are n transactions in the g time interval, labeled with T_1,T_2,โฏ T_n, and sorted by increasing arrival time. Each transaction T_i is characterized by a pair of a timestamp or arrival time, denoted by t_i, and a bid, denoted by b_iโฅ 0. Formally, we view a transaction as a tuple of non-negative reals, T_i=(t_i,b_i)โโ^+รโ^+. We propose to choose the next transaction to post by choosing the transaction T_i that has the highest score s_i. The effect of bidding b_i is the following. The bidder (transaction sender) obtains
ฯ(b_i):=gb_i/(b_i+c)
additional score, where c is some constant. Then, the total score of a transaction T_i is calculated as:
s_i(b_i,t_i):=ฯ(b_i)-t_i.
In general, ฯ function should satisfy the following properties:
* ฯ(0)=0, a normalization, paying zero gives zero priority.
* ฯ'(x)>0, the priority is increasing in a bid, which gives the incentive to bid more for a higher priority.
* lim_xโโฯ(x) = g, guaranteeing that no transaction arriving g time (or more) after another transaction is posted earlier than the latter, while any time advantage of less than g can be outbid.
* ฯโ(x)<0, the priority is concave, equivalently meaning that the cost of producing the signal is convex. This is generally necessary to obtain the interior solution of the equilibrium condition, to prevent pooling, where many types bid the same amount.
We set ฯ as inย (<ref>), to be the simplest function satisfying the above properties. For the analysis in this paper, we assume that c=1.
The sequencer will release transactions in order of decreasing scores. A transaction with score s_i can be released at time g-s_i = t_i + (g-ฯ(b_i)). This is safe because a transaction arriving after that time cannot have a better score. Equivalently, we can say that an incoming transaction with bid b_i will be released by the sequencer after a delay of g-ฯ(b_i).
If transactions arrive at rate r, the space complexity of the algorithm is ฮ(r) and the computational cost per transaction is ฮ(log r), assuming pending transactions are stored in a priority queue, ranked by score.
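As an illustration, a minimal sketch (in Python) of the scoring rule and release logic described above follows; the constants, class and function names, and the use of a heap keyed by release time are illustrative choices rather than part of the specification.

import heapq

G = 0.5   # maximum time boost g, in seconds (an example value)
C = 1.0   # constant c in the boost function; the analysis in this paper assumes c = 1

def boost(bid):
    # phi(b) = g * b / (b + c): phi(0) = 0 and phi(b) -> g as the bid grows
    return G * bid / (bid + C)

def score(bid, arrival):
    # s = phi(b) - t; transactions are emitted in decreasing order of score
    return boost(bid) - arrival

class ScoringSequencer:
    def __init__(self):
        self.pending = []                            # min-heap keyed by release time

    def submit(self, tx_id, bid, arrival):
        # a transaction is released at t + (g - phi(b)) = g - s, when no later
        # arrival can still achieve a higher score
        release = arrival + (G - boost(bid))
        heapq.heappush(self.pending, (release, tx_id))

    def emit(self, now):
        released = []
        while self.pending and self.pending[0][0] <= now:
            released.append(heapq.heappop(self.pending)[1])
        return released

# example: a high bid arriving 0.1s later overtakes a zero-bid transaction
# seq = ScoringSequencer()
# seq.submit("tx1", bid=0.0, arrival=0.00)   # released at 0.50
# seq.submit("tx2", bid=5.0, arrival=0.10)   # released at about 0.18
# seq.emit(now=0.60)                         # -> ["tx2", "tx1"]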
ยง.ยง Score-Based Algorithms
In this section, we justify why a score-based algorithm is chosen, rather than any other algorithm from a wide range of possible algorithms for transaction ordering. In particular, we show that the independence of irrelevant transactions implies that a score-based algorithm needs to be chosen. The choice of a score based on timestamps and bids follows from the particular environment, where we see only a timestamp and a bid. However, using a score that aggregates any other kind of transaction information (e.g., transaction size in bytes, transaction type, etc.) relevant to a particular environment is implied by the independence axiom, independently of the context.
There is a set ๐ฏ of possible transactions. The set ๐ฏ can in principle be infinite and even uncountable (for example ๐ฏ=โ^+ if transactions can be identified by their arrival time or โ^+ if transactions can be identified by their timestamp).
An algorithm ๐ assigns to any finite set of transactions ๐ฏ'โ๐ฏ, a linear order ๐(๐ฏ') of ๐ฏ'. The ordering defines the following relations:
* ๐(๐ฏ', T')<๐(๐ฏ', Tโ), with T',Tโโ๐ฏ if ๐ places T' in front of Tโ.
* ๐(๐ฏ', T')โค๐(๐ฏ, Tโ), with T',Tโโ๐ฏ if ๐ places T' not later than Tโ.
We denote by ๐(๐ฏ') the ordered set of transactions and by ๐(๐ฏ',T_A) the position of transaction T_A in the ordered set of transactions ๐(๐ฏ').
We say that algorithm ๐ satisfies the independence of irrelevant transactions if for any pair of transactions T_A, T_B and any pair of finite sets ๐ฏ',๐ฏโโ๐ฏ, we have
๐({T_A,T_B}โช๐ฏ', T_A)<๐({T_A,T_B}โช๐ฏ', T_B) โ
๐({T_A,T_B}โช๐ฏโ, T_A)<๐({T_A,T_B}โช๐ฏโ, T_B).
Next, we define score-based algorithms.
A score is a function s:๐ฏโโ that assigns to each possible transaction Tโ๐ฏ a score S(T).
An algorithm ๐ is a score-based algorithm if it sorts transactions according to a score S, i.e. ๐(๐ฏ',T_1)<๐(๐ฏ',T_2) if and only if S(T_1)>S(T_2), for any ๐ฏ'โ๐ฏ and T_1,T_2โ๐ฏ'.
The following definition is based on similar considerations in the context of utility theoryย <cit.> and set theoryย <cit.>.
We say that a pair (๐,๐ฏ) satisfies Cantor's axiom if there is a countable set ๐ฏ'โ๐ฏ such that for any pair of transactions T_1,T_2โ๐ฏ there exists an instance of the algorithm in which a transaction in ๐ฏ' is ordered between T_1 and T_2, formally there is a finite ๐ฏโโ๐ฏ with T_1,T_2โ๐ฏโ and a T_3โ๐ฏ'โฉ๐ฏโ (possibly T_3=T_1 or T_3=T_2) such that
๐(๐ฏโ,T_1)โค๐(๐ฏโ,T_3)โค๐(๐ฏโ,T_2),
or
๐(๐ฏโ,T_2)โค๐(๐ฏโ,T_3)โค๐(๐ฏโ,T_1).
We have the following correspondence between the independence axiom and score-based algorithms.
* If ๐ฏ is countable, then an algorithm satisfies the independence of irrelevant transactions if and only if it is a score-based algorithm.
* If ๐ฏ is uncountable and (๐,๐ฏ) satisfies Cantor's axiom, then the algorithm ๐ satisfies the independence of irrelevant transactions if and only if it is a score-based algorithm.
The above result extends to the case where the algorithm creates a weak ordering (which can be made strict through a tie-breaking procedure) rather than a strict ordering of transactions. In that case Definitions 2 and 3 are adapted to weak orders; we get a score that might assign the same value to two different transactions. The relaxation to weak orders is useful for the case that the set of transactions is uncountable and not a subset of the real numbers (e.g.ย if ๐ฏ=โ^2_+). In that case, the Cantor axiom is impossible to satisfy for strict orders but satisfiable for weak orders.
Note that in our context assuming ๐ฏ is countable or even finite is safe, as there is a smallest time increment for a timestamp and a smallest bid increment.
Moreover, the algorithm considers transactions in a finite time interval and bids are upper-bounded by the maximum number of tokens in the system. However, for the subsequent economic analysis, it will be more convenient to work with the continuum where differences in time stamps and bids can be arbitrarily small.
Having proven that score-based algorithms are essentially the only ones satisfying the independence of irrelevant alternatives, we turn to selecting the most natural one among them. FCFS is the scoring function that corresponds to scoring transactions by their timestamp only while scoring transactions only by bids corresponds to the first-price auction solution. The scoring function fromย (<ref>) corresponds to the simplest mixture of these two given a constraint that no transactions should be able to outbid a time difference g, as already argued above.
ยง ECONOMIC ANALYSIS
ย
The users, from now on referred to as players, need to decide whether to acquire a technology that can send their transaction to the sequencer t time after some arbitrage opportunity has arisen. We will analyze a simple economic model of this decision problem.
Assume that it costs user i an amount c_i(t) to send a transaction t time after the arbitrage. The only requirement on c_i(t), for now, is that it is decreasing in t. When the arbitrage opportunity arises, player i has a valuation v_i for having its transaction included for execution first among those transactions contending for the same arbitrage opportunity. Let us assume there are 2 players with valuations v_1 and v_2, distributed according to the distribution functions F_1 and F_2; that is, the probability that the valuation of player i is less than or equal to x equals F_i(x).
For each valuation v, the player may choose a latency level. We can model this as a function t: V โโ, such that t_i(v) is the time chosen by player i with valuation v. For simplicity, assume that the cost functions and value distributions are the same: c_i(t)=c(t) and F_i=F. Throughout the paper, whenever final numerical values are derived, we assume that F:[0,1]โ[0,1] is the uniform distribution with F(x)=x for xโ [0,1]. Obtaining numerical values for other distribution functions is very similar to the uniform case, but we choose the uniform distribution for simplicity of exposition; most of the computations are carried out for general distribution functions.
In the following, we consider two different assumptions regarding the investment in latency improvement. In the first model, we assume that the players invest in latency in advance: they acquire or rent servers close to some sequencer of the protocol, run their nodes constantly, and improve infrastructure. In the second, we assume that the players are able to invest in latency after they learn their valuation of the arbitrage. This corresponds to the case where the arbitrage opportunity itself takes some time to be realizedย [An example of such an opportunity is a 12-second delay on the Ethereum network for a transaction to be scheduled.]. In this case, the transaction sender can schedule its transaction through a third-party service, which guarantees delivery of the transaction within some time interval once the arbitrage opportunity is realized. In both cases, bidding is naturally assumed to be an interim decision, and this is in fact one of its biggest advantages, as the valuation has already been learned.
ยง.ยง Ex-ante latency investment
A natural assumption in the model is that players learn their valuations only after they have invested in latency. If players can only compete through latency, the interaction between them becomes a static game. We study equilibrium solutions of these games. A similar setting is considered inย <cit.>. The results obtained in the following two subsections are concrete cases of folk results in microeconomic theory; however, we include their proofs for completeness.
ยง.ยง.ยง Only latency investment
Let x_i be the amount invested in latency by player i (so that he obtains a delay of t_i(x_i)).
Let V_i denote the valuation random variable of player i.
Then, player i has an ex-ante payoff of:
E[V_i]-x_i,
if he invests strictly more than the other player;
(1/2)E[V_i]-x_i,
if he invests an equal amount (assuming random tie-breaking), and
-x_i,
otherwise.
First, we note that the game, in which each player's strategy set is โ^+, has no pure strategy Nash equilibrium: a simple case distinction exhibits a profitable deviation in each case. Next, we focus on the mixed equilibrium solution and obtain the following result.
There is a symmetric equilibrium in mixed strategies where each player i chooses x_i uniformly at random on the interval (0,E[V]).
By construction, the payoff of player j of playing x_jโค E[V_j] against the uniform strategy on (0,E[v_i]) is
F(x_j)E[V_j]-x_j=(x_j/E[V_i])E[V_j]-x_j=0.
Choosing a strategy x_j>E[V_j] gives a negative pay-off. Therefore, each 0โค x_jโค E[V_j] is the best response of player j, and mixing uniformly among them is also the best response.
The above-described equilibrium is unique up to a change of strategy on a null set and in any mixed equilibrium, both players obtain the same payoffs as in this equilibrium. Note that the result is independent of the latency cost function. The only property used is that if a player invests more than the other player in the latency technology, its transaction is scheduled earlier.
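For intuition, the indifference property underlying this equilibrium can be checked with a small Monte Carlo sketch (valuations uniform on [0,1], so E[V] = 1/2; the function name and number of trials are arbitrary):

import random

def expected_payoff(x, ev=0.5, trials=200_000):
    """Average payoff of investing x against an opponent mixing uniformly on (0, E[V])."""
    total = 0.0
    for _ in range(trials):
        v = random.random()                  # own valuation, learned after investing
        x_opp = random.uniform(0.0, ev)      # opponent's mixed strategy
        total += (v if x > x_opp else 0.0) - x
    return total / trials

# any x in (0, E[V]] should give an average payoff close to zero, e.g.
# expected_payoff(0.1) and expected_payoff(0.4) are both approximately 0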
ยง.ยง.ยง Budget constraints
In this section, we assume that players do not have access to any amount of money they need to invest to improve their latency, but rather they are constrained by a budget. Let B_i denote the budget of player i, meaning that player i cannot spend more than B_i. We consider an asymmetric case where one (weak) player has a budget B_1<E[v_i] and the other (strong) player has a larger budget with B_2>B_1. First, note that similar to the previous section with unlimited access to money, there is no pure strategy Nash equilibrium. Therefore, we switch to mixed strategy equilibrium. Let F_i denote the probability distribution of playing different strategies.
There exists a mixed Nash equilibrium solution in the game in which the weak player receives a payoff of 0 and the strong player receives a payoff of E[V_i]-B_1.
The following strategy profile in which the first player plays according to the following (mixed) strategy:
F_1(x)=
x/E[v_i] + (E[v_i]-B_1)/E[v_i], xโ (0,B_1],
(E[v_i]-B_1)/E[v_i], x = 0,
1, x>B_1,
the second player plays according to
F_2(x)=
x/E[v_i], xโ [0,B_1),
1, xโฅ B_1,
is a mixed strategy equilibrium. The first, weak player obtains an expected payoff 0 for any choice of 0โค x_1< B_1. The second, strong player obtains an expected payoff of E[V_i]-B_1 for any choice 0< x_2โค B_1. Choosing x_2>B_1 is wasteful for the second player and will not occur in equilibrium. Thus, both players are indifferent between all pure strategies in support of F_1 resp. F_2 and for player 2 choosing an action outside of the support of F_2 is dominated. The mixed strategies form a Nash equilibrium.
Similarly to the unconstrained case, the above-described equilibrium is unique up to a change of strategy on a null set, and in any mixed equilibrium both players obtain the same payoffs as in this equilibrium. Also similarly to the previous section, the result is independent of the latency cost function: the only property needed to derive it is that if a player invests more than the other player in the latency technology, its transaction is faster (has a lower timestamp).
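The stated payoffs can likewise be checked by simulation; the sketch below samples from F_1 and F_2 as defined above, with uniform valuations on [0,1] (so E[V] = 0.5) and an illustrative budget B_1 = 0.2:

import random

def sample_weak(ev, b1):
    # F_1: an atom of mass (E[V]-B_1)/E[V] at 0, plus density 1/E[V] on (0, B_1]
    if random.random() < (ev - b1) / ev:
        return 0.0
    return random.uniform(0.0, b1)

def sample_strong(ev, b1):
    # F_2: density 1/E[V] on [0, B_1), plus an atom of mass 1 - B_1/E[V] at B_1
    if random.random() < b1 / ev:
        return random.uniform(0.0, b1)
    return b1

def average_payoffs(ev=0.5, b1=0.2, trials=200_000):
    p_weak = p_strong = 0.0
    for _ in range(trials):
        x1, x2 = sample_weak(ev, b1), sample_strong(ev, b1)
        v1, v2 = random.random(), random.random()
        weak_wins = x1 > x2                      # ties have probability zero here
        p_weak += (v1 if weak_wins else 0.0) - x1
        p_strong += (0.0 if weak_wins else v2) - x2
    # expected: roughly (0, E[V] - B_1) = (0, 0.3)
    return p_weak / trials, p_strong / trials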
ยง.ยง.ยง Ex-ante latency and interim bidding
Next, assume investment in latency happens before players learn their valuations but after learning their valuation they can use the bidding technology to improve the score.
We consider a version where players learn the other players' investment decisions before bidding. This is a reasonable assumption if we assume that players play a game repeatedly and observe latency levels of each other.
In the following let x=(x_1,x_2) be the latency investment levels chosen by the two bidders and let ฮ:=t_2(x_2)-t_1(x_1) be the corresponding difference in latency. W.l.o.g. assume ฮโฅ0. First, we consider the case that ฮ=0:
There is a completely separating equilibrium of the bidding game when both bidders have made the same ex-ante investment.
Next, we consider the case that ฮโ 0.
For the case of different ex-ante investment we get partially separating equilibria where bidders do not bid for low valuations and bid for high valuations. The bidding strategies are asymmetric in general. However, for sufficiently large g the equilibrium becomes approximately symmetric and approximately efficient. See Figureย <ref> for a graphical illustration.
There is an equilibrium of the bidding game which is separating conditional on bidding: there is a threshold โ(ฮ/(g-ฮ)) such that a bidder does not bid if his valuation is below the threshold and bids above the threshold. Conditional on bidding, the high latency bidder i produces a higher signal than the low latency bidder j for equal valuations: ฯ_i(v)-t_i>ฯ_j(v)-t_j, for v>โ(ฮ/(g-ฮ)).
The equilibrium analysis in Propositionsย <ref> andย <ref> indicates how efficient our transaction ordering policy is as a function of the latency investment of bidders.
If bidders have the same latency, we have a standard all-pay auction, which yields a fully efficient outcome. If there is a difference in latency, low-valuation bidders do not bid, while for high-valuation bidders approximately equal signals are produced for equal valuations. Conditional on entry, the low-latency bidder underbids and the high-latency bidder overbids relative to the standard all-pay strategies. Efficiency depends on the latency difference and the parameter g. If g is chosen sufficiently large, the auction is approximately efficient; a g that is too low can be detected by low bidding activity. Hence our transaction ordering policy can strike a balance between fairness, low latency, and efficiency if properly parameterized.
ยง.ยง Ex-Post latency and bid
In this section, we assume that a decision on latency investment can be taken after the valuation is observed. First, we start with only the latency investment decision.
The expected utility of player i is equal to:
Pr[t(v_i)<t(v_j)]v_i-c(t(v_i)),
where j denotes the other player.
We can look at this from a dual perspective: by v(t) we denote the inverse of t(v). This is the so-called Revelation Principle: instead of reporting some function of the type, we report the type directly. Then, the optimization problem becomes:
max_v Pr[vโฅ v_2]v_1-c(t(v)).
By replacing the probability with F(v), we get that it is equivalent to
max_v F(v)v_1-c(t(v)).
By the first order condition, we get:
v_1f(v)-c'(t(v))t'(v)|_v=v_1 = 0,
where f is a density function of the valuation distribution F. By plugging in v=v_1, it is equal to:
v_1f(v_1)-c'(t(v_1))t'(v_1).
For the uniform distribution and cost function c=1/t, first order condition gives the following differential equation:
v_1+t'(v_1)/t^2(v_1)=0.
Solving this equation gives t(v)=2/(c_1+v^2). By the boundary condition that the 0 valuation type should wait infinitely long (or, equivalently, pay 0 for latency), we obtain the value of the constant in the solution: c_1=0. Therefore, the cost incurred is equal to 1/t=v^2/2. On average each player pays:
โซ_0^1 (v^2/2) f(v) dv = v^3/6 |_0^1 = 1/6,
for better latency, before learning their types.
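A small numerical sketch of this solution: with t(v) = 2/v^2, the report maximizing v*v_1 - v^2/2 is v = v_1, and the average latency expenditure is 1/6 (the grid size and the test valuation 0.7 are arbitrary choices):

def check_ex_post_latency(grid=1000, v1=0.7):
    # best response: reporting one's true valuation maximizes v*v1 - v^2/2
    best_report = max((k / grid for k in range(1, grid + 1)),
                      key=lambda v: v * v1 - v ** 2 / 2)
    # average spend: integral of v^2/2 over v ~ U[0,1], by the midpoint rule
    avg_spend = sum(((k + 0.5) / grid) ** 2 / 2 for k in range(grid)) / grid
    return best_report, avg_spend     # approximately (0.7, 1/6)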
The cost of producing the score s = gm/(m+1) - t is:
c(s) := m + 1/t,
that is, the bid m plus the latency cost 1/t.
We decompose total expenditure into 2 parts, for bidding and for time, by representing m and c(t(v)) as functions of v and taking integrals:
b(g):=โซ_0^1m(v)f(v)dv and โซ_0^11/t(v)f(v)dv.
The limit of b(g) when g tends to infinity is equal to 1/6.
b(g) is an increasing function in g, for g large enough.
The proposition implies that by taking large enough g, the system extracts almost all value invested in the latency through bidding.
Starting from some threshold value on g, extraction increases with increasing g.
We can verify whether the constructed equilibrium is unique by checking the conditions given inย <cit.>.
In this example, we calculate a few values of b(g). In particular, b(1000)โ 0.1294, meaning a player pays approximately 77% of the total expenditure in bids, and b(10000)โ 0.1537, meaning a player pays approximately 92% of the total expenditure in bids.
Note that in the proof of the propositionย <ref>, the total investment in both latency and bidding, c(v), is the same value v^2/2, as in the case of only investing in the latency. We show that this is not a coincidence.
In general, assume that there is an arbitrary signaling technology described by an increasing, differentiable cost function C(s).
The following result shows the revenue equivalence of ex-post bidding:
Both players spend the same amount on average for any cost function C.
The amount spent depends only on the value belief distribution function.
ยง.ยง n players
In this section, we consider identical n players with the same valuation distribution as in the 2 players case. The optimization problem is the following:
max_v Pr[vโฅmax{v_2,โฏ,v_n}]v_1-c(t(v)),
similarly toย (<ref>).
By replacing the probability with cumulative distribution, this is equivalent to:
max_v F_n-1(v)v_1-c(t(v)),
where F_n-1(x) is a cumulative distribution function of the random variable X:=max{X_1,โฏ, X_n-1}. By independence we have
F_n-1(x)=F(x)^n-1.
The first-order condition and plugging in v=v_1 gives the following differential equation, similar toย (<ref>):
f_n-1(v_1)v_1-c'(t(v_1))t'(v_1) = 0,
where f_n-1(v_1) = (n-1)v_1^n-2 is a density function of maximum among n-1 uniformly distributed random variables. The differential equation w.r.t. t(v) becomes:
(n-1)v_1^n-1+t'(v_1)/t^2(v_1)=0.
Solving the equation gives t(v)=n/(c+(n-1)v^n). The same boundary condition ensures that c=0, that is, t(v)=n/((n-1)v^n). Each player pays:
((n-1)/n)โซ_0^1 v^n dv = ((n-1)/n) v^(n+1)/(n+1) |_0^1 = (n-1)/(n(n+1)).
Together, the players pay (n-1)/(n+1), which converges to 1 as n tends to infinity.
Note that the first place in the transaction order is given to the maximum-value player. By order statistics, the average valuation of the maximum-value player is n/(n+1). This value also converges to 1 as n tends to infinity.
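These quantities are easy to verify numerically; the sketch below integrates the per-player latency cost (n-1)v^n/n over v drawn uniformly from [0,1] by a midpoint rule (the grid size is arbitrary):

def n_player_quantities(n, grid=100_000):
    per_player = sum((n - 1) * ((k + 0.5) / grid) ** n / n
                     for k in range(grid)) / grid
    total = n * per_player                        # should approach (n-1)/(n+1)
    winner_value = n / (n + 1)                    # expected max of n uniform valuations
    return per_player, total, winner_value

# e.g. n_player_quantities(5) is approximately (4/30, 4/6, 5/6)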
The analysis is the same as in the case of 2 players, until the differential equation that solves score function s.
Instead ofย (<ref>), for n players we solve:
(n-1)vv^n-1-c'(s)s'(v)=0.
For types v with n/((n-1)v^2) โฅโ(g), who only use latency, we have the same solution as before:
s(v)=-n/((n-1)v^2).
The marginal type investing in bidding is:
u=โ(n/((n-1)โ(g))).
Plugging in functional forms of c and s inย (<ref>) gives the same limit results as in propositionย <ref>.
Next, we show a revenue equivalence for n players. The argument is similar to 2 players' case.
Assume that there is an arbitrary signaling technology described by an increasing, differentiable cost function C(s).
All n players spend the same amount on average for any cost function C.
ยง BENCHMARK COMPARISON AND EVALUATION
ย
In this section, we compare the continuous-time bidding proposal to what we call block-to-block auctions. In these auctions, all transactions sent in fixed time intervals of length g are collected and sorted in decreasing order of their bids. One advantage of such an auction format is its simplicity. It also technically allows canceling a transaction if its position in the block is not desirable, as is done, for example, by Flashbots on the Ethereum blockchain. However, in the rollup setting, transaction ordering is decided at the protocol level, where canceling after scheduling is not allowed. Nevertheless, it is interesting to compare with the batch auction, where each separate race corresponds to a first-price auction in which the winner pays its bid and all others can revert their transactions.
Generally speaking, a first-price auction where only the winning bidder pays and a first-price all-pay auction are payoff equivalent; seeย <cit.>.
In our setting, the following result holds:
The expected payoff of the bidding game with an all-pay structure is equal to the expected payoff in the bidding game where the highest bidder wins and only the winner pays its bid, independently of the valuation distributions.
We do not give the proof here as it is implied by a result inย <cit.>.
Next, we examine the effect of lower latency in these two policies. As we already stated in the introduction, a block-to-block approach would create situations where one transaction arrives at the end of the block with a low bid, and another transaction with a much higher bid arrives right after this block. Then, the low bid transaction is always scheduled in front of the high bid transaction. We describe when such a situation might arise and why low latency is more important in a block-to-block approach. For simplicity, assume there are only two parties; generalization to more parties is straightforward. Suppose the first party (denoted by A) can reach the sequencer in s_1 time, and the second party (denoted by B) can reach it in s_2 time, with s_1<s_2. Then, A can wait until g-s_1 seconds pass since the beginning of a new block and send its transaction at exactly g-s_1, while B has to send its transaction by the time g-s_2 seconds pass in order to be included in this block. That is, in a fraction (g-s_1-(g-s_2))/g = (s_2-s_1)/g of cases (those in the time interval (g-s_2,g-s_1)), B has no chance to win a single race over A, even if it values the arbitrage much more than A. On Ethereum, the latency advantage is not a big issue, as A would only have an advantage in (s_2-s_1)/12 of the cases. That is, latency is 12/g times more important in our case (equal to 24 for g=0.5 seconds). It is easy to see that the continuous-time bidding proposal does not suffer from this.
Random stopping to determine the block endpoint also does not help to reduce incentives for waiting, unless it is uniformly random. In that case, it is more similar to a continuous bidding proposal than to a block-to-block auction.
Assuming uniform random arrivals of transactions, the average delay of a transaction in a block-to-block auction is equal to g/2 and is independent of bids. In the time-bidding approach, on the other hand, the average delay also depends on the bids. In the worst case, where all bids are equal to 0, all transactions incur a delay of g, as the algorithm needs to wait at least g time after each transaction before confirming it. For non-zero bids, such waiting times are lower and also depend on the bids of other transactions.
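The two average delays can be illustrated with a small simulation; uniform arrivals over an arbitrary horizon are assumed, and the scoring rule is evaluated in the worst case of all-zero bids:

import random

def average_delays(g=0.5, horizon=1000.0, n=100_000):
    arrivals = [random.uniform(0.0, horizon) for _ in range(n)]
    # block-to-block: wait until the end of the current length-g block
    block_delay = sum(g - (t % g) for t in arrivals) / n      # approximately g/2
    # scoring rule with zero bids: every transaction waits the full g
    score_delay = g
    return block_delay, score_delay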
The effect of lowering latency in the block-to-block approach is amplified in the case of more than one sequencer. The implementation can be done in two ways. First, for each transaction, a consensus timestamp is obtained, and if this timestamp belongs to the current block, the transaction belongs to the current block. Second, for each transaction, we identify the blocks assigned to it by all sequencers, and the transaction is assigned to the median block. In both cases, if the transaction sender lowers the latency to the sequencer which has median โdistanceโ from it, it improves its chances to capture more arbitrage opportunities.
ยง DECENTRALIZED IMPLEMENTATION
ย
In this section, we discuss how to implement the algorithm with a decentralized sequencer committee.
Assume a committee of N sequencers for some odd N, and assume a minority F < N/3 of the sequencers might be malicious.
Because fewer than 1/3 of the sequencers are malicious, we can implement an atomic broadcast protocol among them, which we will use as a building block in the sequencing protocol. Atomic broadcast guarantees that any message broadcast by an honest sequencer will be received by all honest sequencers, and all sequencers will receive the same sequence of messages in the same global order. This broadcast primitive can be implemented on top of a BFT consensus protocol.
References to โbroadcastingโ or the โbroadcast channelโ below refer to this atomic broadcast facility.
We assume that messages on the broadcast channel are authenticated. If message authentication is not provided by the broadcast channel implementation, sequencers can sign the messages they broadcast.
Replicated state machine model
The protocol uses a replicated state machine model. This means that every (honest) sequencer will have a sequencer state. The starting sequencer state is known, and a deterministic sequencer state transition function specifies how the sequencer state should change on the arrival of a broadcast message. The sequencer state transition function is deterministic, so every (honest) sequencer's state should evolve through the same sequence of states.
In addition, we assume there is a canonical method for hashing the sequencer state to a single digest that is a binding commitment to the full sequencer state. This will allow a sequencer who knows the digest to verify that a claimed sequencer state is correct.
The sequencer state
As described in more detail below, the sequencer state consists of:
* for each sequencer, the maximum local timestamp that was ever broadcast by that sequencer,
* for every transaction T that has not been included in a sequencer block:
* all local timestamps that have been broadcast for T,
* all valid threshold decryption shares that have been broadcast for T,
* the contents of a candidate sequencer block including T (if there is one), and all signatures that have been broadcast on the candidate block,
* a Merkle accumulator over all sequencer blocks that have been included in a sequencer block.
For purposes of this definition, โincluded in a sequencer block" means that the sequencer block has been accepted as canonical by the underlying base chain.
An implementation may choose for convenience to track other information derived from the sequencer state, but that additional information is not formally part of the sequencer state.
The sequencer chain
The goal of the sequencing protocol is to produce the sequencer chain, a special-purpose blockchain that records the sequence of transactions. The sequencer chain will be the input to the execution phase of the overall rollup protocol.
The sequencer chain replaces the โsequencer feed" that is used in the centralized sequencer protocol.
Each block in the sequencer chain includes exactly one transaction.
The sequencers will also package sub-sequences of their chain into sequencer batches that will be posted as L1 calldata. These will be compressed for efficiency.
Each block in the sequencer chain contains:
* a block height, which is zero for the genesis block, otherwise one more than the height of the predecessor block,
* the total number of delayed inbox messages consumed in the history of the sequencer chain, up to and including this block,
* a transaction,
* an Ethereum-compatible timestamp for the transaction (in seconds since the Unix epoch),
* a Merkle root over the hashes of the (block height, number of delayed inbox messages, transaction, timestamp) tuples in every block of the chain, up to and including the current block.
Valid blocks will be signed by a quorum of the sequencers, as described below. This signature will often be attached to the block but is not part of the block.
Note that given the Merkle root that is contained in a block, it is possible to prove the contents of any previous block, using standard Merkle proof methods, where the proof size and checking time are proportional to the logarithm of the block height.
Similarly, it is possible to prove in logarithmic space and time that the Merkle roots in two blocks are consistent, in the sense that one is committing to a prefix of the blocks that the other is committing to.
Background: Timestamps
The protocol relies on timestamps. A timestamp has the form (t, id, seq) where t is a time, id is the identity of the sequencer that assigned the timestamp, and seq is a sequence number that is used as a tiebreaker if two timestamps have equal t and id portions. We define a total order over timestamps by ordering based on the time, then on the identifier if times are equal, then on the sequence number if times and identifiers are equal. We assume that timestamps are given on a millisecond granularity or finer.
We require that honest sequencers assign timestamps that are globally unique (i.e., locally unique and containing a correct id), and that timestamps created by the same sequencer are increasing. As described below, honest sequencers will ignore any asserted timestamp that violates these requirements.
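This total order is just lexicographic comparison of the (t, id, seq) triple; a minimal sketch:

from dataclasses import dataclass

@dataclass(order=True, frozen=True)
class Timestamp:
    t: int      # time (e.g., milliseconds since the Unix epoch)
    id: int     # identity of the assigning sequencer, first tiebreaker
    seq: int    # per-sequencer sequence number, second tiebreaker

# order=True compares field by field in declaration order, which is exactly the
# ordering described above: by time, then by identifier, then by sequence number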
Background: Threshold decryption
Sequencers will hold key shares in a threshold decryption scheme, requiring F+1 shares to decrypt. We write Enc(x) for the encryption of x under this threshold scheme.
Background: Epochs
The protocol includes a concept of epochs, which are used only among the sequencers, and are unrelated to any โepochs" used in other protocols. Epochs have increasing numbers, but the epoch numbers are not necessarily sequential (and usually will not be). The protocol starts in epoch 0.
Every broadcast message is tagged with the epoch in which it was sent. If a sequencer receives a broadcast message tagged with an epoch different from (that sequencer's view of) the current epoch, that message is ignored. (The effect of this will be to ignore messages from older epochs. A correct message can never be in an epoch after the sequencer's view of the current epoch, because if the message is correct it must have come after a valid message that declared the start of its epoch. So a message claiming to come from a future epoch must be invalid.)
Epochs are needed because the sequencer chain might reorg in situations where the sequencer committee is not creating new sequencer blocks correctly. The details are specified below, but the gist is that if the sequencers are not making progress, users can submit force-include transactions to the L1, to force the creation of a sequencer block without the sequencers' participation. After a force-include, the sequencing protocol will start a new epoch, and the sequencers will execute a recovery procedure to get back into a good state, as detailed below.
The epoch mechanism allows the protocol to ignore any messages that were โin flight" at the time of reorg and allows sequencers to recover to a good state after a reorg before resuming normal operation of the protocol so that sequencer blocks and transactions are not dropped as a result of a reorg.
Submitting transactions
To submit a transaction T, a user (or a node on the user's behalf) sends (P, Enc(T)) to every sequencer, where P is equal to the priority fee specified by (the plaintext of) T. If P does not match the plaintext, the transaction will eventually be discarded, as described below.
Delayed inbox messages
L1 contracts can put transactions into a delayed inbox queue, which is managed by a trusted L1 contract.
When the insertion of a message into the delayed inbox has L1 finality, every honest sequencer will act as if it received that transaction from a user, at the time when the sequencer first noticed the L1 finality.
Messages that arrive this way are not encrypted, but we will still use the (Enc(T), P) notation with them, to avoid awkward notation. Decrypting an unencrypted transaction is a no-op, and the priority fee of a delayed inbox transaction will always be zero.ย [A zero priority fee makes sense because the transactions are already delayed. Not encrypting them is necessary because they might need to be force-included, which would not be possible if they needed to be decrypted by the sequencer committee.].
Timestamping
Define the transaction hash H_j = Hash(Enc(T_j) || P_j), that is, a hash of the transaction and priority fee as submitted by the user.
On receiving a transaction j = (Enc(T_j), P_j), sequencer i assigns its local timestamp t_i, H_j to the transaction and broadcasts (epoch, โlocal timestamp", i, Enc(T_j), P_j, t_i, H_j). Here โreceivingโ a transaction means either receiving the transaction directly from a user or seeing another sequencer's local timestamp broadcast of the transaction on the broadcast channel.
Consensus timestamp
Every transaction is assigned a consensus timestamp by the protocol, based on the local timestamps on that transaction that are broadcast by sequencers. Because the consensus timestamp algorithm is deterministic, and depends only on the sequence of local timestamp broadcasts received on the broadcast channel, all honest sequencers will derive the same consensus timestamp for a transaction.
To derive consensus timestamps, the sequencer state includes the following information:
* for each sequencer i, the maximum local timestamp m_i ever assigned by that sequencer,
* for each sequencer i and each transaction hash H_j whose consensus timestamp is not yet known, the local timestamp t_i, H_j assigned by i to H_j, if there is one,
* a map allowing to recover (Enc(T_j), P_j) if given H_j, for any transaction that has not yet been decrypted,
When a local timestamp message containing t_i, H_j arrives on the broadcast channel, giving a time at which sequencer i claims to have seen the transaction with hash H_j, an honest sequencer does the following:
* if t_i, H_jโค m_i, ignore the message and do nothing (because the sequencer assigned an out-of-order timestamp)
* otherwise, set m_i to t_i, H_j and append t_i, H_j to the set of assigned local sequencer timestamps for message H_j.
A sequencer can conclude that transaction hash H_j has consensus timestamp ฯ_j when the following are all true:
* some sequencer m has assigned t_m, H_j = ฯ_j, and
* exactly (N-1)/2 sequencers have assigned t_i, H_j < ฯ_j, and
* either
* for every sequencer i that has not assigned a timestamp to H_j, m_i > ฯ_j, or
* there are at least N-F sequencers i for which m_i โฅฯ_j.
The rationale for this is that if the first arm of the โeither" portion is satisfied first, then the consensus timestamp is the median, over all sequencers, of the local timestamps assigned to the transaction. Although some sequencers might not yet have assigned a local timestamp, if the first arm is satisfied then the not-yet-assigned timestamps will all be larger than the median and therefore cannot affect the median. The second arm of the โeither" is needed to protect against malicious sequencers who never issue local timestamps or who only issue very low local timestamps. Such sequencers can maintain a very low m_i, thereby preventing the first arm from ever being satisfied. There cannot be more than F sequencers following such a strategy, so the second arm will always be satisfied eventually, when all honest sequencers have sufficiently large m_i, if not earlier.
Invariant: All honest sequencers will derive the same consensus timestamp for each transaction.
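The decision rule above can be sketched as a single function. Here local_ts maps each sequencer that has already assigned a local timestamp for the transaction to that timestamp, max_ts maps every sequencer i to its m_i, and timestamps are treated as totally ordered, globally unique values; the names are illustrative:

def consensus_timestamp(local_ts, max_ts, N, F):
    """Return the consensus timestamp if it can already be concluded, else None."""
    assigned = sorted(local_ts.values())
    k = (N - 1) // 2
    if len(assigned) < k + 1:
        return None                      # no candidate with exactly k smaller timestamps yet
    tau = assigned[k]                    # exactly k assigned timestamps are below tau (all distinct)
    missing = [i for i in max_ts if i not in local_ts]
    arm_a = all(max_ts[i] > tau for i in missing)        # every non-assigner is already past tau
    arm_b = sum(1 for i in max_ts if max_ts[i] >= tau) >= N - F
    return tau if (arm_a or arm_b) else None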
Adjusted consensus timestamp, including time boost
If transaction hash H_j has a consensus timestamp ฯ_j, then its adjusted consensus timestamp is ฯ'_j = ฯ_j - ฯ(P_j). The time boost function is ฯ(P) = gP/(P+c), where g is the maximum time boost (typically 0.5 seconds) and c is a suitably chosen constant. Note that for all P, 0 โคฯ(P) < g.
Transactions will be included in blocks in increasing order of adjusted consensus timestamp.
Invariant: All honest sequencers will derive the same adjusted consensus timestamp for each transaction.
Threshold decryption
If a transaction hash H_j has an adjusted consensus timestamp ฯ'_j, and for at least N-F sequencers i, m_i โฅฯ'_j+g, then all honest sequencers will broadcast their share d_ij of the decryption of Enc(T_j), as the broadcast message
(epoch, โdecryption share", i, H_j, d_ij, ฯ'_j).
If multiple transactions satisfy this condition at the same time, honest sequencers will broadcast those transactions' decryption shares in increasing order of adjusted consensus timestamp.
Invariant: The decryption shares broadcast by an honest sequencer will always be in the order of increasing adjusted consensus timestamp.
On receiving a decryption share message, an honest sequencer validates the message by checking whether the sequencer itself has already broadcast a decryption share on the same transaction, or would be willing to do so. If not, the decryption share message is ignored.
Decryption shares that have been received and validated are stored in the sequencer state.
After receiving F+1 valid decryption shares for a hash H_j, an honest sequencer will be able to decrypt and recover T_j.
Invariant: At an honest sequencer, transactions will be decrypted, and their plaintext recovered, in increasing order of adjusted consensus timestamp.
Validating the decrypted transaction
After decrypting a transaction, honest sequencers validate the transaction, by checking that:
* it is properly formatted and signed, and
* it includes a priority fee that equals the P_j that was originally attached to the encrypted transaction.
If validation fails, the transaction is discarded. Otherwise, a new sequencer chain block can be created.
Invariant: Validation will either succeed at all honest sequencers or fail at all honest sequencers.
Creating a new sequencer block
When a transaction has been successfully decrypted and validated, the next sequencer block can be created. All honest sequencers compute the block's contents and then they sign the block and broadcast their signature on the block.
To compute the block's timestamp, the honest sequencers take the adjusted consensus timestamp of the transaction, add g, then round down to the nearest second to produce an Ethereum-compatible timestamp.
Note that in order to compute the Merkle roots to include in blocks, each sequencer must keep a Merkle accumulator, requiring logarithmic space, and constant amortized cost per block to update the accumulator.
This Merkle accumulator is part of the sequencer state.
On receiving a broadcast message containing such a signature, an honest sequencer validates the message by verifying that the sequencer itself has already signed the same block, or would be willing to do so. If not, the signed block message is ignored.
Invariant: All honest sequencers will derive the same sequence of blocks, and the transactions in those blocks will have increasing adjusted consensus timestamps.
Once F+1 signatures have been validly broadcast on the block, the signatures can be aggregated to create a fully signed block.
Creating batches
In addition to creating the individual blocks of the sequencer chain, the sequencers create compressed batches to be posted to the L1 chain. [Any deterministic compression algorithm can be used.]
A batch consists of a header, which includes the block number, Merkle hash, delayed inbox messages count, and timestamp from the last sequencer block included in the batch, followed by a body which is built by serializing a sequence of sequencer blocks (serialized as described below), then compressing the result.
The most recent batch is part of the sequencer state.
The sequencers use a deterministic rule to decide when to terminate a batch. In particular, a batch will be terminated if either (a) the next transaction's adjusted consensus timestamp is more than some ฮ later than the adjusted consensus timestamp of the first transaction in the batch, or (b) adding another transaction to the batch would cause the batch to be larger (after compression) than some maximum size S_max. All honest sequencers will agree on when to end a batch.
Note that this requires the sequencers to use a compressor that is deterministic and the same for all sequencer instances.
Invariant: All honest sequencers will derive the same sequence of batches.
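A sketch of the batch termination rule follows; zlib is used only as a stand-in for whatever deterministic compressor the sequencers agree on, and delta and s_max stand for the parameters ฮ and S_max:

import zlib

def should_terminate(batch, next_block, delta, s_max):
    """batch: non-empty list of (adjusted_ts, payload_bytes) already in the batch.
    next_block: the candidate (adjusted_ts, payload_bytes).
    Returns True if the batch must be closed before adding next_block."""
    first_ts, _ = batch[0]
    next_ts, next_payload = next_block
    if next_ts > first_ts + delta:                       # rule (a): time window exceeded
        return True
    body = b"".join(p for _, p in batch) + next_payload
    if len(zlib.compress(body)) > s_max:                 # rule (b): compressed size too large
        return True
    return False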
The serialization of a sequencer block is:
* if the block's count of included delayed inbox messages is one greater than the predecessor block's count, then a 0 byte, followed by the timestamp, followed by the 8-byte big-endian encoding of the index of the delayed inbox message consumed by the block (which is the block's count of consumed delayed inbox messages, minus one),
* otherwise, a 1 byte, followed by the timestamp, followed by four-byte big-endian transaction size, followed by the bytes of the transaction included in the block.
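A minimal sketch of this serialization rule follows; the block attribute names and the 8-byte big-endian width of the timestamp are assumptions of the sketch (the document does not fix the timestamp encoding).

```python
import struct

def serialize_block(block, prev_delayed_count: int) -> bytes:
    """Serialize one sequencer block following the two cases above."""
    if block.delayed_count == prev_delayed_count + 1:
        # delayed-inbox case: 0 byte, timestamp, index of the consumed delayed message
        return (b"\x00"
                + struct.pack(">Q", block.timestamp)
                + struct.pack(">Q", block.delayed_count - 1))
    # ordinary case: 1 byte, timestamp, 4-byte big-endian size, transaction bytes
    return (b"\x01"
            + struct.pack(">Q", block.timestamp)
            + struct.pack(">I", len(block.tx_bytes))
            + block.tx_bytes)
```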
When a batch ends, the honest sequencers will announce the end of the batch and broadcast their signatures on the hash of the compressed batch. These signatures can be aggregated to yield a signed batch.
Posting batches to L1
Any batch poster can post the signed batch to L1. The L1 contract will remember the header of the most recent posted batch. It will reject a batch if the batch's header has a block number less than or equal to the block number in the most recent header, or if the batch's delayed inbox message count is less than the count in the most recent header.
Any party can act as a batch poster. This is safe because the L1 contract will reject an invalid or out-of-order batch.
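A sketch of the acceptance rule the L1 contract applies (header field names are illustrative):

```python
def accept_batch(last_header, new_header):
    """Reject stale or out-of-order batches, so anyone can safely act as a batch poster."""
    if new_header.block_number <= last_header.block_number:
        return False  # block number must strictly increase
    if new_header.delayed_count < last_header.delayed_count:
        return False  # delayed inbox message count must not decrease
    return True
```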
Forced includes from the delayed inbox
Normally the sequencers will cooperate to include transactions from the delayed inbox into the sequencer chain promptly. If this does not happen for some reason, there is a fallback mechanism to prevent starvation of the delayed inbox.
The most likely cause of this is that too many sequencers are either crashed or uncooperative so the sequencers are unable to create new sequencer chain blocks for an extended period.
If there is an unconsumed message in the delayed inbox that is older than some threshold time, anyone can submit an L1 transaction to "force-include" that transaction. The force-include transaction will create a new sequencer block (which is certified by the L1 inbox contract rather than being signed by the sequencers), which will be included in the on-chain record of the sequencer chain.
This may create a reorg in the sequencer chain. Sequencers must be able to cope with such reorgs.
Starting a new epoch
When any sequencer sees that a force-include transaction has occurred on L1, and that force-include transaction created a sequencer block number b which is greater than the sequencer's current epoch number, the sequencer broadcasts a message ("new epoch", b, txid), where txid is the L1 transaction ID of the force-include transaction.
When a sequencer sees a broadcasted new epoch message where b is greater than the sequencer's current epoch number, the sequencer verifies that the txid identifies an L1 force-include transaction that creates sequencer block b. The new epoch message is ignored if this verification fails [This validation of the txid is necessary to ensure that a sequencer chain reorg has actually occurred. Without the validation, any sequencer could cause a reorg by announcing a new epoch.].
If the verification succeeds, the sequencer starts a new epoch. This involves the following steps:
* pause the local timestamping of new incoming transactions (enqueueing any incoming transactions for later handling),
* pause processing of incoming broadcast messages (enqueueing incoming messages as needed so that none are missed),
* make a list of all sequencer blocks with block numbers ≥ b that this sequencer previously signed, sorted by increasing block number, but do not include any block whose transaction is the one that was force-included (call this the "orphaned block list"),
* make a list of any transactions timestamped locally by this sequencer but not yet included in any block signed by this sequencer, sorted by increasing local timestamp, but do not include the transaction that was force-included (call this the "orphaned transaction list"),
* forget all local timestamps previously observed from any sequencer (including this sequencer), but don't forget the maximum timestamps assigned by any sequencer in the past,
* set the epoch number to b,
* for each block in the orphaned block list, in order, create a new block including the same transaction, but with a timestamp equal to the timestamp of the force-included transaction, and broadcast a signature of the new block in the new epoch,
* wait until the local time is greater than the timestamp of the force-included message,
* reassign local timestamps to the transactions in the orphaned transaction list, in order, and broadcast those local timestamp assignments in the new epoch,
* resume processing of incoming broadcast messages, starting with any enqueued messages,
* resume processing of new incoming transactions, starting with any enqueued transactions,
* return to normal execution of the protocol in the new epoch.
The logic of this mechanism is that the sequencer chain has been re-orged, so some things need to change, but we don't want to drop any block or transaction that was "in flight" at the time of the reorg.
* All previously created blocks that were orphaned by the reorg (i.e., the orphaned block list) need to be recreated on the new branch. These are recreated in the same order that they would be in on the pre-reorg branch, and with new timestamps to maintain monotonicity.
* Any transactions that have been timestamped by this sequencer but not yet included in a block (i.e., the orphaned transaction list) need to get new local timestamps, to maintain the monotonicity of timestamps. These are timestamped in a way that maintains ordering among the timestamps issued by a single sequencer, although this does not guarantee that the consensus ordering will be the same as it would have been absent the reorg.
Synchronizing after crash recovery or cold start
Suppose a sequencer wakes up, having missed some or all of the broadcast messages. The sequencer might be recovering from a crash, or it might be a new member of the sequencer committee.
This section describes how such a sequencer can get back into synch, so it can start playing an active role in the protocol.
Such a sequencer can listen to broadcasts, but it cannot send any broadcast messages unless it knows that those messages are correct, which means that a fully synchronized sequencer would have sent the same messages.
While a sequencer is recovering, it will participate in the atomic broadcast protocol. It will also broadcast local timestamps on any new transactions it sees, but it will not send any other broadcasts. Once the recovering sequencer has determined that its state is in synch, it will return to normal operation, including sending all types of broadcasts.
Learning the correct state hash
A recovering sequencer broadcasts a "recover" message, including a unique nonce and a signature.
On seeing a recover message, each honest sequencer will snapshot its state, as of the time it received the recover message. It will then compute a hash digest of its state and broadcast a message (recovering sequencer id, nonce, state hash).
The recovering sequencer will watch for those messages. Once it finds F+1 of them that are identical, it knows that the state hash conveyed in those messages is correct. Now the recovering sequencer knows the state hash.
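A sketch of this quorum check (the report tuple layout mirrors the message described above and is otherwise illustrative):

```python
from collections import Counter

def correct_state_hash(reports, f: int):
    """Return the state hash once F+1 identical reports have been received; None until then.
    'reports' is an iterable of (recovering_sequencer_id, nonce, state_hash) tuples."""
    counts = Counter(state_hash for _, _, state_hash in reports)
    for state_hash, n in counts.items():
        if n >= f + 1:
            return state_hash
    return None
```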
Learning the correct state
Once the recovering sequencer knows the state hash, it can contact any semi-trusted party out-of-band to get a copy of the full state, as of the time of the recovering sequencer's "recover" message. This can be verified against the known-correct state hash.
Catching up to the current state
The recovering sequencer records all broadcast messages that it receives after receiving its own "recover" message. Once it has recovered the correct state as of its recover message, it then proceeds to replay all of the recorded messages against its state. During this replay, it does not send any broadcast messages, other than its own local timestamps.
Having replayed all of these messages, the sequencer is now fully recovered and its state should be synchronized with all other honest sequencers.
Recovery is now complete and the sequencer can resume normal operation.
ยง CONCLUSION
We designed a policy for transaction ordering that takes into account both transaction arrival times and bids. We showed that any ordering scheme that guarantees the independence of different latency races is a generalized scoring rule. A suitably designed mixture of timestamps and bids yields economic efficiency: transaction senders spend most of their resources on bidding rather than on latency improvement, and the collected fees can later be used by the protocol for improvement and development. The implementation guarantees that unsophisticated transaction senders can get their transactions executed within a short time interval even if they bid zero. We also discussed how the proposed ordering policy can be implemented by a decentralized committee of sequencers.
ยง OMMITTED PROOFS
It is immediate that a score-based algorithm satisfies the independence of irrelevant transactions.
For the opposite direction, we first prove the second part (ii). We define an order โบ over ๐ฏ where
T_1โบ T_2:โ๐({T_1,T_2},T_1)<๐({T_1,T_2},T_2).
Since ๐({T_1,T_2}) is a well-defined for any two transactions T_1,T_2โ๐ฏ, the order โบ is complete and anti-symmetric. By independence and since ๐({T_1,T_2,T_3}) is a well-defined order for any three transactions T_1,T_2,T_3โ๐ฏ we have
T_1โบ T_2โบ T_3
โ (๐({T_1,T_2},T_1)<๐({T_1,T_2},T_2) and ๐({T_2,T_3},T_2)<๐({T_2,T_3},T_3))
โ ๐({T_1,T_2,T_3},T_1)<๐({T_1,T_2,T_3},T_2)<๐({T_1,T_2,T_3},T_3)
โ ๐({T_1,T_3},T_1)<๐({T_1,T_3},T_3)
โ T_1โบ T_3
Therefore, โบ is transitive. We let T_1โผ T_2 iff T_1โบ T_2 or T_1=T_2.
The Cantor axiom and independence imply that there is a countable ๐ฏ'โ๐ฏ so that the order โบ satisfies that for any T_1,T_2โ๐ฏ there is a T_3โ๐ฏ' such that
T_1โบ T_2โ T_1โผ T_3โผ T_2
By Theorem 1.1 inย <cit.>, this, in turn, implies that there is a numerical representation of the order โบ which is a score S:๐ฏโโ such that for any two transactions T_1,T_2โ๐ฏ we have T_1โบ T_2 if and only if S(T_1)>S(T_2).
For the first part (i), note that the previous argument also works for a countable ๐ฏ and in that case we can choose ๐ฏ'= ๐ฏ where the Cantor axiom is now trivially satisfied.
We want to determine bidding signals ฯ_1(v_1,ฮ) and ฯ_2(v_2,ฮ), which are functions of valuations and the difference in latency. For a given ฮ denote the inverse of ฯ_1(ยท,ฮ) and ฯ_2(ยท,ฮ) by แนฝ_1(ยท,ฮ) and แนฝ_2(ยท,ฮ).
Then bidder 1 solves at the interim stage
max_ฯโฅ0Pr[ฯ-t_1(x_1)โฅฯ_2(v_2,x)-t_2(x_2)]v_1-ฯ/g-ฯ=F(แนฝ_2(ฯ+ฮ,ฮ))v_1-ฯ/g-ฯ,
We obtain the first order condition:
f(แนฝ_2(ฯ+ฮ,ฮ))v_1โแนฝ_2(ฯ+ฮ,ฮ)/โฯ=g/(g-ฯ)^2
For the uniform distribution, this simplifies to:
v_1โแนฝ_2(ฯ+ฮ,ฮ)/โฯ=g/(g-ฯ)^2.
Similarly, for bidder 2 we obtain
v_2โแนฝ_1(ฯ-ฮ,ฮ)/โฯ=g/(g-ฯ)^2.
The two equations give a system of differential equations that need to be solved for ฯ_1 and ฯ_2 or alternatively for แนฝ_1 and แนฝ_2.
Alternatively, we can write the system as:
แนฝ_1(ฯ,ฮ)โแนฝ_2(ฯ+ฮ,ฮ)/โฯ=g/(g-ฯ)^2.
แนฝ_2(ฯ,ฮ)โแนฝ_1(ฯ-ฮ,ฮ)/โฯ=g/(g-ฯ)^2.
The solution toย (<ref>) andย (<ref>) in case of equal investment (so that ฮ=0) and a symmetric equilibrium is given by the following formula:
แนฝ_1(ฯ,0) = แนฝ_2(ฯ,0) =โ(2โซ_0^ฯg/(g-ฯ)^2dฯ)=โ(2ฯ/g-ฯ).
We solve for the signal as a function of the valuation:
v^2=2ฯ/g - ฯโฯ = gv^2/2+v^2.
When x_1โ x_2, we can first sum upย (<ref>) andย (<ref>) to obtain a differential equation for the expected payoff v_1v_2:
d (v_1(ฯ) v_2(ฯ+ฮ))/dฯ = g/(g-ฯ)^2 + g/(g-ฯ-ฮ)^2.
Integrating both sides of the differential equation above gives the solution:
v_1(ฯ) v_2(ฯ+ฮ)= ฯ/(g-ฯ)+ฯ+ฮ/g-ฯ-ฮ+K.
To determine the constant we need to determine boundary conditions. For bidder 1, at the threshold where he is indifferent between bidding and not bidding, we have ฯ_1=0 and for bidder 2, at the threshold where he is indifferent between bidding and not bidding, he needs to overcome the handicap, we have ฯ_2=ฮ. At the threshold bidder 2 should make the same profit as from pooling,
v_1(0)v_2(ฮ)=ฮ/g-ฮโ K=0.
Combiningย (<ref>) andย (<ref>) we obtain a separable differential equation:
d v_1(ฯ,ฮ)/v_1(ฯ,ฮ) = dฯg/(g-ฯ-ฮ)^2(ฯ/g-ฯ+ฯ+ฮ/g-ฯ-ฮ)^-1.
Combiningย (<ref>) andย (<ref>) we obtain another separable differential equation:
d v_2(ฯ+ฮ,ฮ)/v_2(ฯ+ฮ,ฮ) = dฯg/(g-ฯ)^2(ฯ/g-ฯ+ฯ+ฮ/g-ฯ-ฮ)^-1.
Integrating both parts of the equationย (<ref>) solves the (logarithm of) the value as a function of the bid:
ln(v_1(ฯ))-ln(v_1(0))=โซ_0^ฯg/(g-ฯ-ฮ)^2(ฯ/g-ฯ+ฯ+ฮ/g-ฯ-ฮ)^-1dฯ.
Similarly, integrating both parts of the equationย (<ref>) solves the (logarithm of) the value as a function of the bid:
ln(v_2(ฯ+ฮ))-ln(v_2(ฮ))=โซ_0^ฯg/(g-ฯ)^2(ฯ/g-ฯ+ฯ+ฮ/g-ฯ-ฮ)^-1dฯ.
To determine the marginal valuations v_1(0) and v_2(ฮ) at which the two bidders start bidding, note that the support of ฯ_i-t_i and that of ฯ_j-t_j need to coincide for valuations where we have separation of types. Therefore, v_1(0)=v_2(ฮ). Since v_1(0)v_2(ฮ)=ฮ/g-ฮ it follows that v_1(0)=v_2(ฮ)=โ(ฮ/g-ฮ). This is the threshold where bidders start bidding. It follows that for ฮโ 0
v_1(ฯ)=โ(ฮ/g-ฮ)exp(โซ_0^ฯg/(g-ฯ-ฮ)^2(ฯ/g-ฯ+ฯ+ฮ/g-ฯ-ฮ)^-1dฯ)
and
v_2(ฯ)=โ(ฮ/g-ฮ)exp(โซ_0^ฯ-ฮg/(g-ฯ)^2(ฯ/g-ฯ+ฯ+ฮ/g-ฯ-ฮ)^-1dฯ).
To compare the equilibrium signals ฯ_1(v)-t_1 and ฯ_2(v)-t_2 for v>โ(ฮ/g-ฮ), we need to compare ฯ_1(v)+ฮ to ฯ_2(v).
From the expressions for the valuations as a function of the bid, we can observe (observe that g/(g-ฯ-ฮ)^2โฅg/(g-ฮ)^2) that
v_1(ฯ)>v_2(ฯ+ฮ),
for ฯ>0.
It follows that
ฯ_1(v)โคฯ_2(v)-ฮ,
for vโฅโ(ฮ/g-ฮ).
The optimization problem of the player in the equilibrium is to minimize cost, subject to the score equation constraint.
By plugging in t=gm/m+1-s, we obtain the minimization problem:
min_m(m+m+1/gm-s(m+1)=:x(m)).
The first order condition on x(m) gives:
dx(m)/dm = 1+gm-s(m+1)-(m+1)(g-s)/(gm-s(m+1))^2=1-g/(gm-s(m+1))^2=0,
which determines the value of m that minimizes the cost function. The solutions of the last equation are gm-sm-s=โ(g) equivalent to m=s+โ(g)/g-s and gm-sm-s=-โ(g) equivalent to m=s-โ(g)/g-s, or the boundary condition m=0. For m=0, the value x(0)=-1/s, while for m=s+โ(g)/g-s, the value
x(s+โ(g)/g-s)=s+โ(g)/g-s + s+โ(g)/g-s + 1/gs+โ(g)/g-s-s(s+โ(g)/g-s+1)=1 + 2 โ(g) + s/g - s.
Accordingly, the marginal cost of producing signal s is:
c'(s)=(1 + โ(g))^2/(g - s)^2, if s>-โ(g),
1/s^2, if sโค-โ(g).
We solve a similar differential equation asย (<ref>), just with different marginal cost function c', and instead of time function t, we have a score function s of valuation v. The differential equation becomes:
vf(v)-c'(s)s'(v)=0.
We need to solve for the s(v) function. For types v with
2/v^2โฅโ(g),
who only use latency
we have the same solution as before
s(v)=-2/v^2.
The marginal type who is indifferent between using only latency and using a combination of the two technologies is given by
u=โ(2/โ(g)).
which gives the boundary condition
s(u)=-โ(g) for the differential equation describing the behavior of types who choose a signal sโฅ-โ(g):
v = (1 + โ(g))^2/(g - s)^2s'(v).
We obtain the solution
s(v) = (4 c_1 g^3/2 + 2 c_1 g^2 + 2 c_1 g + g (v^2 - 2) - 4 โ(g) - 2)/(2 c_1 g + 4 c_1 โ(g) + 2 c_1 + v^2).
The value of the constant c is obtained from the boundary condition that a zero-value player does not invest and it is equal to
c_1=1/(1 + โ(g))^2.
Therefore, plugging in the constant value in the solutionย (<ref>) and simplifying it gives:
s(v)=g v^2 - 4 โ(g) - 2/v^2+2.
Plugging this into the formula of c(s), gives the cost value as a function of valuation v:
c(v)=1+2โ(g)+g v^2 - 4 โ(g) - 2/v^2+2/g-g v^2 - 4 โ(g) - 2/v^2+2=v^2/2.
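As a quick numerical sanity check of the closed forms above (a sketch only: the value g = 9 and the sampled valuations are illustrative, and only types above the marginal type u = sqrt(2/sqrt(g)) lie on the interior branch where s(v) > -sqrt(g)):

```python
import math

def s_of_v(v, g):
    # equilibrium signal for types above the marginal type
    return (g * v**2 - 4 * math.sqrt(g) - 2) / (v**2 + 2)

def c_of_v(v, g):
    # cost evaluated at the equilibrium signal; should equal v^2 / 2
    sv = s_of_v(v, g)
    return (1 + 2 * math.sqrt(g) + sv) / (g - sv)

g = 9.0
u = math.sqrt(2 / math.sqrt(g))
for v in (0.85, 0.9, 1.0):
    assert v > u
    print(v, round(c_of_v(v, g), 10), v**2 / 2)  # the last two columns agree
```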
Separate expenditure in the bidding is calculated by the following formula:
b(g)= โซ_u^1m(v)f(v)dv = โซ_โ(2/โ(g))^1gv^2-4โ(g)-2/v^2+2+โ(g)/g-gv^2-4โ(g)-2/v^2+2dv =
โซ_โ(2/โ(g))^1v^2(g+โ(g))-4โ(g)-2/2g-4โ(g)-2dv=
1/2g+4โ(g)+2(g+โ(g)/3(1-2/โ(g)โ(2/โ(g)))-(4โ(g)+2)(1-โ(2/โ(g)))).
The dominant term in the numerator above is g, while in the denominator it is 6g. Therefore, lim_gโโb(g)=1/6.
We are interested in the equilibrium signaling strategy s(v). Suppose that this strategy is increasing (so no pooling of types) and differentiable. Then, we can define a differentiable function
Cฬ(v):=C(s(v)).
To figure out what Cฬ(v) is, we have to consider an optimization problem with the first player:
max_v Pr[vโฅ v_2]v_1-C(s(v))=Pr[vโฅ v_2]v_1-Cฬ(v).
Taking first order conditions with respect to v gives:
v_1f(v)-Cฬ'(v)|_v=v_1 = 0,
that is,
v_1f(v_1)=Cฬ'(v_1).
For the uniform distribution:
v_1=Cฬ'(v_1).
Using the boundary condition Cฬ(0)=0 and integrating we get
Cฬ(v_1)=v_1^2/2.
More generally:
Cฬ(v_1)=โซ_-โ^v_1vf(v)dv.
We are interested in the equilibrium signaling strategy s(v). Suppose that this strategy is increasing (so no pooling of types) and differentiable. Then, we can define a differentiable function
Cฬ(v):=C(s(v)).
To figure out what Cฬ(v) is, we have to consider an optimization problem of the first player:
max_v Pr[vโฅmax{v_2,โฏ,v_n}]v_1-Cฬ(v)=F(v)^n-1v_1-Cฬ(v).
Taking first order conditions with respect to v:
(n-1)v_1f(v)F(v)^n-2-Cฬ'(v)|_v=v_1=0,
that is,
(n-1)v_1f(v_1)F(v_1)^n-2=Cฬ'(v_1).
For the uniform distribution:
(n-1)v_1^n-1=Cฬ'(v_1).
Using the boundary condition Cฬ(0)=0 and integrating we get
Cฬ(v_1)=(n-1)v_1^n/n.
More generally:
Cฬ(v_1)=โซ_-โ^v_1(n-1)vf(v)F(v)^n-2dv.
| http://arxiv.org/abs/2306.11829v1 | 20230620183803 | Single-quasiparticle eigenstate thermalization | ["Piotr Tokarczyk", "Lev Vidmar", "Patrycja Łydżba"] | cond-mat.stat-mech | ["cond-mat.stat-mech", "cond-mat.dis-nn"] |
Institute of Theoretical Physics, Wroclaw University of Science and Technology, 50-370 Wrocław, Poland
Department of Theoretical Physics, J. Stefan Institute, SI-1000 Ljubljana, Slovenia
Department of Physics, Faculty of Mathematics and Physics, University of Ljubljana, SI-1000 Ljubljana, Slovenia
Institute of Theoretical Physics, Wroclaw University of Science and Technology, 50-370 Wrocław, Poland
Quadratic Hamiltonians that exhibit single-particle quantum chaos are named quantum-chaotic quadratic Hamiltonians.
One of their hallmarks is single-particle eigenstate thermalization introduced in Phys. Rev. B 104, 214203 (2021) (https://doi.org/10.1103/PhysRevB.104.214203), which describes statistical properties of matrix elements of observables in single-particle eigenstates.
However, the latter has been studied only in quantum-chaotic quadratic Hamiltonians that obey the U(1) symmetry.
Here, we focus on quantum-chaotic quadratic Hamiltonians that break the U(1) symmetry and hence their "single-particle" eigenstates are actually single-quasiparticle excitations introduced on top of a many-body state.
We study the structure of their wave functions and matrix elements of one-body observables, for which we introduce the notion of single-quasiparticle eigenstate thermalization.
Focusing on spinless fermion Hamiltonians in three dimensions with local hopping, pairing and on-site disorder, we also study the properties of disorder-induced near zero modes, which give rise to a sharp peak in the density of states at zero energy.
Finally, we numerically show equilibration of observables in many-body eigenstates after a quantum quench. We prove that it is a consequence of single-quasiparticle eigenstate thermalization, in analogy to the U(1) symmetric case from arXiv:2210.00016 (https://arxiv.org/pdf/2210.00016.pdf).
Single-quasiparticle eigenstate thermalization
Patrycja Łydżba
July 31, 2023
==============================================
ยง INTRODUCTION
The emergence of quantum chaos in single-particle systems is nowadays a well established concept.
Inspired by some early contributions such as Berry's conjecture for the structure of chaotic wavefunctionsย <cit.> and the quantum chaos conjecture for the statistics of Hamiltonian eigenvaluesย <cit.>, signatures of chaos were detected in various physical systemsย <cit.>.
In the context of quadratic lattice Hamiltonians, such as the Anderson model, a comparison of single-particle spectral statistics with random matrix theory predictions served as one of the key tools for a detection of chaos and its breakdown at the localization transitionย <cit.>.
Recent works have established a broad characterization of single-particle chaos in quadratic systems beyond single-particle spectral statisticsย <cit.>.
For example, closed form expressions for the average many-body eigenstate entanglement entropies of these systems were introducedย <cit.> (see alsoย <cit.>), and the validity of eigenstate thermalization in single-particle eigenstates was establishedย <cit.>.
Quadratic Hamiltonians that comply with these properties were named quantum-chaotic quadratic Hamiltoniansย <cit.>.
Perhaps the most important aspect of single-particle eigenstate thermalization is its relevance for quantum dynamics of many-body states.
Specifically, it has been shown that the validity of single-particle eigenstate thermalization guarantees equilibration of observables after quantum quenchesย <cit.>.
Equilibration of observables in quadratic systemsย <cit.> represents a necessary condition to establish the agreement between their long time averages and the generalized Gibbs ensemble predictionsย <cit.>.
It should be emphasized that the validity of single-particle eigenstate thermalization has so far only been established in quadratic systems with the particle number conservation, i.e., with the U(1) symmetry.
The single-particle description of general (i.e., particle number nonconserving) quadratic Hamiltonians is commonly described in terms of quasiparticles or quasiholes created on the top of some vacuum stateย <cit.>.
However, this vacuum state typically consists of states with many bare fermions, and hence it represents a many-body state.
An open question is then to establish the analog of single-particle eigenstate thermalization in general quadratic Hamiltonians.
In this paper, we achieve this goal and introduce the notion of single-quasiparticle eigenstate thermalization.
We study a quantum-chaotic quadratic Hamiltonian for spinless fermions in a cubic lattice that includes local hopping, pairing and weak on-site disorder.
We show that the single-quasiparticle (and single-quasihole) wavefunctions exhibit features of quantum chaos, and are hence consistent with Berry's conjectureย <cit.>.
A special attention is devoted to the analysis of disorder-induced near zero modes, which can be detected by a sharp peak in the density of states.
However, the relative number of these states vanishes in the thermodynamic limit.
We then show that observables in the overwhelming majority of single-quasiparticle eigenstates (away from zero modes) exhibit key properties of eigenstate thermalization, dubbed single-quasiparticle eigenstate thermalization.
In the latter, fluctuations of the observable matrix elements decay as the inverse square root of the number of lattice sites V.
This should be contrasted to the eigenstate thermalization hypothesis (ETH) emerging in many-body eigenstates of interacting systemsย <cit.>, which describes an exponential in V decay of the observable matrix elements fluctuationsย <cit.>.
A direct implication of single-quasiparticle eigenstate thermalization is equilibration of observables after quantum quenches, which is shown analytically and tested numerically.
The paper is organized as follows.
In Sec.ย <ref> we introduce the system under investigation and discuss its general properties.
In Sec.ย <ref> we study the properties of single-quasiparticle and single-quasihole wavefunctions, with a special emphasis on the disorder-induced near zero modes.
We then study in Sec.ย <ref> the statistical properties of diagonal and off-diagonal matrix elements of observables in these states, thereby establishing the key properties of single-quasiparticle eigenstate thermalization.
Finally, in Sec.ย <ref> we analytically and numerically study the time evolution of observables after quantum quenches, and show that single-quasiparticle eigenstate thermalization guarantees equilibration of observables.
We conclude in Sec.ย <ref>.
ยง MODEL AND SET-UP
ยง.ยง General considerations
We are interested in a general form of a quadratic Hamiltonian for spinless fermions on a lattice with V sites,
ฤค=โ_i,j=1^V ( h_ijฤ_i^โ ฤ_j + Δ_ijฤ_i^โ ฤ_j^โ + Δ_ij^* ฤ_j ฤ_i ) ,
where ฤ_i^โ (ฤ_i) creates (annihilates) a spinless fermion at site i.
The first term in Eq.ย (<ref>) describes the hopping between sites i and j (when iโ j) and the on-site potential (when i=j), while the other terms represent the pairing that breaks the particle number conservation.
The coefficients h_ij = h_ji^* form a hermitian V ร V matrix h, while the coefficients Δ_ij=-Δ_ji form an antisymmetric V ร V matrix Δ.
For the sake of completeness, in this section we describe some general properties of the Hamiltonian ฤค from Eq.ย (<ref>), while in Sec.ย <ref> we introduce the specific model under investigation.
Diagonalization of ฤค is usually carried out by adopting the Nambu representation,
ฤค = ฤ^โ [ 1/2h Δ; -Δ^* -1/2h^* ]ฤ + โ_i=1^V h_ii/2
=ฤ^โ โฤ + โ_i=1^V h_ii/2,
where ฤ=[ฤ_1 ... ฤ_V ฤ_1^โ ... ฤ_V^โ ]^T is a 2Vร 1 vector.
The 2V ร 2V matrix โ is said to be the matrix representation of the Hamiltonian ฤค in the Bogoliubov-de Gennes basis.
For a convenience, in what follows we omit the constant term in Eq.ย (<ref>), i.e., we set โ_i h_ii/2=0.
We introduce a unitary transformation to diagonalize โ as ฤ^โ โฤ = Fฬ^โ ๐Fฬ, where ๐ = U^โ โ U is a 2V ร 2V diagonal matrix and Fฬ = U^โ ฤ contains the new fermionic annihilation and creation operators, where Fฬ=[fฬ_1 ... fฬ_V fฬ_1^โ ... fฬ_V^โ ]^T is a 2Vร 1 vector.
The 2V ร 2V unitary matrix U can be expressed as
U=
[ u v^*; v u^* ],
where u and v are V ร V matrices with matrix elements u_iฮฑ and v_iฮฑ.
The form of U from Eq.ย (<ref>) gives rise to a special property of the diagonal matrix ๐, i.e., the diagonal matrix elements are {ฯต_ฮฑ} for ฮฑ=1,...,V, and the remaining elements are {-ฯต_ฮฑ}.
The new fermionic creation and annihilation operators can be expressed asย <cit.>
fฬ_ฮฑ^โ = โ_i=1^V(u_iฮฑฤ_i^โ + v_iฮฑฤ_i) , fฬ_ฮฑ = โ_i=1^V(u_iฮฑ^*ฤ_i+v_iฮฑ^*ฤ_i^โ ) ,
while the annihilation operators of bare fermions in the site-occupation basis can be expressed using the inverse transformation as ฤ_i = โ_ฮฑ=1^V(u_iฮฑfฬ_ฮฑ+v_iฮฑ^*fฬ_ฮฑ^โ ).
This brings the Hamiltonian ฤค to the diagonal form,
ฤค =โ_ฮฑ=1^V(ฯต_ฮฑfฬ_ฮฑ^โ fฬ_ฮฑ-ฯต_ฮฑfฬ_ฮฑfฬ_ฮฑ^โ )
= โ_ฮฑ=1^V2ฯต_ฮฑfฬ_ฮฑ^โ fฬ_ฮฑ-โ_ฮฑ=1^Vฯต_ฮฑ .
One often refers to fฬ_ฮฑ^โ and fฬ_ฮฑ as the creation operators of the so-called Bogoliubov quasiparticles and quasiholes with energy ฯต_ฮฑ and -ฯต_ฮฑ, respectively.
Since both fฬ_ฮฑ^โ and fฬ_ฮฑ are linear combinations of ฤ_i^โ and ฤ_i, both quasiparticle and quasihole excitations are superpositions of the bare particle and hole excitations.
In principle one has no a priori knowledge of the sign of ฯต_ฮฑ.
However, one notes that the columns ฮฑ and ฮฑ+V (assuming ฮฑโค V) of the unitary matrix U in Eq.ย (<ref>) are related, which is a consequence of the relationship between Fฬ_ฮฑ^โ = fฬ_ฮฑ^โ and Fฬ_ฮฑ+V^โ = fฬ_ฮฑ, see Eq.ย (<ref>).
In the matrix representation, the eigenstates of H can be expressed as
ฯ_ฮฑ = [u_1ฮฑ ... u_Vฮฑ v_1ฮฑ ... v_Vฮฑ]^ T and
ฯ_ฮฑ+V = [v_1ฮฑ ... v_Vฮฑ u_1ฮฑ ... u_Vฮฑ]^โ , satisfying
Hฯ_ฮฑ = ฯต_ฮฑฯ_ฮฑ and Hฯ_ฮฑ+V = -ฯต_ฮฑฯ_ฮฑ+V.
It is then convenient to assume ฯต_ฮฑโฅ 0, which associates quasiparticles as objects with non-negative energies and quasiholes as objects with non-positive energies.
As a side remark, which is not essential for understanding of our new results, we note that in the case when quasiparticles are associated with energies ฯต_ฮฑโฅ 0, one can introduce the ground state |ฮจ_0โฉ as the eigenstate with no quasiparticle excitations, with energy E_0=-โ_ฮฑ=1^Vฯต_ฮฑ.
Since fฬ_ฮฑ|ฮจ_0โฉ=0, the state |ฮจ_0โฉ is often referred to as the Bogoliubov vacuum.
It can be defined as |ฮจ_0โฉ โ โ_p(1+ฮป_pp̄ dฬ_p^โ dฬ_p̄^โ )|∅โฉ with particle pairs occupying dฬ_p^โ dฬ_p̄^โ |∅โฉ, where |∅โฉ is a true vacuum.
Note that dฬ_r^โ =โ_i x_ir ฤ_i^โ , where r=p,p̄ are determined by a unitary transformation x that brings z=-(u^โ )^-1v^โ to a standard canonical form ฮป, i.e., z=xฮป x^T. The only non-zero elements of ฮป are ฮป_pp̄=-ฮป_p̄p with p and p̄ being neighbouring odd and even numbers, respectively (more details can be found in Ref. <cit.>; see also Ref. <cit.>).
The above considerations can more formally be understood using the following symmetry property of ฤคย <cit.>,
ฮโฮ^-1=-โ , ฮ=ฯ_x๐ฆ ,
where ฯ_x is a generalization of the first Pauli matrix to a 2V ร 2V matrix with Vร V identity matrices replacing 1, and ๐ฆ is the complex conjugation. The symmetry operator ฮ is antiunitary with ฮ^-1=๐ฆฯ_x and ฮ^2=1. It corresponds to the particle-hole conjugation.
Then, Eq.ย (<ref>) implies
โฮฯ_ฮฑ = -ฮโฯ_ฮฑ =-ฯต_ฮฑฮฯ_ฮฑ ,
and hence ฯ_ฮฑ+V = ฮฯ_ฮฑ.
For simplicity, we will call both ฯ_ฮฑ and ฯ_ฮฑ+V as single-quasiparticle excitations (or single-quasiparticle eigenstates) in the rest of the paper.
It is often convenient to define the so-called chargeย <cit.>, which is for quasiparticle eigenstates ฯ_ฮฑ at ฮฑโค V defined as
q_ฮฑ = โ_i=1^V(|u_iฮฑ|^2-|v_iฮฑ|^2) ,
and has the same absolute value but opposite sign for ฮฯ_ฮฑ.
Furthermore, q_ฮฑ=1 for a pure particle excitation and q_ฮฑ=-1 for a pure hole excitation, while q_ฮฑ=0 for a perfect superposition of both.
There is no simple relation between the sign of charge q_ฮฑ and the sign of energy ฯต_ฮฑ.
The zero modes of โ, when they emerge, require special attention. If ฯ_ฮฑ is a zero mode, then ฮฯ_ฮฑ is also a zero mode. One can build their linear combinations ฯ_ฮฒ=(ฯ_ฮฑ+ฮฯ_ฮฑ)/โ(2) and ฯ_ฮฒ'=(ฯ_ฮฑ-ฮฯ_ฮฑ)/โ(2)ย <cit.>, which both have zero charge, q_ฮฒ=q_ฮฒ'=0.
They are also eigenstates of the symmetry operator ฮฯ_ฮฒ=ฯ_ฮฒ and
ฮฯ_ฮฒ'=-ฯ_ฮฒ', which resembles the property that the "particle" is its own "antiparticle".
ยง.ยง Quantum-chaotic quadratic model
We now focus on the specific model that we study in the remainder of the paper.
While many studies of the Hamiltonians ฤค of the form given by Eq.ย (<ref>) aim at exploring their possible topological propertiesย <cit.>, we here pursue a different goal, namely, we set the parameters of ฤค such that the model exhibits single-particle quantum chaos.
Specifically, we consider a cubic lattice with periodic boundary conditions and define ฤค as a local Hamiltonian, also denoted as extended 3D Anderson model in Ref.ย <cit.>,
ฤค=โ_โจ i,j โฉ(- ฤ_i^โ ฤ_j +2 ฮฤ_i^โ ฤ_j^โ + h.c.) + โ_i=1^Vฮต_i ฤ_i^โ ฤ_i ,
where โจ i,jโฉ denote nearest neighbors.
The first term describes the hopping and pairing that is consistent with a spin-polarized triplet p-wave pairing interactionย <cit.>.
The second term describes the on-site potential ฮต_i = W/2 r_i with a disorder strength W, where r_i is a random number drawn independently from a uniform distribution over [-1,1].
Since all model parameters in Eq.ย (<ref>) are real, it follows that โ=โ^T, the matrix โ from Eq.ย (<ref>) has a time-reversal symmetry, and the unitary matrix U from Eq.ย (<ref>) is real.
The model belongs to the BDI symmetry classย <cit.>. In the limit ฮโ 0, the 3D Anderson model from the AI symmetry classย <cit.> is recoveredย <cit.>.
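For concreteness, below is a minimal NumPy sketch of how one may build and diagonalize the Bogoliubov-de Gennes matrix of this model; the lattice size L, the random seed, and the exact factor convention for the pairing amplitude on each bond relative to the 2Δ in the Hamiltonian are illustrative assumptions of the sketch, not specifications from the paper. The eigenvalues come in ±ε pairs, and the columns of U hold the (u, v) amplitudes.

```python
import numpy as np

def bdg_spectrum(L=6, Delta=0.5, W=5.0, seed=0):
    """Build the 2V x 2V BdG matrix of the cubic-lattice model with periodic
    boundaries and diagonalize it (sketch)."""
    rng = np.random.default_rng(seed)
    V = L**3
    idx = lambda x, y, z: (x % L) + L * (y % L) + L * L * (z % L)

    h = np.zeros((V, V))   # hopping + on-site disorder
    D = np.zeros((V, V))   # antisymmetric pairing matrix (factor convention illustrative)
    for x in range(L):
        for y in range(L):
            for z in range(L):
                i = idx(x, y, z)
                h[i, i] = 0.5 * W * rng.uniform(-1.0, 1.0)        # eps_i = (W/2) r_i
                for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
                    j = idx(x + dx, y + dy, z + dz)
                    h[i, j] = h[j, i] = -1.0                       # nearest-neighbor hopping
                    D[i, j] += Delta                               # directed bond pairing
                    D[j, i] -= Delta

    # BdG matrix acting on (c, c^dagger); Hermitian because D is antisymmetric
    H = np.block([[0.5 * h, D], [-D.conj(), -0.5 * h.conj()]])
    energies, U = np.linalg.eigh(H)
    return energies, U
```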
We observe that the model in Eq.ย (<ref>) exhibits disorder-induced near zero modes at nonzero ฮ.
Figureย <ref> shows the scaled density (2V)^-1ฯ(ฯต), where the density of single-quasiparticle eigenstates is
ฯ(ฯต) = ฮด N / ฮดฯต |_ฯต.
Its most prominent feature is a sharp peak at ฯต = 0, which denotes the near zero modes.
The peak is present at ฮ=0.5 at both weak disorder [W=1, Fig.ย <ref>(a)] and moderate disorder [W=5, Fig.ย <ref>(b)], while it is absent at ฮ=0 as well as at weak ฮ such as ฮ=0.1 (not shown).
Results in the insets of Figs.ย <ref>(a) andย <ref>(b) show a polynomial divergence at zero energy that is independent of the system sizeย V and is well described by a function ฯต^-ฮถ, with ฮถโ[0.95,1.1]. What is not visible in the logarithmic scale is that the height of the peak slowly decreases with the system sizeย V.
A more quantitative analysis of the near zero modes will be carried out in Sec.ย <ref>.
ยง QUASIPARTICLE EIGENSTATES
Eigenstate thermalization is expected to occur in eigenstates that are sufficiently delocalized in a non fine-tuned basis.
The goal of this section is to detect the regime of parameters W and ฮ, for which the statistical properties of the majority of single-quasiparticle eigenstates are consistent with the predictions of quantum chaos.
ยง.ยง Inverse participation ratio
We calculate the inverse participation ratio (IPR) of a single-quasiparticle eigenstate, which we define as
IPR_ฮฑ = โ_i=1^V( u_iฮฑ^4 + v_iฮฑ^4 ) ,
and average it throughout the single-quasiparticle spectrum,
IPR_av= 1/2Vโ_ฮฑ=1^2VIPR_ฮฑ .
The above definitions have been previously used for systems with superconducting pairing termsย <cit.> (see alsoย <cit.> for a different definition).
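A short sketch of evaluating these quantities from the eigenvector matrix U of a BdG diagonalization (as in the sketch above, with the u amplitudes in the top block and the v amplitudes in the bottom block):

```python
import numpy as np

def ipr(U):
    """Per-state IPR and its average over all 2V single-quasiparticle eigenstates."""
    V = U.shape[0] // 2
    u, v = U[:V, :], U[V:, :]
    ipr_alpha = np.sum(u**4 + v**4, axis=0)   # sum over lattice sites for each state
    return ipr_alpha, ipr_alpha.mean()
```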
In single-quasiparticle eigenstates of quantum-chaotic quadratic Hamiltonians at ฮโ 0, the amplitudes u_iฮฑ and v_iฮฑ are expected to be (pseudo)random numbers drawn from a normal distribution with zero mean and 1/2V variance.
In this case, the IPR is expected to agree with the Gaussian orthogonal ensemble (GOE) prediction IPR_GOE=3/2Vย <cit.>.
At the particle number conserving point ฮ=0, the GOE prediction is IPR_GOE=3/V.
If, in contrast, the eigenstates are localized in some basis, the IPR in this basis is a nonzero constant that is independent of the system size V.
In the model from Eq.ย (<ref>), the localization is expected to occur in the site-occupation basis at large disorderย W.
Figureย <ref> shows IPR_av versus disorder strengthย W at three different values of ฮ=0, 0.1, 0.5.
In all cases IPR_av is very small at weak disorder W, indicating delocalization in the site-occupation basis, and it increases at large W, indicating localization.
A more detailed view into the weak disorder regime, see also the inset of Fig.ย <ref>, reveals that IPR_av approaches the corresponding GOE prediction at ฮ=0 and 0.1.
At ฮ=0.5, however, IPR_av shows a non-monotonous behavior and, at least at V=20^3, does not entirely match with the GOE prediction.
To better understand the small deviations from the GOE prediction at ฮ=0.5, we study IPR_ฮฑ versus quasiparticle energy ฯต_ฮฑ in Fig.ย <ref>, focusing on disorders W=1 and 5.
Results at ฮ=0 and 0.1, which are also included in Fig.ย <ref>, are rather expected: the IPR's match quite accurately with the GOE predictions at W=1, while the agreement deteriorates at W=5.
In the latter case, the deviations from the GOE predictions are most prominent at spectral edges, and may indicate a tendency towards forming a mobility edge.
On the contrary, the numerical results at ฮ=0.5 in Fig.ย <ref> show a large deviation of IPR_ฮฑ from the GOE prediction at |ฯต_ฮฑ| โ 0.
In this energy regime, near zero modes were observed in the density of states in Fig.ย <ref>.
The anomalous behavior of IPR_ฮฑ at |ฯต_ฮฑ| โ 0 is most clear at W=5, see Figs.ย <ref>(b) andย <ref>(d), while at W=1 other subbands emerge at |ฯต_ฮฑ| โ 0, see Figs.ย <ref>(a) andย <ref>(c).
Moreover, even at ฮ=0.1 and V=15^3, see Fig.ย <ref>(a), one observes deviations of IPR_ฮฑ from the GOE prediction at |ฯต_ฮฑ| โ 0, which however disappear at the larger system size V=20^3, see Fig.ย <ref>(c).
These observations call for further quantitative studies of properties of near zero modes, which we carry out in the next section.
ยง.ยง Disorder-induced near zero modes
We now study the structure of disorder-induced near zero modes.
As a working definition, the term near zero modes refers to quasiparticle eigenstates with small absolute energy ฯต_ฮฑ that is still sufficiently larger than the machine precision, 10^-14โฒ |ฯต_ฮฑ| โช 1, while with zero modes we have in mind the quasiparticle eigenstates with zero energy within the machine precision, |ฯต_ฮฑ| โฒ 10^-14.
We observe that the near zero modes can be organized in pairs (ฮฑ,ฮฑ') with ฯต_ฮฑ'โ -ฯต_ฮฑ and q_ฮฑ'โ -q_ฮฑ, with negligible relative differences.
In these pairs, u_iฮฑ'=v_iฮฑ and v_iฮฑ'=u_iฮฑ for the majority of sites i.
This organisation is impossible for most of the zero modes.
We calculate the expectation values of two operators, the particle-hole conjugation |โจฮโฉ_ฮฑ|=|โจฮฑ|ฮ|ฮฑโฉ| and the charge |q_ฮฑ|.
In this and all further sections, we use the bra-ket notation in which |ฮฑโฉ denotes the single-quasiparticle eigenstate.
One expects |โจฮโฉ_ฮฑ| โ 0 and |q_ฮฑ|>0 for states sufficiently away from zero modes (i.e., away from zero quasiparticle energy), while for zero modes one expects |โจฮโฉ_ฮฑ| = 1 and |q_ฮฑ|=0.
A way to consider these properties is to say that the quasiparticle eigenstates with a nonvanishing |โจฮโฉ_ฮฑ| are not particle-hole symmetric, since ฮ|ฮฑโฉ is not orthogonal to |ฮฑโฉ.
Therefore, |โจฮโฉ_ฮฑ| measures the deviation from the particle-hole symmetry.
Figureย <ref> shows results for |โจฮโฉ_ฮฑ| and |q_ฮฑ| at W=5, while the analogous results at W=1 are shown in Fig.ย <ref>.
There are two main results.
The first is that at ฮ=0.1, for which the IPR results in Sec.ย <ref> matched with the GOE prediction to a reasonable accuracy, we observe |โจฮโฉ_ฮฑ| โ 0 and |q_ฮฑ|>0 for all single-quasiparticle eigenstates.
This results is consistent with absence of (near) zero modes at ฮ=0.1.
The second result is that at ฮ=0.5, the values of |โจฮโฉ_ฮฑ| and |q_ฮฑ| smoothly span over the entire interval from 0 to 1.
They appear to be a well-defined function of the absolute energy |ฯต_ฮฑ| and roughly independent of the system size V.
We obtain a good fit using |โจฮโฉ_ฮฑ| โ |ฯต_ฮฑ|^-ฮถ and |q_ฮฑ| โ |ฯต_ฮฑ|^ฮถ, where ฮถโ 1 in both cases.
This suggests the emergence of a smooth crossover between zero modes and states that cannot be characterized as zero modes, and it reinforces our term near zero modes to describe the states at small but still nonzero energy.
We next determine the number of zero and near zero modes ๐ฉ, and study their scaling with the total number of states 2V.
Note, however, that results in Figs.ย <ref> andย <ref> hint at a certain degree of ambiguity when defining ๐ฉ.
We consider two criteria.
(i) Zero mode condition: zero modes correspond to quasiparticle eigenstates with โจฮฬโฉ_ฮฑ>10^-4 and q_ฮฑ<10^-14.
According to the results in Figs.ย <ref> andย <ref>, zero modes can be interpreted as those with energy that is zero within machine precision, which is consistent with our working definition of zero modes introduced above.
(ii) Near zero mode condition: near zero modes correspond to quasiparticle eigenstates with โจฮฬโฉ_ฮฑ>10^-14 and q_ฮฑ<10^-4.
Focusing on ฮ=0.5, the scaling properties of the relative number of zero and near zero modes ๐ฉ/(2V) are shown in Fig.ย <ref>.
Figureย <ref>(a) shows the scaling of ๐ฉ/(2V) versus V at disorders W=1 and 5.
Results using the near zero mode condition exhibit a rather clear polynomial decay V^-ฮถ, with ฮถ=0.31 at W=1 and ฮถ=0.21 at W=5.
This suggests that the number of near zero modes is subextensive, and it eventually represents a vanishing fraction of all single-quasiparticle eigenstates in the thermodynamic limit.
Results using the zero mode condition, on the other hand, do not exhibit a clear decay, in particular at W=5.
However, they are upper bounded by the decay given by the near zero mode condition. Therefore, one may expect that the fraction of zero modes decays to zero irrespective of the precise quantitative criterion for their definition.
Figureย <ref>(b), in contrast, shows the scaling of ๐ฉ/(2V) versus disorder W at a fixed system size V=22^3.
In this case, results using the zero mode condition exhibit an exponential decay exp(-ฮถ W), with ฮถ=0.19 at V=22^3.
This suggests that the contribution of zero modes is negligible at moderate and large W.
On the other hand, results using the near zero mode condition saturate to a constant at fixed V.
This result is expected from Fig.ย <ref>(a), which suggests that the fraction ๐ฉ/(2V) is nonzero for any finite V.
ยง SINGLE-QUASIPARTICLE EIGENSTATE THERMALIZATION
We now turn our attention to the main topic of the paper, i.e., eigenstate thermalization in single-quasiparticle eigenstates of the Hamiltonian from Eq.ย (<ref>).
Based on the analysis in Sec.ย <ref>, we focus on the disorder strength W=5 and two distinct pairing strengths ฮ=0.1 and 0.5.
As shown in Figs.ย <ref> andย <ref>, the system exhibits zero and near zero modes at ฮ=0.5 but not at ฮ=0.1, at least for the system sizes under investigation.
We conjecture that the matrix elements of a normalized observable ร (to be defined in Sec.ย <ref>) in single-quasiparticle eigenstates of quantum-chaotic quadratic Hamiltonians, away from zero modes, can be written as
โจฮฑ|ร|ฮฒโฉ = ๐ช(ฯต)ฮด_ฮฑฮฒ + ฯ(ฯต)^-1/2โฑ(ฯต,ฯ)R_ฮฑฮฒ ,
where ๐ช(ฯต) and
โฑ(ฯต,ฯ) are smooth functions of their arguments, and R_ฮฑฮฒ is a random variable with zero mean and unit variance.
In Eq.ย (<ref>), |ฮฑโฉ and |ฮฒโฉ represent single-quasiparticle eigenstates, and hence ฯต = (ฯต_ฮฑ + ฯต_ฮฒ)/2 and ฯ =ฯต_ฮฑ-ฯต_ฮฒ refer to the mean single-quasiparticle energy and the corresponding difference, respectively.
The fluctuations of matrix elements are governed by the single-quasiparticle density of states
ฯ(ฯต) = ฮด N / ฮดฯต |_ฯต
studied in Fig.ย <ref>, which scales with the system size V.
The ansatz in Eq.ย (<ref>) carries some analogies, but also differences, with respect to the eigenstate-thermalization hypothesis (ETH) ansatz for many-body eigenstates of quantum-chaotic interacting Hamiltoniansย <cit.>.
To highlight the distinction between interacting and quadratic systems, we refer to the ansatz in Eq.ย (<ref>) as the single-quasiparticle eigenstate thermalization ansatz.
In a system with V lattice sites, we consider V distinct single-quasiparticle eigenstates |ฮฑโฉ.
Since the Hamiltonian ฤค from Eq.ย (<ref>) contains 2V eigenstates, there may exist a certain degree of ambiguity about the choice of a single-quasiparticle eigenstate from a pair of eigenstates with energies ยฑฯต_ฮฑ related by a particle-hole symmetryย ฮ.
One possibility is to consider single-quasiparticle eigenstates with non-negative energies ฯต_ฮฑโฅ 0.
In this case, one can define |ฮฑโฉ = fฬ_ฮฑ^โ |ฮจ_0โฉ, where the "vacuum" state |ฮจ_0โฉ is the many-body ground state (see also the discussion in Sec.ย <ref>).
However, to make a connection with the single-particle eigenstate thermalization in the 3D Anderson model from Ref.ย <cit.> (the limit ฮโ0), we define the "vacuum" state |ฮ_0โฉ as the state that is annihilated by the quasiparticles with negative charge q_ฮฑ, whose energy is E_ฮ_0=โ_q_ฮฑ<0ฯต_ฮฑโ 0.
Hence, |ฮฑโฉ refers to the single-quasiparticle eigenstate of the form fฬ_ฮฑ^โ |ฮ_0โฉ with energy E_ฮ_0+2ฯต_ฮฑโ 2ฯต_ฮฑ, where fฬ_ฮฑ^โ creates a quasiparticle with a positive chargeย q_ฮฑ.
Nevertheless, we do not expect this choice to have an impact on the validity of the single-quasiparticle eigenstate thermalization.
ยง.ยง Observable normalization
The observable ร considered in Eq.ย (<ref>) is labeled by the underlined letter, which means that it is normalized and traceless.
The trace is carried out in the single-quasiparticle Hilbert space, and hence tracelessness requires
Tr{ร} = โ_ฮฑ=1^V โจฮฑ| ร | ฮฑโฉ = 0 ,
while normalization is defined by a unit Hilbert-Schmidt norm,
||ร||^2=1/VTr{ร^2} = 1/Vโ_ฮฑ,ฮฒ=1^V |โจฮฑ| ร | ฮฒโฉ|^2 =1.
As a side remark, we note that normalization of an observable ร in the single-quasiparticle Hilbert space does not imply normalization of this observable in a many-body Hilbert space of dimension 2^V, for which 1/V in Eq.ย (<ref>) is replaced by 1/2^V and the trace runs over 2^V many-body states.
The observable for which we carry out the numerical calculations to test Eq.ย (<ref>) is the site occupation of the lattice site i,
nฬ_i=ฤ_i^โ ฤ_i .
The matrix elements of the observable are obtained by expressing the bare spinless fermion creation and annihilation operators ฤ_i^โ , ฤ_i by the quasiparticle operators fฬ_ฮฑ^โ , fฬ_ฮฑ from Eq.ย (<ref>).
The diagonal matrix elements are
โจฮฑ|nฬ_i|ฮฑโฉ=u_iฮฑ^2-v_iฮฑ^2+โ_ฮฒ=1^V v_iฮฒ^2 ,
while offdiagonal matrix elements are
โจฮฑ|nฬ_i|ฮฒโฉ=u_iฮฑu_iฮฒ-v_iฮฑv_iฮฒ .
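A minimal sketch of evaluating these (unnormalized) matrix elements, assuming u and v are V x V arrays whose columns are the amplitudes of the V chosen quasiparticle states and i is the site index; the offdiagonal formula applies for ฮฑ โ  ฮฒ:

```python
import numpy as np

def occupation_matrix_elements(u, v, i):
    """Diagonal and offdiagonal matrix elements of the site occupation n_i."""
    diag = u[i, :]**2 - v[i, :]**2 + np.sum(v[i, :]**2)                 # <alpha| n_i |alpha>
    offdiag = np.outer(u[i, :], u[i, :]) - np.outer(v[i, :], v[i, :])   # <alpha| n_i |beta>, alpha != beta
    return diag, offdiag
```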
The observable nฬ_i has an O(1) Hilbert-Schmidt norm in the many-body Hilbert space.
However, the traceless and normalized counterpart of nฬ_i in the single-quasiparticle Hilbert space, our space of interest, is
nฬ_i=1/โ(N)(nฬ_i-T) ,
where
T=1/Vโ_ฮฑ=1^V(u_iฮฑ^2-v_iฮฑ^2)+โ_ฮฑ=1^Vv_iฮฑ^2
and
N = 1/V(โ_ฮฑ=1^Vu_iฮฑ^2)^2
+V-1/V(โ_ฮฑ=1^Vv_iฮฑ^2)^2
+2/Vโ_ฮฑ=1^Vu_iฮฑ^2โ_ฮฑ=1^Vv_iฮฑ^2
-2/V(โ_ฮฑ=1^Vu_iฮฑv_iฮฑ)^2
-T^2 .
The magnitude of T from Eq.ย (<ref>) is Tโ 1/2, which is a consequence of the many-body content of |ฮฑโฉ in terms of bare fermions.
This property differs from the particle number conserving quadratic systems in which Tโ 1/Vย <cit.>.
In contrast, the magnitude of N from Eq.ย (<ref>) is Nโ 1/V, which is a consequence of the restriction to the single-quasiparticle subspace.
This property is analogous to the particle number conserving quadratic systems.
Without the loss of generality, we fix i=1 and replace nฬ_1โnฬ (nฬ_1โnฬ) in what follows. To simplify the notation, we define n_ฮฑฮฑ=โจฮฑ|nฬ|ฮฑโฉ (n_ฮฑฮฑ=โจฮฑ|nฬ|ฮฑโฉ) and n_ฮฑฮฒ=โจฮฑ|nฬ|ฮฒโฉ (n_ฮฑฮฒ=โจฮฑ|nฬ|ฮฒโฉ).
ยง.ยง Structure of matrix elements
Next, we numerically test the validity of the ansatz in Eq.ย (<ref>) for the observable in Eq.ย (<ref>).
We first focus on the structure functions ๐ช(ฯต) and โฑ(ฯต,ฯ), and comment on the amplitudes of the matrix elements of near zero modes at ฮ=0.5.
ยง.ยง.ยง Diagonal matrix elements
The diagonal matrix elements n_ฮฑฮฑ versus ฯต_ฮฑ are shown for three system sizes Vโ{10^3,15^3,20^3} at ฮ=0.1 and 0.5 in Figs.ย <ref>(a) andย <ref>(b), respectively.
Results suggest that the matrix elements are structureless since n_ฮฑฮฑ fluctuate around zero.
Specifically, the unnormalized diagonal matrix elements of n_ฮฑฮฑ can be expressed using Eqs.ย (<ref>) andย (<ref>) as
n_ฮฑฮฑ - T = u_iฮฑ^2 - v_iฮฑ^2 - ( u_i,ฮฑ^2 - v_i,ฮฑ^2) ,
where the eigenstate-averaged coefficients u_i,ฮฑ^2 and v_i,ฮฑ^2 are defined as u_i,ฮฑ^2 = (1/V)โ_ฮฑ'=1^V u_iฮฑ'^2 and v_i,ฮฑ^2 = (1/V)โ_ฮฑ'=1^V v_iฮฑ'^2, respectively.
Using the notion of a site-resolved charge u_iฮฑ^2 - v_iฮฑ^2 [cf.ย Eq.ย (<ref>)], the diagonal matrix elements of the traceless observable n_ฮฑฮฑ can be seen as a measure of fluctuations of a site-resolved charge over the averaged site-resolved charge u_i,ฮฑ^2 - v_i,ฮฑ^2.
The fluctuations of matrix elements appear to decrease with V and will be studied in more detail in Sec.ย <ref>.
The support of fluctuations at ฮ=0.1 appears to be rather insensitive to ฯต_ฮฑ, while at ฮ=0.5 it is larger at the edges of the single-quasiparticle spectrum.
Even though the results in Fig.ย <ref> may suggest that ๐ช(ฯต) โ 0 in Eq.ย (<ref>), there nevertheless exists a fine structure of near zero modes that we study in the insets of Fig.ย <ref>(b).
Figureย <ref> shows that the charge q_ฮฑ of near zero modes vanishes as ฯต_ฮฑโ 0 and, hence, one may also expect the site-resolved charge u_iฮฑ^2 - v_iฮฑ^2 to vanish as well.
Equationย (<ref>) then suggests that the diagonal matrix elements of near zero modes become independent of the eigenstate index ฮฑ since
n_ฮฑฮฑ - Tโ - ( u_i,ฮฑ^2 - v_i,ฮฑ^2).
The upper inset of Fig.ย <ref>(b) shows that this is indeed the case.
However, in the lower inset of Fig.ย <ref>(b) we show the subtracted matrix elements |n_ฮฑฮฑ-min(|n_ฮฑฮฑ|)|, where min(|n_ฮฑฮฑ|) is the minimal absolute value of matrix elements within the band of near zero modes.
These matrix elements exhibit the structure |n_ฮฑฮฑ-min(|n_ฮฑฮฑ|)|โ|ฯต|^ฮถ with ฮถ=0.90, see the solid line in the lower inset of Fig.ย <ref>(b).
This property is consistent with the dependence of charge q_ฮฑ on ฯต_ฮฑ, as shown in Fig.ย <ref>.
ยง.ยง.ยง Offdiagonal matrix elements
The offdiagonal matrix elements n_ฮฑฮฒ are calculated for pairs of |ฮฑโฉ and |ฮฒโฉ restricted to a narrow energy window ฮด around a target energy ฯต_tar, i.e., |ฯต-ฯต_tar|<ฮด/2.
The target energy is either selected near the mean energy ฯต_tarโ0 or at a higher energy ฯต_tarโฯต_ max/2, where ฯต_ max is the highest quasiparticle energy corresponding to the eigenstate index ฮฑ=V.
The width of the energy window is ฮด=(ฯต_ max-ฯต_ min)/100, where ฯต_ min is the lowest quasiparticle energy corresponding to the eigenstate index ฮฑ=1.
We numerically calculate both ฯต_tar and ฮด for each disorder realization.
In Fig.ย <ref> we plot log_10(|n_ฮฑฮฒ|) as functions of energy differences ฯ=|ฯต_ฮฒ-ฯต_ฮฑ| for a system with V=10^3 lattice sites and 10 disorder realizations.
Solid red curves denote the coarse-grained values log_10(|n_ฮฑฮฒ|).
They are obtained by dividing the entire range of ฯ into 100 bins, followed by the averaging of absolute values of offdiagonal matrix elements |n_ฮฑฮฒ| within each bin and over 10 disorder realizations.
Figuresย <ref>(a) andย <ref>(c) show results for the pairing strength ฮ=0.1 and the target energies ฯต_tarโ 0 and ฯต_ max/2, respectively, while Figs.ย <ref>(b) andย <ref>(d) show results for the pairing strength ฮ=0.5 and the same target energies.
We observe in all cases under considerations that the offdiagonal matrix elements are dense, i.e., there is no finite fraction of offdiagonal matrix elements with values that are lower than or of the order of machine precision.
This property is also observed in single-particle eigenstates of quantum-chaotic quadratic systems with particle number conservationย <cit.>, as well as in many-body eigenstates of quantum-chaotic interactingย <cit.> and integrable interacting systemsย <cit.>.
The coarse-grained values log_10(|n_ฮฑฮฒ|) in Fig.ย <ref> reveal only a mild dependence on ฯ, suggesting that the structure function โฑ(ฯต,ฯ) from Eq.ย (<ref>) carries similarities with the random matrix theory (RMT) predictions for which โฑ_ RMT(ฯ) โ 1ย <cit.>.
However, as shown in the inset of Fig.ย <ref>(b), the structure function at ฮ=0.5 can be nontrivial in the energy range ฯต_ tarโ 0 and ฯโ 0, in which near zero modes emerge.
It is expected that the offdiagonal matrix elements between zero modes are zero or vanishingly small.
This can be understood by considering a zero mode ฯ_ฮฒ = (ฯ_ฮฑ+ฮฯ_ฮฑ)/โ(2) from Sec.ย <ref> [below Eq.ย (<ref>)]. For the latter, the wavefunction coefficients u_iฮฑ and v_iฮฑ are identical, so that the offdiagonal matrix elements are zero [see Eq.ย (<ref>)].
This agrees with our observation that the offdiagonal matrix elements of near zero modes vanish in the limit ฯต_ tarโ 0, ฯโ 0.
In Fig.ย <ref>, we study the structure function at ฮ=0.5 and ฯต_ tarโ 0 in the regime of low ฯ.
To this end, we divide the entire range of log_10(ฯ) into 100 bins, and average n_ฮฑฮฒ^2V within each bin, thereby giving rise to the scaled coarse-grained values n_ฮฑฮฒ^2V.
For the system sizes under investigation, the scaled results are fairly independent of the system size V, and exhibit a polynomial scaling n_ฮฑฮฒ^2Vโฯ^ฮท, with ฮทโ 2.15.
This function interpolates between the vanishing scaled offdiagonal matrix elements n_ฮฑฮฒ^2Vโ 0 encountered in zero modes (i.e., the limit ฯโ0), and n_ฮฑฮฒ^2Vโ const at ฯ= O(1), which was studied in Fig.ย <ref>.
ยง.ยง Fluctuations of matrix elements
We now perform a quantitative analysis of the fluctuations of matrix elements, focusing on the eigenstate-to-eigenstate fluctuations of the diagonal matrix elements and the variances of both diagonal and offdiagonal matrix elements.
ยง.ยง.ยง Eigenstate-to-eigenstate fluctuations
The eigenstate-to-eigenstate fluctuations determine the differences between diagonal matrix elements in the eigenstates with the nearby single-quasiparticle energies, ฮดn_ฮฑ=n_ฮฑ,ฮฑ-n_ฮฑ-1,ฮฑ-1.
We calculate the average of the absolute values of these differences,
ฮดn_av=||ฮ||^-1โ_|ฮฑโฉโฮ |ฮดn_ฮฑ| ,
as well as the corresponding maximal difference
ฮดn_max=max_|ฮฑโฉโฮ |ฮดn_ฮฑ| ,
where ฮ is a set comprising either 80% (||ฮ||=0.8V) or 500 (||ฮ||=500) eigenstates in the middle of the spectrum. We always average ฮดn_av and ฮดn_max over 100 disorder realizations to obtain โจโจฮดn_avโฉโฉ and โจโจฮดn_maxโฉโฉ, respectively.
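For completeness, a small sketch of how these eigenstate-to-eigenstate fluctuation indicators may be computed from the energy-ordered diagonal matrix elements (the 80% bulk window is illustrative):

```python
import numpy as np

def e2e_fluctuations(diag, keep=0.8):
    """Average and maximal |n_{a,a} - n_{a-1,a-1}| over a bulk window of the spectrum.
    'diag' holds the diagonal matrix elements sorted by quasiparticle energy."""
    V = len(diag)
    lo, hi = int(V * (1 - keep) / 2), int(V * (1 + keep) / 2)
    dn = np.abs(np.diff(diag[lo:hi]))
    return dn.mean(), dn.max()
```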
The eigenstate-to-eigenstate fluctuations from Eqs.ย (<ref>) andย (<ref>) were introduced as indicators of the ETH in many-body eigenstates of interacting Hamiltoniansย <cit.>.
For single-(quasi)particle eigenstates of quantum-chaotic quadratic Hamiltonians, both indicators are expected to decay to zero as โ V^-ฮท with 0<ฮทโค 0.5ย <cit.>.
A particularly strong indicator of eigenstate thermalization is ฮดn_max from Eq.ย (<ref>), which is expected to decay to zero with increasing V in both quantum-chaotic quadratic modelsย <cit.> and quantum-chaotic interacting modelsย <cit.>.
Figuresย <ref>(a) andย <ref>(b) show the eigenstate-to-eigenstate fluctuations at ฮ=0.1 and 0.5, respectively.
They all decay to zero with increasing V and they generally behave as expected for quantum-chaotic quadratic models.
Specifically, โจโจฮดn_avโฉโฉโ V^-0.5 while โจโจฮดn_maxโฉโฉโ V^-ฮท, with the exponent ฮทโ [0.3,0.5].
The observation that ฮท in the latter case is slightly smaller than 0.5 is in agreement with results from other local quantum-chaotic quadratic models, e.g., the 3D Anderson modelย <cit.>.
In Fig.ย <ref> we compare results for โจโจฮดn_avโฉโฉ and โจโจฮดn_maxโฉโฉ that are averaged over 80% of eigenstates in the middle of the single-quasiparticle spectrum, with those averaged over 500 eigenstates in the middle of the spectrum.
While the choice of averaging does not yield any significant differences at ฮ=0.1, see Fig.ย <ref>(a), it gives rise to a markedly different scaling with V at ฮ=0.5, see Fig.ย <ref>(b).
In particular, when ฮ=0.5 and the averages are carried out over 500 eigenstates in the middle of the spectrum, the eigenstate-to-eigenstate fluctuations decay much faster than โ V^-0.5.
We comment on this property in more detail in the next section, where a similar scaling is observed for the offdiagonal matrix elements. We can trace it back to the emergence of near zero modes.
ยง.ยง.ยง Variances
Variances of matrix elements are standard indicators of the ETH.
In particular, it has been realized that it is convenient to study variances in narrow energy windowsย <cit.>, in which the impact of the structure of matrix elements can be neglected.
We define the variance of diagonal matrix elements as
ฯ_diag^2=||ฮ||^-1โ_|ฮฑโฉโฮn_ฮฑฮฑ^2-(||ฮ||^-1โ_|ฮฑโฉโฮn_ฮฑฮฑ)^2 ,
and the variance of offdiagonal matrix elements as
ฯ_off^2=||ฮ'||^-1โ_|ฮฑโฉ,|ฮฒโฉโฮ', |ฮฑโฉโ |ฮฒโฉ n_ฮฑฮฒ^2-(||ฮ'||^-1โ_|ฮฑโฉ,|ฮฒโฉโฮ', |ฮฑโฉโ |ฮฒโฉ n_ฮฑฮฒ)^2 ,
where ฮ (ฮ') is a set comprising diagonal (offdiagonal) matrix elements created from 200 energy eigenstates around the target energy ฯต, such that ||ฮ||=200 (||ฮ'||=39800). In all calculations, we establish ฯ_diag^2 and ฯ_off^2 for a single disorder realization, which we then average over 100 disorder realizations. The latter averages are denoted as โจโจฯ_diag^2โฉโฉ and โจโจฯ_off^2โฉโฉ.
We calculate the variances at two target energies, i.e., ฯตโ 0 in Fig.ย <ref>(a) and ฯตโฯต_ max/2 in Fig.ย <ref>(b).
For both cases at ฮ=0.1, and for ฯตโฯต_ max/2 at ฮ=0.5, the variances behave as expected for quantum-chaotic quadratic Hamiltoniansย <cit.>.
Specifically, both โจโจฯ_diag^2โฉโฉ and โจโจฯ_off^2โฉโฉ scale approximately as 1/V. The least-squares fits of a two-parameter function aV^-ฮถ are presented in Figs.ย <ref>(a)-<ref>(b). The exception is ฯตโฯต_ max/2 at ฮ=0.5, for which the least-squares fit is not fully reliable for the accessible V. Moreover, the ratio
ฮฃ^2= โจโจฯ_diag^2โฉโฉ/โจโจฯ_off^2โฉโฉ
is close to the value ฮฃ^2_ GOE=2 predicted by the GOEย <cit.>, see the insets of Fig.ย <ref>(a) andย <ref>(b).
The deviation from ฮฃ^2_ GOE=2 is expected to be a finite-size effect. Indeed, we show in Fig.ย <ref>(c) that the difference ฮฃ^2-2 at ฯตโฯต_ max/2 for both ฮ=0.1 and 0.5 vanishes in the thermodynamic limit as bV^-ฮบ with ฮบโ 0.7.
Special attention should be devoted to the variances in the middle of the spectrum (ฯตโ 0) at ฮ=0.5, for which the results in Fig.ย <ref>(a) show a decay that is much faster than 1/V.
We argue that such a decay is a consequence of an increasing number of near zero modes that are included in the variances in Eqs.ย (<ref>) andย (<ref>).
Namely, assuming that the number of near zero modes increases as a_0 V^1-ฮถ, with 0<1-ฮถ<1 as suggested by Fig.ย <ref>(a), we develop a two-fluid approximation for the variances in which the squared matrix elements are zero for near zero modes and proportional to 1/V otherwise.
This yields (see Appendixย <ref> for details)
ฯ_ diag^2 โ1/V(1-(V/V^*)^1-ฮถ) , ฯ_ off^2 โ1/V(1-(V/V^*)^2-2ฮถ) ,
where V^*=(||ฮ||/a_0)^1/(1-ฮถ) is the number of lattice sites when the number of near zero modes becomes equal to ||ฮ||.
For the system sizes under investigation, V < V^*, so that the variances are small yet nonzero.
The results in Eq.ย (<ref>) yield a faster than 1/V decay of the variances, which is further discussed in Appendixย <ref>.
ยง QUANTUM QUENCHES AND EQUILIBRATION
Finally we turn our attention to the consequences of single-quasiparticle eigenstate thermalization for nonequilibrium quantum dynamics.
In a recent workย <cit.>, it has been proved that the equilibration of observables in the many-body states is guaranteed for quadratic Hamiltonians that comply with the single-particle eigenstate thermalization. In this section, we show that it is also guaranteed for quadratic Hamiltonians that comply with the single-quasiparticle eigenstate thermalization.
We consider a quantum quench protocol, in which we prepare a system in a pure many-body state |ฮจ_0โฉ, and evolve it under a Hamiltonian ฤค=โ_ฮฑ=1^V2ฯต_ฮฑfฬ_ฮฑ^โ fฬ_ฮฑ, for which |ฮจ_0โฉ is not an eigenstate.
We focus on one-body observables ร= โ_i,j=1^V O_ijฤ_i^โ ฤ_j of rank ๐(1), namely, on one-body observables that have an ๐(1) number of nondegenerate eigenvalues in the single-particle spectrum.
The simplest examples of such observables are the site occupation operator nฬ_i = ฤ_i^โ ฤ_i studied in Sec.ย <ref>, and the quasimomentum occupation operator that in one dimension has the simple form mฬ_k = โ_l,j=1^V 1/V e^i(l-j)kฤ_l^โ ฤ_j.
However, the analysis can be generalized to extensive observables with rank ๐(V).
The observables of interest have the following form
ร = โ_ฮฑ,ฮฒ=1^V(O_ฮฑฮฒfฬ_ฮฑ^โ fฬ_ฮฒ+A_ฮฑฮฒfฬ_ฮฑ^โ fฬ_ฮฒ^โ +A_ฮฑฮฒ^*fฬ_ฮฒfฬ_ฮฑ),
where O_ฮฑฮฒ=โจฮฑ | ร | ฮฒโฉ=โ_i,j=1^V O_ij(u_iฮฑu_jฮฒ-v_iฮฑv_jฮฒ) is hermitian, while A_ฮฑฮฒ=โจฮฑ|รฮ|ฮฒโฉ=โ_i,j=1^V O_iju_iฮฑv_jฮฒ is antisymmetric. We can rewrite ร in the Nambu representation
ร=Fฬ^โ [ 1/2O A; -A^* -1/2O^* ]Fฬ
= Fฬ^โ ๐ชFฬ,
where Fฬ=[fฬ_1 ... fฬ_V fฬ_1^โ ... fฬ_V^โ ]^T is a 2Vร 1 vector introduced in Sec.ย <ref>, and the matrix O should be distinguished from the smooth function O(ฯตฬ) in Eq.ย (<ref>).
Note that the matrix O from Eq.ย (<ref>) can be understood as being composed of matrix elements of ร between energy eigenstates from the same symmetry sector, while the matrix A as being composed of matrix elements of ร between energy eigenstates from different symmetry sectors. As argued in Ref.ย <cit.>, the behaviour (e.g., the scaling of the variance) of offdiagonal matrix elements that connect energy eigenstates from the same versus different symmetry sectors is qualitatively similar.
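In matrix form, the Nambu block structure above can be assembled as follows (an illustrative numpy sketch; O and A are the V ร V blocks defined above and the function name is ours):

import numpy as np

def nambu_observable(O, A):
    # script-O = [[ O/2, A], [-A*, -O*/2]], with O hermitian and A antisymmetric
    return np.block([[0.5 * O, A],
                     [-A.conj(), -0.5 * O.conj()]])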
We are interested in the time evolution, so we express ร in the Heisenberg representation
ร(t)=e^iฤคtรe^-iฤคt =
โ_ฮฑ,ฮฒ=1^V O_ฮฑฮฒe^2i(ฯต_ฮฑ-ฯต_ฮฒ)tfฬ_ฮฑ^โ fฬ_ฮฒ
+ โ_ฮฑ,ฮฒ=1^V A_ฮฑฮฒe^2i(ฯต_ฮฑ+ฯต_ฮฒ)tfฬ_ฮฑ^โ fฬ_ฮฒ^โ
+ โ_ฮฑ,ฮฒ=1^V A_ฮฑฮฒ^*e^-2i(ฯต_ฮฑ+ฯต_ฮฒ)tfฬ_ฮฒfฬ_ฮฑ,
and calculate its expectation value in the initial stateย |ฮจ_0โฉ
โจร(t)โฉ =
โ_ฮฑ,ฮฒ=1^V O_ฮฑฮฒe^2i(ฯต_ฮฑ-ฯต_ฮฒ)t R_ฮฑฮฒ
+ โ_ฮฑ,ฮฒ=1^V A_ฮฑฮฒe^2i(ฯต_ฮฑ+ฯต_ฮฒ)t M_ฮฑฮฒ
+ โ_ฮฑ,ฮฒ=1^V A_ฮฑฮฒ^*e^-2i(ฯต_ฮฑ+ฯต_ฮฒ)t M_ฮฑฮฒ^*,
where R_ฮฑฮฒ=โจฮจ_0|fฬ_ฮฑ^โ fฬ_ฮฒ|ฮจ_0โฉ and M_ฮฑฮฒ=โจฮจ_0|fฬ_ฮฑ^โ fฬ_ฮฒ^โ |ฮจ_0โฉ. These matrices are parts of the generalized one-body correlation matrix
ฯ=
[ R M; -M^* -R^* ] ,
whose eigenvalues for fermions belong to the interval [0,1], so that Tr[ฯ^2]โคTr[ฯ]โค 2Vย <cit.>.
The equilibration of an observable means that the temporal fluctuations of its expectation value about the infinite-time average vanish in the thermodynamic limit. The temporal fluctuations can be characterized by the variance
ฯ_t^2 = \overline{โจร(t)โฉ^2}-\overline{โจร(t)โฉ}^2,
where the overline \overline{f(t)}=lim_ฯโโฯ^-1โซ_0^ฯ f(t) dt denotes the infinite-time average.
We calculate the time average as
\overline{โจร(t)โฉ} =โ_ฮฑ,ฮฒ=1^V O_ฮฑฮฒ R_ฮฑฮฒ\overline{e^2i(ฯต_ฮฑ-ฯต_ฮฒ)t}
+ โ_ฮฑ,ฮฒ=1^V A_ฮฑฮฒ M_ฮฑฮฒ\overline{e^2i(ฯต_ฮฑ+ฯต_ฮฒ)t}
+ โ_ฮฑ,ฮฒ=1^V A_ฮฑฮฒ^* M_ฮฑฮฒ^*\overline{e^-2i(ฯต_ฮฑ+ฯต_ฮฒ)t}.
In quantum-chaotic quadratic models, there are no degeneracies in the single-quasiparticle energy spectrum.
Therefore, the first term on the r.h.s.ย of Eq.ย (<ref>) is nonvanishing if and only if ฮฑ=ฮฒ, while the last two terms vanish. The infinite time average hence simplifies to \overline{โจร(t)โฉ}=โ_ฮฑ=1^VO_ฮฑฮฑR_ฮฑฮฑ. Using similar arguments and the assumption of no gap degeneracies in the single-quasiparticle energy spectrum, we arrive at
\overline{โจร(t)โฉ^2} =\overline{โจร(t)โฉ}^2+โ_ฮฑโ ฮฒ|O_ฮฑฮฒ|^2|R_ฮฑฮฒ|^2
+ 4โ_ฮฑ,ฮฒ=1^V|A_ฮฑฮฒ|^2|M_ฮฑฮฒ|^2,
so that the variance is given by
ฯ_t^2=โ_ฮฑโ ฮฒ|O_ฮฑฮฒ|^2|R_ฮฑฮฒ|^2 + 4โ_ฮฑ,ฮฒ=1^V|A_ฮฑฮฒ|^2|M_ฮฑฮฒ|^2.
To determine the upper bound for the temporal fluctuations, we note that |1/2O_ฮฑฮฒ|^2โค|max(๐ช_ฮฑฬฮฒฬ)|^2 for all ฮฑโ ฮฒ, and |A_ฮฑฮฒ|^2โค|max(๐ช_ฮฑฬฮฒฬ)|^2 for all ฮฑ,ฮฒ, where ฮฑฬ,ฮฒฬโ{1,2,...,2V} run over the Bogoliubov-de Gennes basis, while max(๐ช_ฮฑฬฮฒฬ) is the maximal offdiagonal matrix element of ๐ช from Eq.ย (<ref>). We can now write
ฯ_t^2 โค 4|max(๐ช_ฮฑฬฮฒฬ)|^2 โ_ฮฑ,ฮฒ(|R_ฮฑฮฒ|^2+|M_ฮฑฮฒ|^2) .
Since the double sum in Eq.ย (<ref>) simplifies to โ_ฮฑ,ฮฒ(|R_ฮฑฮฒ|^2+|M_ฮฑฮฒ|^2)=Tr(R^2-MM^*)=1/2Tr(ฯ^2), we obtain
ฯ_t^2 โค 2|max(๐ช_ฮฑฬฮฒฬ)|^2Tr(ฯ^2)โค 4 |max(๐ช_ฮฑฬฮฒฬ)|^2 V .
Since the relation between one-body observables and their counterparts normalized in the single-quasiparticle sector is รโรโ(V) (see Eq.ย (<ref>) and Ref.ย <cit.>), the single-quasiparticle eigenstate thermalization in quantum-chaotic quadratic Hamiltonians results in |max(๐ช_ฮฑฬฮฒฬ)|^2 Vโ |max(๐ช_ฮฑฬฮฒฬ)|^2โ 1/V. Hence, the equilibration of
these one-body observables is guaranteed in the thermodynamic limit.
In Fig.ย <ref>, we numerically test equilibration of a site occupation operatorย nฬ, for which matrix elements in single-quasiparticle eigenstates were studied in Sec.ย <ref>.
In the quantum quench setup, the system is prepared in the many-quasiparticle ground state at disorder Wฬ=W+5.
Specifically, we construct a state with N=V/2 quasiparticles with the lowest positive energiesย ฯต_ฮฑ on top of the many-particle ground state (i.e., the Bogoliubov vacuum).
The state is evolved with a Hamiltonian at disorder W and a different disorder realization.
Figuresย <ref>(a)-<ref>(d) show the time evolution of โจnฬ(t)โฉ-\overline{โจnฬ(t)โฉ}, where \overline{โจnฬ(t)โฉ} is the long-time average calculated within the time interval tโ[10^2, 10^5].
The temporal fluctuations ฯ_t as functions of V are shown in Fig.ย <ref>(e).
For the quenches at W=5, see Fig.ย <ref>(a) [ฮ=0.1] and Fig.ย <ref>(c) [ฮ=0.5], the temporal fluctuations decrease with increasing system size, and a scaling ฯ_tโ V^-ฮถ with ฮถโ0.5 is observed in Fig.ย <ref>(e).
For this choice of parameters, the Hamiltonian is quantum-chaotic quadratic and the single-quasiparticle eigenstate thermalization was demonstrated in Sec.ย <ref>.
Then, equilibration of the observable is guaranteed by Eq.ย (<ref>).
The situation is different at a large disorder when the system exhibits signatures of localization, as demonstrated in Fig.ย <ref> by the IPR analysis.
Examples are shown in Fig.ย <ref>(b) [W=30, ฮ=0.1] and Fig.ย <ref>(d) [W=60, ฮ=0.5].
In these cases, the temporal fluctuations do not decrease with increasing system size and a scaling ฯ_tโ V^-ฮถ with ฮถโ0 is observed in Fig.ย <ref>(e).
ยง CONCLUSIONS
We studied the wavefunction properties and the observable matrix elements in single-quasiparticle eigenstates of a quantum-chaotic quadratic Hamiltonian without the particle number conservation.
Our main goal was to extend the concept of single-particle quantum chaos to general quadratic Hamiltonians.
In particular, we introduced the notion of single-quasiparticle eigenstate thermalization, which is the ansatz for the matrix elements of observables in single-quasiparticle eigenstates.
We also argued about an important consequence of single-quasiparticle eigenstate thermalization: it guarantees equilibration of observables in the long-time dynamics after quantum quenches at arbitrary quasiparticle numbers.
The proof of equilibration carries analogies with the recent proof of equilibration in quantum-chaotic quadratic Hamiltonians with the particle number conservationย <cit.>.
Although we only considered one-body observables, we expect the proof to be valid for multi-body observables, as in the Hamiltonians with the particle number conservationย <cit.>.
Beside these generic features, the Hamiltonian under investigation may also exhibit a considerable number of zero and near zero modes.
When present, they manifest themselves as a sharp peak in the single-quasiparticle density of states and may violate the single-quasiparticle eigenstate thermalization ansatz.
However, their relative number appears to vanish in the thermodynamic limit, and in this sense they carry similarities with the zero modes recently observed in quantum-chaotic tight-binding billiardsย <cit.>.
The role of zero modes in nonequilibrium quantum dynamics after quantum quenches is an interesting topic that deserves more attention in future works.
ยง ACKNOWLEDGEMENTS
We acknowledge discussions with M. Mierzejewski, and the support from the Slovenian Research Agency (ARRS), Research core funding Grants No.ย P1-0044 and No.ย N1-0273 (L.V.). Numerical studies in this
work have been partially carried out using resources provided by
the Wroclaw Centre for Networking and Supercomputing <cit.>, Grant No. 579 (P. T., P. ล.).
ยง SCALING OF VARIANCES OF MATRIX ELEMENTS IN THE MIDDLE OF THE SPECTRUM
In Fig.ย <ref>(a) we observed a rapid decrease of variances with V at W=5 and ฮ=0.5.
This occurs in the middle of the spectrum (ฯตโ 0), in which near zero and zero modes are included in the calculations of the variances in Eqs.ย (<ref>) andย (<ref>).
As argued in Sec.ย <ref>, the zero modes exhibit diagonal matrix elements that are vanishingly small and independent of the eigenstate index ฮฑ, and offdiagonal matrix elements that are zero.
To understand the rapid decrease of variances with V, we develop a two-fluid model in which the squares of matrix elements are either exactly zero (contribution from zero modes) or they scale as 1/V (contributions from other states).
We consider the following ansatz, inspired by Fig.ย <ref>(a), for the number N of zero modes: N = a_0 V^1-ฮถ, where a_0 is a constant and 0<1-ฮถ<1.
Since the total number of diagonal matrix elements included in the variance is ||ฮ||, the number of nonzero matrix elements is then ||ฮฬ|| = ||ฮ|| - N.
The system size V^* = (||ฮ||/a_0)^1/(1-ฮถ) at which the variance is determined by the zero modes only is given by the condition ||ฮฬ||=0.
In our calculations, we consider V < V^*.
The nonzero contribution to the variance of the diagonal matrix elements is then
ฯ_ diag^2 = 1/||ฮ||||ฮฬ||/V( 1/||ฮฬ||โ_ฮฑโฮ' V n_ฮฑฮฑ^2 ) .
If the system supports the single-quasiparticle eigenstate thermalization, the term in the parenthesis is a constant. We write it symbolically asย c_ diag.
The variance is then simplified to
ฯ_ diag^2 = 1/V c_ diag(1-(V/V^*)^1-ฮถ) .
In the case of the offdiagonal matrix elements, the total number of matrix elements included in the variance is roughly ||ฮ||^2 and the number of nonzero matrix elements is then roughly ||ฮฬ||^2 = ||ฮ||^2 - N^2.
The analysis similar to the one carried out for the diagonal matrix elements yields
ฯ_ off^2 = 1/||ฮ||^2||ฮฬ||^2/V( 1/||ฮฬ||^2โ_ฮฑ,ฮฒโฮ' V n_ฮฑฮฒ^2 ) .
Again, if the system supports the single-quasiparticle eigenstate thermalization, the term in the parenthesis is a constant. We write it symbolically asย c_ off.
The variance then takes the following form
ฯ_ off^2 = 1/V c_ off(1-(V/V^*)^2-2ฮถ) .
We demonstrate in Fig.ย <ref> that the finite-size scalings of ฯ^2_diag and ฯ^2_off, which have been established in Eqs.ย (<ref>) and (<ref>), qualitatively agree with the numerical simulations, which have been presented in Fig.ย <ref>(a). The variances are proportional to 1/V when Vโช V^*, and they begin to decrease much faster when Vโ V^*. However, there are some quantitative differences. They most likely follow from the fact that the number of zero modes is not exactly polynomial in the system size V (see Fig.ย <ref>), and that the matrix elements of near zero modes have an additional structure in the energy difference ฯ (see Figs.ย <ref> and <ref>) that we neglected. It is also somewhat surprising that c_diag<c_off. Further improvements of the two-fluid model are out of the scope of this manuscript.
ยง DISTRIBUTIONS OF MATRIX ELEMENTS
We complement the results in Sec.ย <ref> by studying the distributions of matrix elements of nฬ.
It was recently shown that one-body observables in single-particle eigenstates of quantum-chaotic quadratic Hamiltonians may not be Gaussianย <cit.> (see alsoย <cit.>), and we expect non-Gaussian distributions here as well.
As a simple analytical prediction we consider the approximation in which the coefficients u_iฮฑ and v_iฮฑ are
random variables drawn from the normal distribution with zero mean and variance ฯ^2=1/2V,
P_u(x)=P_v(x)=1/โ(2ฯฯ^2)exp(-x^2/2ฯ^2) .
This approximation, also referred to as the RMT approximation, gave rise to very accurate predictions for the distributions of matrix elements in the 3D Anderson modelย <cit.> and tight-binding billiardsย <cit.>.
The RMT approximation simplifies the following sums of coefficients:
โ_ฮฑ=1^V u_iฮฑ^2 โ1/2, โ_ฮฑ=1^V v_iฮฑ^2 โ1/2, and โ_ฮฑ=1^Vu_iฮฑv_iฮฑโ 0, which in turn simplify the expressions for the norm and the trace,
N โ 1/(4V) + (V-1)/(4V) + 2/(4V) - 1/4 = 1/(2V) , T โ 1/2 .
The diagonal matrix elements of the normalized operator nฬ from Eq.ย (<ref>) are then
n_ฮฑฮฑ=โ(2V)(u_iฮฑ^2-v_iฮฑ^2)
and the offdiagonal matrix elements are
n_ฮฑฮฒ=โ(2V)(u_iฮฑu_iฮฒ-v_iฮฑv_iฮฒ) .
Using these expressions, and some algebra of random variables that is reported in Appendicesย <ref> andย <ref>, one obtains predictions for the distributions of matrix elements.
The distribution of diagonal matrix elements n_ฮฑฮฑ is
P_n_ฮฑฮฑ(x)=1/ฯโ(V/2)K_0(โ(V/2)|x|),
where K_0 stands for the modified Bessel function of the second kind.
The distribution of offdiagonal matrix elements n_ฮฑฮฒ is
P_n_ฮฑฮฒ(x) = โ(V/2)exp(-โ(2V) |x| ) ,
see Appendicesย <ref> andย <ref> for the details of the derivation.
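The two analytical predictions can also be checked against direct sampling of the coefficients within the RMT approximation (a minimal sketch; scipy.special.k0 provides the modified Bessel function, and all names are ours):

import numpy as np
from scipy.special import k0

V = 16 ** 3
sigma = np.sqrt(1.0 / (2 * V))                  # variance 1/(2V) of u_{i alpha}, v_{i alpha}
rng = np.random.default_rng(1)
u1, v1, u2, v2 = rng.normal(0.0, sigma, size=(4, 200_000))

n_diag = np.sqrt(2 * V) * (u1 ** 2 - v1 ** 2)   # samples of n_{alpha alpha}
n_off = np.sqrt(2 * V) * (u1 * u2 - v1 * v2)    # samples of n_{alpha beta}

x = np.linspace(-0.2, 0.2, 400)                 # an even number of points avoids x = 0, where K_0 diverges
p_diag = np.sqrt(V / 2) / np.pi * k0(np.sqrt(V / 2) * np.abs(x))
p_off = np.sqrt(V / 2) * np.exp(-np.sqrt(2 * V) * np.abs(x))
# histograms of n_diag and n_off can now be compared with p_diag and p_off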
Figuresย <ref> andย <ref> compare the analytical expressions from Eqs.ย (<ref>)-(<ref>) to the actual numerical results.
Atย ฮ=0.1, the agreement is reasonably good for both the diagonal matrix elements [Figs.ย <ref>(a) andย <ref>(b)] and the offdiagonal matrix elements [Figs.ย <ref>(a) andย <ref>(b)].
The deviations can be observed near the tails of the distributions.
In particular, the distributions calculated from the matrix elements between eigenstates in the middle of the spectrum (ฯตโ 0) deviate less than those calculated from matrix elements between eigenstates away from the middle of the spectrum (ฯตโฯต_ max/2).
At ฮ=0.5, see Figs.ย <ref>(c)-<ref>(d) andย <ref>(c)-<ref>(d), we only show the distributions away from the middle of the spectrum (ฯตโฯต_ max/2) since both diagonal and offdiagonal matrix elements are close to zero in the middle of the spectrum (ฯตโ 0).
The predictions from Eqs.ย (<ref>)-(<ref>) are incompatible with the numerical results (see the solid lines).
This is a consequence of the variances being much smaller than the ones predicted by the simple RMT approximation.
For example, when the ฮ=0.5 case is compared to the ฮ=0.1 case, the ratio of their variances at ฯตโฯต_ max/2 for the largest system sizes Vโฅ 18^3 is given by
ฯ^2_diag/off=โจโจฯ^2_diag/off(ฮ=0.5)โฉโฉ/โจโจฯ^2_diag/off(ฮ=0.1)โฉโฉโ 10^-1 .
We then rescale the distributions according to the prescription
P_n_ฮฑฮฑ(x)โ P_n_ฮฑฮฑ(x/ฯ_diag)/ฯ_diag ,
P_n_ฮฑฮฒ(x)โ P_n_ฮฑฮฒ(x/ฯ_off)/ฯ_off ,
which we plot as dashed lines in Figs.ย <ref>(c)-<ref>(d) andย <ref>(c)-<ref>(d).
This rescaling improves the agreement with the numerical results.
However, the deviations near the tails of the distributions are still visible.
Nevertheless, they are similar to the deviations at ฮ=0.1 and ฯตโฯต_ max/2 shown in Figs.ย <ref>(a)-<ref>(b) andย <ref>(a)-<ref>(b).
ยง.ยง Distributions of diagonal matrix elements
We first present the derivation of the distribution of n_ฮฑฮฑ from Eq.ย (<ref>). Below, we show that it is related to the distribution of a difference of two random variables drawn from ฯ_1^2, i.e., from the chi-square distribution with one degree of freedom.
The square of a random number from the Gaussian distribution belongs to the chi-square distribution for x โฅ 0,
P_u^2(x)=P_v^2(x)=1/โ(x)1/โ(2ฯฯ^2)exp(-x/2ฯ^2)
and vanishes for x<0 (see also Appendixย D of Ref.ย <cit.>). Therefore, the distribution of a difference w=u_iฮฑ^2-v_iฮฑ^2 can be calculated as
P_w(y) =โซ_-โ^โโซ_-โ^โ dx dx^'
P_u^2(x)P_v^2(x^')ฮด(y-(x-x^'))
=โซ_-โ^โ dx
P_u^2(x)P_v^2(x-y),
which takes the following form after introducing the chi-square distributions:
P_w(y)=exp(y/2ฯ^2)/2ฯฯ^2โซ_แปน^โdx exp(-x/ฯ^2)/โ(x(x-y)),
where แปน=y for yโฅ 0 and แปน=0 for y<0. Finally, we arrive at
P_w(y)=1/2ฯฯ^2K_0(|y|/2ฯ^2),
which is proportional to the Bessel function of the second kind K_0. Thus, the distribution of diagonal matrix elements n_ฮฑฮฑ is given by
P_n_ฮฑฮฑ(y)=โ(N)P_w(โ(N)y)=
1/ฯโ(V/2)K_0(โ(V/2)|y|).
Note that P_n_ฮฑฮฑ(x) is the same as the distribution of diagonal matrix elements of the nearest-neighbor hopping in the quantum-chaotic quadratic Hamiltonians that conserve the particle numberย <cit.>.
ยง.ยง Distributions of offdiagonal matrix elements
Finally, we present the derivation of the distribution of n_ฮฑฮฒ from Eq.ย (<ref>). Below, we show that it is related to the distribution of a difference of two random variables from the function K_0, which itself is the distribution of a product of normal random variables.
The product of two random numbers from the Gaussian distribution is given by the Bessel function of the second kind K_0 (see also Appendixย D of Ref.ย <cit.>),
P_uu(x)=P_vv(x)=1/ฯฯ^2K_0(|x|/ฯ^2).
Therefore, the distribution of a difference q=u_iฮฑu_iฮฒ-v_iฮฑv_iฮฒ can be calculated as
P_q(y)=โซ_-โ^โdx P_uu(x)P_vv(x-y).
This integral can be evaluated with the help of the so-called characteristic functions and the Fourier transform (for further details see Appendixย D of Ref.ย <cit.>),
P_q(y) =1/ฯ^2ฯ^4โซ_-โ^โdx K_0(|x|/ฯ^2)K_0(|x-y|/ฯ^2)
= 1/2ฯ^2exp(-|y|/ฯ^2),
and corresponds to the exponential distribution. Thus, the distribution of offdiagonal matrix elements n_ฮฑฮฒ is given by
P_n_ฮฑฮฒ(y)=โ(N)P_q(โ(N)y)=
โ(V/2)exp(-โ(2V)|y|).
Note that P_n_ฮฑฮฒ(x) is the same as the distribution of offdiagonal matrix elements of the nearest-neighbor hopping in the quantum-chaotic quadratic Hamiltonians that conserve the particle numberย <cit.>.
entry_id: http://arxiv.org/abs/2306.06232v1
published: 20230609200722
title: Probing self-supervised speech models for phonetic and phonemic information: a case study in aspiration
authors: Kinan Martin, Jon Gauthier, Canaan Breiss, Roger Levy
primary_category: cs.CL
categories: cs.CL, cs.SD, eess.AS
Probing self-supervised speech models for phonetic and phonemic information: a case study in aspiration
Kinan Martin, Jon Gauthier, Canaan Breiss, Roger Levy
June 9, 2023
================================================================================================================================
Textless self-supervised speech models have grown in capabilities in recent years, but the nature of the linguistic information they encode has not yet been thoroughly examined. We evaluate the extent to which these models' learned representations align with basic representational distinctions made by humans, focusing on a set of phonetic (low-level) and phonemic (more abstract) contrasts instantiated in word-initial stops. We find that robust representations of both phonetic and phonemic distinctions emerge in early layers of these models' architectures, and are preserved in the principal components of deeper layer representations. Our analyses suggest two sources for this success: some can only be explained by the optimization of the models on speech data, while some can be attributed to these models' high-dimensional architectures.
Our findings show that speech-trained HuBERT derives a low-noise and low-dimensional subspace corresponding to abstract phonological distinctions.
Index Terms: self-supervised models, decoding analysis, probing, speech, representation learning, phonemes
ยง INTRODUCTION
Self-supervised learning techniques have become the new standard for speech representation learning in recent years, and are at the foundation of models such as HuBERT <cit.> and wav2vec <cit.>, which are establishing new states of the art in automatic speech recognition <cit.>. These systems are pre-trained entirely on unlabeled data before being fine-tuned downstream for particular tasks. As such, they are free to derive whatever representations are optimal for their self-supervised pre-training task.
Recent work has asked whether the representations derived by these models are human-like at multiple levels of granularity, from their specific representational contents <cit.> to their broad alignment with human brain responses to speech stimuli <cit.>.
Past work <cit.> shows that representations of speech stimuli extracted from self-supervised models can be successfully aligned with abstract phonetic descriptions of the input.
However, this prior work neglects a representational distinction crucial to linguistic theory: the difference between the phonetic level of representations and the phonemic level. Unlike phonetic representations, which are closely related to the acoustic features that implement them, phonemic representations function in linguistic theory as the axes of contrast which underpin lexical and grammatical distinctions and which are core to the effective use of language. Because these two levels are highly correlated in practice, distinguishing phonetic representational systems from phonemic ones is nontrivial. This paper explores whether self-supervised models encode distinct representations of speech inputs at both levels.
ยง.ยง Phones, phonemes, and allophones
In the context of spoken languages, phonologists use the term phone to describe the different phonetically-distinct categories that are targets of speakers' production and perception systems: for example, the b in [əˈbaʊt] about and the p in [ˈpʰæt] pat are perceptually distinctive for a speaker of English, and thus are classified as separate phones.[We give phonetic transcription in [square brackets] and phonemic transcription in /slashes/ with the International Phonetic Alphabet.]
However, there are many cases where multiple phones realize the same linguistically meaningful class of sounds, the phoneme. For example, the l sounds in [mɪɫk] milk and in [lin] lean are perceptually distinguishable but not linguistically contrastive: there are no words that differ in their meaning based solely on whether the l sound is dark [ɫ] or clear [l]. These different phonetic realizations of the same phoneme are called allophones <cit.>.
ยง.ยง Probing models for phonetic and phonemic knowledge
Contrast | Group 1 | Group 2 | Confound group
Phonemic stop (ex. labial) | #ph V, #sp V | #b V | #s{k,t,l,m,n,w} V
Phonetic stop (ex. labial) | #ph V | #sp V, #b V | #s{k,t,l,m,n,w} V
Consonant vs. vowel (+) | C | V | N/A
Primary vs. no stress (+) | V1 | V0 | N/A
Distant phoneme (-) | C X X X V | V X X X V | N/A
Distant phoneme (-) | V X X X C | V X X X V | N/A
Description of the stimuli entering into each contrast tested in our analyses. Columns denote targets of a classifier applied to input representations, and column contents denote IPA phonetic patterns used to select the relevant audio frames. # denotes word onset; C a consonant; V a vowel optionally followed by a stress number (1 = primary stress, 0 = no stress)<cit.>; X any phoneme; {} a set of options; (+)/(-) positive/negative control tests. Representations were extracted from the onset to the offset of the underlined phone in each matching audio file.
Although most phonemes do not share allophones, in cases of neutralization, a specific phone could belong to one of two representationally-distinct phonemes. In this paper, we focus on the neutralization of post-sibilant and word-initial plosives in English. The top left panel of Figureย <ref> demonstrates the relationship between the word forms peak, speak, and beak, along with their phonemic and phonetic representations. In isolation, the phone [p] (voiceless, with short voice onset time) is ambiguous as to whether it represents phonemic /p/ (as in [spik] speak) or word-initial phonemic /b/ (as in [pik] beak) <cit.>.
This paper tests whether self-supervised speech models derive distinctly phonemic representations of their speech input, in addition to phonetic ones, focusing on the case of neutralization. Ultimately, we find that these speech models learn robust representations of phonetic and phonemic content, suggesting that these models learn human-like representations of acoustic input.
ยง METHODS
We design a probing experiment <cit.> to test whether self-supervised speech model representations contain distinct phonetic and phonemic contents. We use the Massive Auditory Lexical Decision (MALD) database <cit.>, which contains recordings of 26,793 words and 9,592 pseudowords, each uttered in isolation by a single speaker, accompanied by time-aligned phonemic annotation.
We convert the recordings to mono with a 16 kHz sample rate, and present them to a suite of self-supervised speech models, each pre-trained on different datasets.
ยง.ยง Speech models and representations
Our tests evaluate the representations of HuBERT <cit.>, a popular self-supervised speech representation architecture. This model has been widely used as a component of larger spoken language processing pipelines and has proven useful for downstream language tasks <cit.>. It has also been analyzed in other works probing self-supervised speech models <cit.>.
The HuBERT architecture[We analyze the HuBERT Base model in this paper, which has approximately 95 million parameters.] is composed of a seven-layer convolutional neural network (CNN) which feeds into a twelve-layer transformer. The CNN takes in raw audio at 16 kHz and yields a sequence of frames, each 20 milliseconds long (50 Hz). These frames are then fed through a stack of 12 transformer layers. In our work, we extract the representations output by each of the CNN layers and transformer blocks.[HuBERT also includes a final classification layer which predicts the identity of masked input frames. These classification weights and the discrete output codes are not studied in this paper.]
We evaluated instances of the HuBERT architecture trained on three different objectives. These objectives were selected to help us distinguish the specific effects of training on speech data:
* speech HuBERT: trained to predict masked frames on speech data from LibriSpeech <cit.>, as released by <cit.>
* non-speech HuBERT: trained on non-speech audio from the AudioSet dataset <cit.>, as released by <cit.>
* random HuBERT: a matching architecture with randomly initialized CNN and transformer weights
The non-speech and random models were included to examine the extent to which the audio-based non-speech training objective and architecture alone facilitate the learning of phonological representations, respectively.
We also analyzed a log-mel representation of the acoustic input, which computes the log-power of acoustic input within 80 mel frequency bands as computed by Kaldi's routine. This model targets low-level features of the acoustic signal, acting as a suitable baseline of comparison for the more complex HuBERT-based speech models.
HuBERT produces a representation h_โ(t) for each 20ms frame of the audio input at each layer โ. We first extract frame-level representations h_โ(t) for each file in the MALD dataset and for each HuBERT model instance.
Because phones typically span more than one frame, we take a model's representation of a phone to be its mean activation over the overlapping frames h_โ(t) <cit.>. This aggregation produces a vector of size d for each phone in the dataset, which is the object of our analyses.
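As a sketch of this aggregation step (assuming frame-level features have already been extracted as a [T, d] array at the model's 50 Hz frame rate, and that phone onsets/offsets come from the MALD alignments in seconds; the function and variable names here are ours, not the released code):

import numpy as np

FRAME_RATE_HZ = 50.0  # HuBERT emits one frame per 20 ms

def phone_vector(frame_feats, onset_s, offset_s):
    # mean-pool the frames h_l(t) overlapping the phone's time span
    start = int(np.floor(onset_s * FRAME_RATE_HZ))
    stop = max(start + 1, int(np.ceil(offset_s * FRAME_RATE_HZ)))
    return frame_feats[start:stop].mean(axis=0)  # one d-dimensional vector per phone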
ยง.ยง Phonetic and phonemic probes
We designed a set of classification tasks exploiting the pattern of phonetic neutralization of the stop-voicing contrast in English, described in Sectionย <ref>. We implemented these tasks for triples of phones at three places of articulation: labial (/p/, /sp/, /b/), alveolar (/t/, /st/, /d/), and velar (/k/, /sk/, /g/). Tableย <ref> demonstrates this scheme for the labial place of articulation, which we use for exposition in the main text. Quantitative results are the average of identical tasks at three places of articulation.
For each place of articulation, we design phonemic and phonetic multinomial regression probes p_m and p_t. Given a representation of a particular phone at hidden layer h_โ, our probes learn classifier weights
p_m(h_โ) โexp(W_m^T h_โ); p_t(h_โ) โexp(W_t^T h_โ)
The phonemic classifier p_m is tasked with distinguishing phonemic /p/ from phonemic /b/ as realized in different contexts (Tableย <ref>). In contrast, the phonetic classifier p_t must distinguish cases of aspirated [ph] from cases of unaspirated [p] and [b].
ยง.ยง.ยง Controlling for phonetic confounds
These phonetic and phonemic labelings of the data are confounded with a simpler disjunctive phonetic coding of the input: our phonemic labial contrast (shown in the first row of Tableย <ref>) could be solved by grouping together inputs which contain [ph] or which are directly preceded by the phone [s].[This information is likely to be present in the model representations due to either 1) co-articulation of [s] and [p] in the input, 2) overlapping spectral signals of [s] in the earliest input frames near the onset of [p], or 3) models' combination of neighboring low-level acoustic features during their feed-forward pass.] This means a model could solve the phonemic contrast by searching for the presence of a word-initial [s], rather than attending to the neutralized stop itself.
To control for this, we design a third class of confound inputs, shown in the third column of Tableย <ref>. Phones in this class have the property that they are also directly preceded by the sound [s] and followed by a vowel. We constrain our probe classifiers to jointly contrast among these three classes, thus ruling out a classification strategy which merely exploits the presence of [s] to solve the phonemic contrast.
Our classification setup thus learns three-way multi-class weights W_m, W_t โโ^3 ร d for a given d-dimensional input representation. We optimize these weights under a multinomial loss, requiring the classifier to jointly contrast inputs between the two classes of interest and also with the confound class.
We then evaluate these classifiers on held-out data with a multi-class ROC/AUC metric, computing an ROC score for each of the three possible binary contrasts in the data, weighting each score by the prevalence of the classes in the contrast, and summing these weighted scores.
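A minimal version of such a probe, written with scikit-learn and a plain train/test split rather than the nested cross-validation used in the paper, could look like the following sketch (names are ours; integer labels 0/1/2 encode group 1, group 2 and the confound group):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def probe_auc(X, y, C=1.0, seed=0):
    # three-way multinomial probe on phone representations X [n, d], labels y in {0, 1, 2}
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1,
                                              stratify=y, random_state=seed)
    clf = LogisticRegression(C=C, max_iter=2000).fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)
    if prob.shape[1] == 2:          # binary tasks (e.g., the control tests)
        return roc_auc_score(y_te, prob[:, 1])
    # prevalence-weighted one-vs-rest ROC/AUC over the three binary contrasts
    return roc_auc_score(y_te, prob, multi_class="ovr", average="weighted")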
ยง.ยง Dimensionality reduction
Previous work has suggested that HuBERT encodes some information at much higher representational levels, such as word identity and meaning <cit.>. In order to avoid confounding our phonetic and phonemic tests with these higher levels, we designed a set of control tests to help us target a distinctly phonological level of representation within the models. Our positive and negative control tests, shown in the lower section of Tableย <ref>, were designed to select for a maximal and minimal level of representational capacity. These tests offer a non-circular method for selecting views of model representations which are neither too strong (i.e., performing above chance on negative controls) nor too weak (i.e., performing below ceiling on positives).
For each model representation h, we use these control tests to search for a representation of reduced dimensionality h^d which satisfies the above criteria.
Concretely, let h_โ^d: โ^n ร d denote a model's representations of all n phones in the dataset at layer โ reduced to d principal components. Let P_1, P_2 and N_1, N_2 denote ROC/AUC measures of held-out probing performance on a representation h_โ^d for the positive and negative controls respectively, shown in Tableย <ref>. We define a control score ๐ฎ summarizing across-layer difference at dimension d:
๐ฎ^d(h) = โ_โ ( P_1(h_โ^d) + P_2(h_โ^d) - N_1(h_โ^d) - N_2(h_โ^d) )
For each model, we selected a constrained dimensionality d^* which maximized this contrast between positive and negative controls:
d^*(h) = argmax_dโ{2^1, 2^2, โฆ, D}๐ฎ^d(h)
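Sketched in code (reusing the probe_auc helper from the sketch above; the control-task labels and their names are illustrative assumptions), the selection of d^* amounts to:

import numpy as np
from sklearn.decomposition import PCA

def control_score(layer_reps, control_labels, d):
    # S^d(h): sum over layers of P1 + P2 - N1 - N2 at dimensionality d
    score = 0.0
    for reps in layer_reps.values():                      # layer -> [n, D] phone representations
        reduced = PCA(n_components=d).fit_transform(reps)
        p1 = probe_auc(reduced, control_labels["pos_consonant_vowel"])
        p2 = probe_auc(reduced, control_labels["pos_stress"])
        n1 = probe_auc(reduced, control_labels["neg_distant_1"])
        n2 = probe_auc(reduced, control_labels["neg_distant_2"])
        score += p1 + p2 - n1 - n2
    return score

def select_dimension(layer_reps, control_labels, D):
    candidates = [2 ** k for k in range(1, int(np.log2(D)) + 1)] + [D]
    return max(candidates, key=lambda d: control_score(layer_reps, control_labels, d))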
ยง RESULTS
We train and evaluate phonetic and phonemic probes for representations extracted from every HuBERT layer (7 CNN layers and 12 transformer layers) optimized for each training objective (speech, non-speech audio, or randomly initialized weights). We report the probe performance as the weighted ROC/AUC metric described in Sectionย <ref>. We estimate this metric by nested 10-fold cross validation, using an inner cross-validation loop to estimate an L2 regularization hyperparameter.
ยง.ยง Aspiration test, original dimensionality
Figure <ref> shows classifier performance on the aspiration test conducted on the HuBERT speech, non-speech audio, and random models, along with a log-mel baseline. Probe performance is evaluated with an ROC/AUC metric, where 0.5 corresponds to random chance guessing and 1.0 corresponds to perfect phonetic or phonemic contrasts. Since the log-mel metric is external to the models, we show it as a horizontal red line across layers.
In all models, we see a rapid transition from near-chance performance to high scores on both phonetic and phonemic contrasts within the first several CNN layers. Each model then exhibits a performance drop at the final layers of the CNN (x=-1 in Figureย <ref>), followed by near-ceiling performance in the task-optimized transformer models and far worse performance in the randomly initialized transformer layers.
We see a slight decrease in probe performance in the final layers of the non-speech audio model, suggesting task-specific specialization away from speech features.
The random-weights model performs worse than task-optimized models but better than chance, demonstrating that some nontrivial portion of success on these tests can be attributed simply to high-dimensional random projection. However, we see that both task-optimized HuBERT models reach ceiling performance at intermediate layers, suggesting that training selects for quality phonetic and phonemic representations.
These results show that the speech models' representations are both phonetically and phonemically robust, and that these abstract representations are acquired early in the feed-forward pass in the CNN, even before the first transformer block.
However, this at-ceiling performance may mask meaningful differences between models and contrasts of interest. Using the dimensionality reduction tools described in Sectionย <ref>, we asked if models continue to encode these abstract distinctions in their highest-order principal components.
ยง.ยง Control tests, constrained dimensionality
Figure <ref> shows the layer-wise performance of the classifier on the positive and negative control tests conducted on the HuBERT speech, non-speech audio, and random models at a constrained dimensionality as described in Sectionย <ref>. Speech and random models: d^*=16; non-speech audio model: d^*=8, log-mel: d^*=80 (no reduction).
Although our probe performed at ceiling in the aspiration test, the soundness of our probing paradigm is affirmed by the patterning of performance on our control tests: all HuBERT models perform poorly at the negative controls and well at the positive controls, confirming that there do exist non-phonological distinctions that the models do not encode.
For all models, a marked jump in performance occurs in the last layer of the CNN. The layer at which this jump occurs coincides with the layer at which model performance temporarily drops in the aspiration test. This suggests that the sudden improvement in representing gross phonological categories (the positive controls) trades off with success at fine-grained phonological distinctions (the aspiration test), though this performance picks back up later in the transformer layers (Figure 4).
ยง.ยง Aspiration test, constrained dimensionality
Figureย <ref> shows classifier performance on the aspiration test conducted at the constrained dimensionalities d^*. Here, the models no longer perform at ceiling on the classification task. The performance of log-mel remains the same as in Figureย <ref> since its constrained d^* value is unreduced, remaining at d^*=80. Unlike the HuBERT models, log-mel is unable to perform well at the positive controls (visible in Sectionย <ref>), meaning that there is no downward pressure when finding a constrained d^* that maximizes the difference between positive and negative controls.
The HuBERT speech model out-performs both other models, and in both late CNN layers and early transformer layers achieves better performance on phonemic classification than phonetic. In later layers, however, the performance of phonetic distinctions gradually improves relative to the phonemic ones. This suggests that, while the self-supervised training objective leads to the HuBERT speech model discovering both salient phonetic and phonemic abstractions, the phonetic representation may be prioritized in later layers due to its greater utility in predicting the identity of short (20ms) masked frames in training.
ยง DISCUSSION
We found in both aspiration and control tests that HuBERT successfully draws abstract representational distinctions between input phonemes within its CNN layers, prior to even the first transformer block. This suggests that the more complex transformer architecture may actually be overpowered for these types of speech representation tasks. This is compatible with some work on distillations of HuBERT, which find that a majority of HuBERT's transformer layers can be excised without significant performance loss, so long as the CNN layers are maintained <cit.>. Future work can investigate the degree to which these phonetic and phonemic distinctions are maintained in distilled models, and whether models with fewer parameters could recover the same representational contents from scratch.
We find both phonetic and phonemic distinctions are encoded early in the HuBERT speech model, as well as to a lesser degree in the non-speech audio model. Our results demonstrate that HuBERT's forward pass first recapitulates, then exceeds, the representational capacity of the log-mel baseline. First, the penultimate CNN layer of HuBERT trained on speech data produces representations which encode the same phonological distinctions as log-mel features (Figure 4, x=-2). Next, the final CNN layer of HuBERT renders higher-level phonological distinctions not present in the log-mel features (Figure 3, x=-1).
Some of this success seems to be due merely to HuBERT's high-dimensional transformer architecture. We note that even the random-weights model encodes coarse-grained phonological distinctions late in its CNN layers (Figure 3, x=-1, green). However, the fine-grained distinctions targeted in the aspiration test are not encoded at the same layers (Figure 2, x=-2 and -1, green). This disparity persists through the transformer layers, where we see a sustained gap in decoding performance between the two models (Figures 2 and 3, blue vs. green).
The contrasts with the above baseline models suggest that task-optimized HuBERT simultaneously and with lower noise accomplishes two representational functions modeled independently by the log-mel and random projection baselines: it derives the fine-grained phonemic distinctions readable from log-mel representations, while rendering decodable the gross phonological distinctions present in the randomly initialized models, all without losing lower-level phonetic information.
Although we sought to control for frame overlap confounds and select only information relevant to the phonological identification task, confounds may remain. In the aspiration test, the non-speech and random models unexpectedly performed better at phonemic classification than the phonetic one. We expected the opposite result, given <cit.>'s finding that the phonetic /sp/-/b/ similarity can indeed be recovered from low-level acoustic features. Further, the constrained speech model unexpectedly gains higher relative performance on the phonetic than the phonemic task over the course of its transformer layers (Figure 4, blue).
A possible explanation for this discrepancy is that the MALD dataset's speaker exhibits idiolectic differences in voice onset time that undermine the assumed acoustic similarity of /sp/ and word-initial /b/, which lie outside the generalization population of <cit.>'s claim.
This would lead to degradation on the phonetic classification task relative to the phonemic one, since the unaspirated category would then be internally heterogeneous. Extending the same analysis to a multi-speaker dataset or creating new phonological tests may resolve this issue.
ยง CONCLUSION
This paper tested whether self-supervised speech models derive distinctly phonemic representations of their speech input, using aspiration as a case study. We find that these speech models derive robust representations of phonemic and phonetic content which recover and surpass fine-grained log-mel features. These representations emerge early in the models' processing stream, a phenomenon which has implications for the design of self-supervised speech models. Differences in model performance depending on the content of training datasets are evidence for task specialization arising in later model layers. Overall, the lack of marked distinction between phonetic and phonemic contrasts suggests the presence of certain confounds that complicate representational probing of these speech models.
Acknowledgments: Thanks to Juliette Millet and Ewan Dunbar for graciously providing their non-speech audio-trained models, to the MIT Exp/Comp group for feedback. KM, CB, and JG gratefully acknowledge support from MIT CPL, the MITโIBM Watson AI Lab, and the Open Philanthropy Project, respectively.
entry_id: http://arxiv.org/abs/2306.04266v1
published: 20230607090703
title: Measurements of rate coefficients of CN$^+$, HCN$^+$ and HNC$^+$ collisions with H$_2$ at cryogenic temperatures
authors: Petr Dohnal, Pavol Jusko, Miguel Jiménez-Redondo, Paola Caselli
primary_category: astro-ph.GA
categories: astro-ph.GA, physics.chem-ph
Department of Surface and Plasma Science, Faculty of Mathematics and Physics,
Charles University, Prague, V Holešovičkách 2, 180 00, Czech Republic
[][email protected]
Max Planck Institute for Extraterrestrial Physics, Gießenbachstraße 1, 85748 Garching, Germany
Max Planck Institute for Extraterrestrial Physics, Gießenbachstraße 1, 85748 Garching, Germany
Max Planck Institute for Extraterrestrial Physics, Gießenbachstraße 1, 85748 Garching, Germany
The experimental determination of the reaction rate coefficients for production and destruction of HCN+ and HNC+ in collisions
with H2 is presented. A variable temperature 22 pole radio frequency ion trap was used to study the reactions in the
temperature range of 17 โ 250 K. The obtained rate coefficients for the reaction of CN+ and of HCN+ with
H2 are close to the collisional (Langevin) value, whereas that for the reaction of HNC+ with H2 is quickly decreasing
with increasing temperature. The product branching ratios for the reaction of CN+ with H2 are also reported
and show a notable decrease of the HNC+ product with respect to the HCN+ product with increasing temperature.
These measurements have consequences for current astrochemical models of cyanide chemistry, in particular for the cation.
Measurements of rate coefficients of CN+, HCN+ and HNC+ collisions with H2 at cryogenic temperatures
Paola Caselli
July 31, 2023
=========================================================================================
ยง INTRODUCTION
HCN and its higher energy isomer HNC (0.64 eV) are arguably the two simplest isomers in chemistry.
On Earth, at 300 K, the HCN isomer is populated almost exclusively.
Both isomers have been detected in a variety of environments of the interstellar medium (ISM)
such as starless cores <cit.>, diffuse <cit.>,
translucent <cit.> and dense interstellar clouds <cit.>
and star <cit.> and planet <cit.> forming regions.
Despite the 1.3 eV barrier for isomerization from HCN to HNC <cit.>,
HNC abundances are
often found to be comparable to that of HCN, especially in cold environments with temperatures close
to 10 K <cit.>. On the other hand, the HCN/HNC abundance ratio was
reported to be much greater than one in relatively warm objects such as hot cores or young stellar
objects. For example, a HCN/HNC abundance ratio of 13 was observed for IRAS 16293โ2422 <cit.>,
while in the vicinity of the Orion-KL Nebula this ratio was approximately 80 <cit.>.
It has been suggested <cit.> that the actual HCN/HNC abundance ratio is governed by competing
processes in the given environment.
However, as noted by <cit.>, special care needs to be taken in interpreting the observations,
as the rate coefficients for HCN and HNC differ significantly, especially when the collision partner is para-H2,
the dominant form of H2 in cold gas (e.g. <cit.>).
One of the main sources of HCN and HNC molecules in the interstellar medium is the dissociative recombination
of HCNH+ ions with electrons
+ e- โHCN + H
โHNC + H
โCN + H + H
with almost the same probability for the three reaction channels <cit.>. The principal
processes leading to the formation of HCNH+ ions in molecular clouds are <cit.>:
+ H_2โ + H,
+ H_2โ + H,
C^+ + NH_3โ + H,
H3+ + /HNCโ + H2,
H_3O+ + /HNCโ + H_2O,
HCO+ + /HNCโ + CO,
where reactions (<ref>) and (<ref>) are thought to be the main pathway
for HCNH+ formation in dense, cold regions such as L1544 <cit.> while
reaction (<ref>) shall dominate in warmer environments <cit.>. In early
times of cloud formation HCNH+ ions are presumably produced mainly in reaction
(<ref>) <cit.>.
The processes leading to the production of HNC+ and its higher energy isomer HCN+ ions (0.94 eV)
depend on the physical and chemical conditions of the molecular cloud.
In cold and dense regions,
these ions are mainly produced by proton transfer reaction of H+ with HCN/HNC or in
collisions of CN+ ions with H2 <cit.>
CN+ + H2 โ HNC+ + H ฮ H = -2.28 eV
โ HCN+ + H ฮ H = -1.33 eV ,
with the enthalpies of formation, ฮ H, taken from ref. <cit.>. Reaction (<ref>)
was previously studied in SIFT (Selective Ion Flow Tube) experiments at 300ย K by <cit.>
and by <cit.> who reported the value of the reaction rate coefficient to be
1.1ร10^-9 cm^3 s^-1 and 1.6ร10^-9 cm^3 s^-1, respectively. In both experiments, the
observed branching ratio was 0.5 (i.e., the two reaction channels were equally probable).
In another SIFT experiment, <cit.> obtained a value of
the reaction rate coefficient of 1.0ร10^-9 cm^3 s^-1 at room temperature without distinguishing
between reaction products. A slightly higher value of 1.24ร10^-9 cm^3 s^-1 was measured by
<cit.> using the Ion Cyclotron Resonance technique (ICR) at near thermal energies.
To the best of our knowledge there are no experimental data for reaction (<ref>)
obtained at low, astrophysically relevant temperatures.
Reactions (<ref>) and (<ref>) were only studied by <cit.>
at 300ย K using the SIFT technique. The measured reaction rate coefficients were
8.6ร10^-10 cm^3 s^-1 and 7.0ร10^-10 cm^3 s^-1, respectively. The scarcity of experimental
data for these reactions is not surprising as it is very difficult to distinguish between the two
isomer forms in experiments solely based on mass spectrometry.
Chemical probing is an obvious method to enhance mass sensitive experiments to
gain isomer sensitivity <cit.>.
Photon processes also play an important role in the HCN/HNC (neutral/cation) abundance <cit.>.
Although HCN <cit.> and HNC <cit.> have been extensively studied experimentally in
the mm-wave range, only a Ne matrix IR spectrum <cit.>
and vibrational bands determined by neutral photoelectron spectroscopy <cit.>
are available for the HCN+ / HNC+ cations.
Photon ionisation can be used to produce HCN+ almost exclusively from HCN, contrary
to e^- bombardment <cit.> (see also sectionย <ref>).
The CN- anion has been extensively studied spectroscopically in the IR<cit.>
(vibration) and in the mm-wave range <cit.> (rotation) and consequently detected in
space<cit.>. The CN- anion does not react with H2 and only forms a weakly bound complex
at low temperatures<cit.>, therefore we assume it only plays a marginal role in cyano-H2
chemistry relevant in this context.
Due to the simplicity of the HCN / HNC isomerisation process and the small size of the system it is extensively
studied theoretically, often employing very high levels of theory or to benchmark new methods
<cit.>.
The present astrochemical models have difficulties reproducing the observed HCN/HNC ratios due to
potentially missing important pathways or key reactions whose reaction rates are not well constrained <cit.>.
In their recent study, <cit.> conclude that in order to get the correct molecular abundances,
laboratory measurements of reactions (<ref>), (<ref>) and (<ref>) are needed.
This paper focuses on the experimental determination of the reaction rate coefficients for reactions
(<ref>), (<ref>) and (<ref>) in the temperature range of
17 โ 250ย K relevant for a variety of astrochemical environments.
ยง METHODS
ยง.ยง Experimental setup
The experimental setup is described in detail in <cit.>, only a short overview will be given here.
The ions are produced in the storage ion source (SIS) <cit.>, mass selected by passing through the
first quadrupole mass filter and then refocused on the trap axis using an electrostatic bender.
The 22 pole radio frequency ion trap is positioned on a cold head (RDK-101E, Sumitomo) enabling operation down to 4 K. The pressure
is measured by a Bayard-Alpert type ionisation gauge calibrated by a capacitance manometer CMR 375 from Pfeiffer. After a set
storage time, the trap is opened and the ions exiting the trap are mass selected by the second quadrupole mass
spectrometer and detected in a Daly type conversion detector.
The trapped ions are cooled in collisions with helium gas that is introduced for a few milliseconds into the trap volume
at the beginning of every trapping cycle by a custom piezo valve.
This measurement sequence is repeated, usually at 1 Hz
repetition rate, until an adequate signal for the set experimental condition is acquired.
To form the primary ions, either acetonitrile (CH3CN) or vapors of cyanogen bromide (BrCN) with water,
producing HCN, were continuously leaked into the SIS.
When acetonitrile was used, a portion (about 15 %) of ions with mass 27 m/z did not react with hydrogen and oxygen
molecules. Here and in the following text m/z denotes mass to charge ratio. We assume that this non-reacting fraction consists of C2H3+ ions. The production of non reactive species
was not observed when cyanogen bromide was used as a precursor gas in the SIS.
In experiments focused on the reaction of CN+ with H2, the use of acetonitrile led to a high fraction of ions with mass 26 m/z
(almost 50 %) not reacting with H2 nor O2. We tentatively identified these ions as C2H2+ with reported
reaction rates with H2 at least 3 orders of magnitude lower<cit.> than for CN+ + H2 and described as
"no reaction/slow"<cit.> for C2H2+ + O2, in comparison
to the reaction of C2H2+ with atomic O with the reaction rate of 2ร10^-10 cm^3 s^-1 <cit.>.
ยง.ยง Ratio of HCN+ to HNC+ in the ion trap
We estimate the relative amount of HCN+ to HNC+ in the trap using isomer sensitive reaction probing.
HNC+, the lower energy isomer by 0.94 eV, and HCN+ have different reaction channels in ion-molecule reactions with
O2
HCN+ + O2โO2+ + HCN,
HNC+ + O2 โHNCO+ + O
โNO+ + HCO,
and SF6
HCN+ + SF6โSF5+ + HF + CN
HNC+ + SF6โHNCF+ + SF5,
with all the species in their vibrational ground states.
The rate coefficients for reactions (<ref>) and (<ref>) were reported at 300 K as
5.0ร10^-10 cm^3 s^-1 and 3.6ร10^-10 cm^3 s^-1, respectively <cit.>.
Reactions with SF6 (<ref>) and (<ref>) were reported at 300ย K as 1.3ร10^-9 cm^3 s^-1
and 1.2ร10^-9 cm^3 s^-1, respectively <cit.>. All those reactions are exothermic.
The number of trapped ions as a function of the storage time with a small amount
of O_2 gas (ca. 5ร10^9 cm^-3)
present in the trap volume is plotted in both panels of Fig. <ref>. The primary ions with mass 27 m/z
(HCN+, HNC+, and C2H3+) were produced in the SIS from acetonitrile and then trapped and cooled (translation and vibration)
using the initial He pulse.
As can be seen from the upper panel of
Fig.ย <ref>, the main product under these conditions is O_2+, indicating that the majority of ions with mass
27 m/z are HCN+.
At low energies, <0.1 eV, the collision of HCN+ with CO or CO_2 leads to the formation of the more stable
isomer as a result of double charge transfer in the collisional complex <cit.>.
CO2 is added into the trap volume to enhance the fraction of the HNC+
isomer inside the trap using the catalytic reaction
HCN+ + CO2โHNC+ + CO2 ฮ H = -0.94 eV .
The number density of CO_2 used in the experiments (order of magnitude or more than other reactants)
ensures that the isomerisation reaction is dominant over reactions with O2 for the HCN+/HNC+ ratio estimation or
with H2 for reaction rates.
The lower panel of Fig. <ref> illustrates the typical enrichment using the CO2 technique.
The dominant product in the O2 probing reaction is HNCO+, implying that the most prevalent isomer is HNC+.
The actual fractions of HCN+ and HNC+ were calculated by solving a set of corresponding balance equations (see section Data analysis) and
extrapolating the numbers of ions of the given species in the trap to the time of 500 ms.
Analogous results were obtained with SF_6 used as a probe gas.
It is important to note that the addition of CO_2 directly to the ion source did not lead to a substantial change in
the measured fractional populations of HCN+ and HNC+ in the trap.
We attribute this behaviour to the relatively high energy of ions in the source.
Ions just produced by electron bombardment are substantially more energetic than a few hundred
meV, which is required for reactionย (<ref>) to be dominant over a simple no-isomerisation collision.
ยง.ยง Data analysis
By integrating the chemical rate equations for the ion number densities over the trap volume, balance equations
describing time evolutions of the number of trapped ions n_i are obtained. As an example, for the
reaction of HCN+ with H_2 the corresponding balance equation can be written as
dn_HCN^+/dt = -k^H_2_HCN^+[H_2]n_HCN^+,
where k^H_2_HCN^+ is the binary rate coefficient for the reaction of HCN+ with H_2
and [H_2] is the H_2 number density.
When evaluating the time dependencies of the number of ions in the trap, the quantities that are obtained by
fitting the set of appropriate balance equations to the measured data are reaction rates, i.e., the reaction rate
for reactionย (<ref>) is k^H_2_HCN^+[H_2]. The reaction rate coefficient
is then determined from the slope of the dependence of the reaction rate on the number density of the
corresponding reactant at the given temperature (for a detailed description of the fitting procedure see
ref.<cit.>).
Throughout the text, quoted uncertainties are statistical errors of the corresponding fitting procedures.
The systematic error, arising mainly from the uncertainty in pressure calibration, is estimated to be 20ย % <cit.>.
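As an illustration of this procedure, the two fitting steps can be sketched as follows (a schematic example only, with hypothetical variable names; the actual analysis fits the full set of coupled balance equations as described in the cited reference):

import numpy as np
from scipy.optimize import curve_fit

def fit_rate(t, n_ions):
    # single-exponential decay N(t) = N0 exp(-r t); r = k * [H2] for the balance equation above
    decay = lambda t, n0, r: n0 * np.exp(-r * t)
    (n0, r), _ = curve_fit(decay, t, n_ions, p0=(n_ions[0], 1.0 / np.mean(t)))
    return r

def rate_coefficient(h2_densities, rates):
    # k follows from the slope of the reaction rate versus the H2 number density
    slope, intercept = np.polyfit(h2_densities, rates, 1)
    return slope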
ยง RESULTS AND DISCUSSIONS
ยง.ยง HCN+ + H_2
The reaction of HCN+ with H_2 was studied in the temperature range of 17 - 250 K. An example of the measured
time dependence of the number of ions in the trap when HCN+ was the dominant isomer is shown in Fig.ย <ref>.
As the ions react with molecular hydrogen, HCNH^+ ions are formed as the only product of the reaction.
Similar data were obtained at each temperature for several values of hydrogen number density in order to evaluate
the reaction rate coefficients (see Fig.ย <ref> for a typical reaction rate plot as a function of number density).
The results are plotted in Fig.ย <ref> as down facing triangles. The collisional
Langevin reaction rate coefficient for this reaction is 1.54ร 10^-9 cm^3 s^-1 using the H_2 polarisability
from ref.ย <cit.>. As can be seen from Fig.ย <ref>, the obtained values of the reaction rate coefficient
are constant in the studied temperature range and slightly lower than the Langevin reaction rate coefficient.
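For reference, the quoted collisional value follows from the Langevin expression k_L = 2ฯq(ฮฑ/ฮผ)^1/2 in Gaussian units; a short sketch is given below, with the H_2 polarisability value inserted only for illustration (the paper takes it from the cited reference):

import math

E_ESU = 4.803204e-10      # elementary charge in esu
AMU_G = 1.660539e-24      # atomic mass unit in g

def langevin_rate(alpha_cm3, m_ion_amu, m_neutral_amu):
    mu = m_ion_amu * m_neutral_amu / (m_ion_amu + m_neutral_amu) * AMU_G
    return 2.0 * math.pi * E_ESU * math.sqrt(alpha_cm3 / mu)   # cm^3 s^-1

print(langevin_rate(0.804e-24, 27.0, 2.0))   # HCN+ + H2: ~1.5e-9 cm^3 s^-1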
The reaction of HCN+ with H_2 was previously studied by <cit.> at 300 K using a selected-ion flow tube
(SIFT) experiment. CF_4 and SF_6 were used as probe gases to distinguish between
HCN+ and HNC+ ions in the SIFT experimentsย <cit.>. Contrary to what <cit.> thought
at that time, the ground vibrational state of does
not react with CF4 <cit.> indicating that a substantial fraction of ions in their experiment
<cit.> was, in fact, vibrationally excited. This could be a possible explanation for the almost 40 % difference
between the SIFT data and the reaction rate coefficient obtained at the highest temperature of 250 K in the present study.
ยง.ยง HNC+ + H_2
In order to study the reaction of HNC+ ions with H_2, we first trapped the HCN+ ions (with a small fraction of
non reactive C_2H_3^+ ions) and converted them to HNC+ ions inside the 22 pole trap by applying the
isomerization reactionย (<ref>) with CO2<cit.>. The CO2 number density was high
enough to ensure that the majority of the ions was converted to HNC+ before the measurement of the reaction with
H2 was performed.
The only observed product of the reaction were HCNH^+ ions.
For temperatures above 80ย K, the CO_2 gas was added directly into the trap, as the vapour pressure of
CO_2 was sufficient (4ร10^-6 Pa at 80ย K<cit.>).
For lower temperatures, 30ย K and 70ย K, the CO_2 had to be mixed with the helium buffer gas and pulsed at the
beginning of each trapping cycle.
The charge transfer reaction between N2^+ and CO_2 was used to check and confirm the presence of
sufficient CO2 number densities. While no depletion of CO2 was observed down to 70ย K,
below this temperature the CO2 number density started to decrease reaching our detection limit between 30-40 K.
Although the value of the reaction rate coefficient obtained for temperature of 30ย K corresponds to a mixture of ions
with mass 27 m/z dominated by , we are unable to quantify the isomeric ratio.
The measured value of the reaction rate coefficient is approximately half of the collisional Langevin rate coefficient
at 250ย K and increases with decreasing temperature (see Fig.ย <ref>). The 300ย K SIFT value
reported by <cit.> is in very good agreement with our data.
ยง.ยง + H_2
The ions were produced in the SIS from cyanogen bromide with an admixture of water vapours. The majority of the trapped ions
with mass 26 m/z reacted with hydrogen, which was leaked directly into the trap, implying that there was only a small fraction
of C_2H_2^+ ions present in the trap during these experiments <cit.>. The value of the reaction rate coefficient
for reaction of with H_2 was measured in the temperature range of 17 - 200 K. The results are shown in
Fig.ย <ref>. The obtained value is close to the collisional Langevin reaction rate coefficient. No significant temperature
dependence was observed. Our data are in excellent agreement with the study by <cit.> performed at 300ย K.
The reaction of with H_2 results in the production of and ions. As both these products subsequently react with
hydrogen (see above), it was impossible to determine the product branching ratio with hydrogen continuously added into the trap.
Instead, we added a small amount of H_2 (approximately 0.2 %) to the short helium pulse that is used to cool down ions at
the beginning of each trapping period. In this way, we were able to maximize the amount of produced primary ions ( and
) and to minimize the subsequent formation of secondary ions (HCNH^+).
The actual product branching ratio was determined by utilizing different reactivity of and ions with SF_6,
which was added directly into the trap. An example of the measured number of ions in the trap as a function of storage time is
shown in Fig. <ref>. SF_5^+ is formed in reactions of as well as and vibrationally excited
with SF_6, i.e., this channel is not suitable for product isomer probing.
Only ions in the ground state form HNCF^+ in reaction with SF_6. Therefore, the fraction was
determined from the increase of the number of HNCF^+ ions compared to the decrease of the number of ions with mass 27 m/z.
The measured / product branching ratios are shown in the lower panel of Fig. <ref>.
At 250 K, almost 70 % of all ions produced in the reaction of with H_2 are . As the temperature decreases,
reaction (<ref>) results in a larger fraction of produced ions and below 100 K the channel of
reaction (<ref>) accounts for more than 60 % of the total produced ions.
This is in disagreement with results of <cit.> who observed the same probability of both channels of reaction
(<ref>) at 300ย K.
As discussed above in relation with the study of the reaction of with H2, it is possible that the ions in
the study by <cit.> were vibrationally excited. While we are absolutely certain that in our experiment the primary
ions are in their vibrational ground state, we can not rule out that some of the and ions produced inside
the trap (reactionย (<ref>)) possess some vibrational excitation. As reaction (<ref>) is exothermic
by 2.28 <cit.>, ions can have enough internal energy available to form SF5+ ions in reaction with
SF6 <cit.> and thus influence our data analysis to overestimate the fraction of produced ions.
Therefore, our / product branching ratios shall be interpreted as lower limit; the ratio may be higher, but not
lower than reported.
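In practice the chemical probing described above reduces to simple counting. A minimal sketch of that arithmetic (the counts are hypothetical and only illustrate the procedure; they are not data from this work):

def probed_fraction(n27_before, n27_after, n_hncf_before, n_hncf_after):
    # lower-limit fraction of the isomer that forms HNCF+ with SF6:
    # growth of the HNCF+ counts divided by the loss of the mass-27 counts
    gained = n_hncf_after - n_hncf_before
    lost = n27_before - n27_after
    return gained / lost

# hypothetical numbers of ions per filling, for illustration only
print(probed_fraction(100, 40, 0, 35))  # ~0.58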
ยง.ยง + O_2
The reaction rate coefficient for reaction
+ O2 โO2+ + CN
โNCO+ + O
โNO+ + CO
was determined between 100 and 230ย K. Acetonitrile (all temperatures) and bromium cyanide with admixture of water vapours (152ย K)
were used as precursors to form ions in the SIS (see Fig.ย <ref>).
The major product of the reaction is O2+, accounting for more than 80 % of the produced ions, followed by NCO+
(less than 20 % of product ions) and NO+ (few percent of produced ions). NCO+ ions reacted slowly with O2,
complicating the determination of the product branching ratios. For comparison, <cit.> reported the product
branching ratios for reaction (<ref>) as 0.6:0.2:0.2 at room temperature.
The measured rate coefficient shows a very steep increase with decreasing temperature and is in good agreement with the previous 300 K
SIFT experimental value <cit.>.
Although the corresponding collisional Langevin reaction rate coefficient k_L = 7.8×10^-10 cm^3 s^-1 is
of the same order of magnitude as the experimentally determined rate coefficient,
the comparison is made only for reference, as the measured temperature dependence
implies a barrier in the reaction path of the overall exothermic reaction.
ยง.ยง Attempt at electronic spectroscopy of
We attempted to perform electronic spectroscopy of the cation in the photon energy range 1.6 - 2.5
using a laser induced charge transfer (LICT) action scheme on ions stored in the 22 pole trap at
ca. 150 K to avoid any kind of neutral freeze out. This range was selected
for the X ^2Σ^+ → A ^2Π transition predicted
by previous computational results to lie around 2 <cit.>.
LICT of to Xe (Δ ∼ 0.1) as well as to CO_2 (Δ ∼ 1.6; note that CO_2 is more
suitable for LICT studies of the higher energy isomer ) <cit.>
was attempted using a supercontinuum laser <cit.>, with ions produced as described in section <ref>.
Unfortunately, no action spectroscopic signal could be recorded.
While the aforementioned LICT schemes for IR vibrational and VIS electronic studies of should be straightforward and
very effective, the lack of experimental spectroscopic data for / in the gas phase remains remarkable.
ยง.ยง Comparison to other processes involved in formation in the ISM
A comparison of the reaction rate coefficients obtained in the present study with those measured for other formation
processes by <cit.> is shown in Fig. <ref>. In cold, dense regions of the interstellar medium, the most important gas phase processes for the formation of and ions are considered to be <cit.>
reaction (<ref>) and the charge transfer reaction
H^+ + HCN/HNC → HCN^+/HNC^+ + H.
<cit.> studied reaction (<ref>) for the HCN reactant and reported values of the reaction rate coefficient
close to 1×10^-8 cm^3 s^-1, practically independent of temperature between 205 and 540 K. The calculated exothermicity
of reaction (<ref>) is 8 ± 10 meV <cit.>, therefore reaction (<ref>) could be slightly
endothermic and the value of its reaction rate coefficient at temperatures close to 10 K much lower than that reported
by <cit.> at higher temperatures. In that case, reaction (<ref>), proceeding with an almost Langevin reaction rate coefficient,
would be the dominant process for formation in such environments.
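To see why even a small endothermicity would matter here, a rough Boltzmann estimate of the suppression factor exp(-ΔE/k_BT) is instructive. The sketch below uses the 8 meV magnitude of the quoted uncertainty purely as an illustrative value; it is not a substitute for a proper rate calculation.

import math

K_B_MEV_PER_K = 0.08617  # Boltzmann constant in meV/K

def boltzmann_suppression(delta_e_mev, temperature_k):
    # crude exp(-dE / kT) factor for a hypothetical endothermicity dE > 0
    return math.exp(-delta_e_mev / (K_B_MEV_PER_K * temperature_k))

print(boltzmann_suppression(8.0, 10.0))   # ~9e-5 at 10 K
print(boltzmann_suppression(8.0, 300.0))  # ~0.73 at 300 K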
Astrochemical databases such as KIDA <cit.> usually contain the
reaction rate coefficients for reactions (<ref>), (<ref>) and (<ref>) that were measured
by <cit.> at 300ย K with
an unknown fraction of vibrationally excited ions. According to the present data obtained with vibrationally cold ions
at low temperatures, the actual values of reaction rate coefficients for reactions (<ref>) and (<ref>)
are higher by a factor of up to 1.6. In the case of low temperature formation in the reaction of
with H_2 (<ref>) the ratio between our value of the reaction rate coefficient and that in the
KIDA entry is almost two. As a result, the astrochemical models employing these database values are probably underestimating
the / and formation in cold, dense regions of the interstellar medium.
The branching ratio for reaction (<ref>) is 1:1 in the KIDA database in accordance with the 300ย K value
reported by <cit.>. Although our data show that at 250ย K almost 70ย % of products of reaction (<ref>) are
ions, the branching ratio is close to that in
the KIDA database around 100ย K. Unfortunately, we were not able to measure the branching ratio of
reaction (<ref>) to lower temperatures due to the employed chemical probing scheme.
Given the observed temperature dependence, it is possible that in regions of cold interstellar gas,
where reaction (<ref>) is a key formation process for and ions, the actual branching ratio strongly
favours production.
ยง.ยง Parameterized reaction rates coefficients
The temperature-dependent reaction rate coefficients of reactions
(<ref>), (<ref>), (<ref>), and (<ref>) measured in this work have been parameterized
using the Arrhenius–Kooij formula as defined in equation (1) in ref. <cit.>. The results of the least-squares fit,
together with the temperature range where the fit is valid, are reported in Table <ref>.
The + O2 room temperature measurement <cit.> has been included in the fit.
Detailed procedures are available in the data set associated with this work (see section Data Availability).
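The Arrhenius–Kooij form commonly used by KIDA is k(T) = α (T/300 K)^β exp(-γ/T); assuming the cited reference follows the same convention, a minimal least-squares sketch of such a fit could look as follows (illustrative only; the actual fits and data are in the archived data set).

import numpy as np
from scipy.optimize import curve_fit

def arrhenius_kooij(T, alpha, beta, gamma):
    # k(T) = alpha * (T / 300 K)^beta * exp(-gamma / T)
    return alpha * (T / 300.0) ** beta * np.exp(-gamma / T)

def fit_rate_coefficients(T, k):
    # least-squares fit of measured k(T); returns (alpha, beta, gamma)
    T = np.asarray(T)
    k = np.asarray(k)
    popt, _ = curve_fit(arrhenius_kooij, T, k, p0=(k[-1], 0.0, 0.0))
    return popt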
ยง CONCLUSION
The formation and destruction of and isomers in reactions with H_2 were studied in the temperature
range of 17 - 250 K. The values of the reaction rate coefficients for reactions of and with H_2 are
constant in the studied temperature range and close to the collisional Langevin reaction rate coefficient. The reaction
of with H2 produces predominantly at 250ย K, but below 100ย K, is favoured. The obtained
values of the reaction rate coefficient for the reaction of with H_2 decrease with increasing temperature.
As current astrophysical databases such as KIDA <cit.> contain rate coefficients for these key reactions that were obtained at 300 K, in some cases with vibrationally excited ions, we believe that our results will help to improve models of cyanide chemistry in the interstellar medium.
The isomer specific results in this work were achieved solely using chemical probing. This technique is fully dependent
on the availability of chemicals with the right energy levels with respect to the isomers being studied. Even then,
the process is rather tedious, since several reactants have to be used in sequence (isomerisation/reactivity/probing)
with tight control of the real number density inside the trap, especially while lowering the temperature
towards the neutral gas freeze out. While we have demonstrated that rf multipole ion traps are a suitable tool for
isomer specific studies, the application of novel techniques and schemes based on direct discrimination of ions with different
kinetic energy (e.g., the reaction of CN^+ with H_2 releases two products of equal m/z but with two distinct total energies),
as described in <cit.>,
or indirectly, based on the transfer of the internal excitation to translation energy in a collision with a neutral, as described
in the "leak out" method <cit.>, shall be developed.
This work was supported by the Max Planck Society, Czech Science Foundation (GACR 22-05935S) and COST Action CA17113.
The authors gratefully acknowledge the work of the electrical and mechanical workshops and engineering
departments of the Max Planck Institute for Extraterrestrial Physics.
ยง AUTHOR DECLARATIONS
ยง.ยง Conflict of Interest
The authors have no conflicts to disclose.
ยง.ยง Author Contributions
Paola Caselli: Conceptualization, Funding acquisition, Investigation, Writing – review & editing.
Petr Dohnal: Conceptualization, Funding acquisition, Investigation, Writing – original draft.
Miguel Jiménez-Redondo: Conceptualization, Data curation, Investigation, Writing – review & editing.
Pavol Jusko: Conceptualization, Data curation, Investigation, Visualization, Writing – original draft.
ยง DATA AVAILABILITY
The data that support the findings of this study are openly available at
<http://doi.org/10.5281/zenodo.7704359>.
§ REFERENCES
[1] Hily-Blant, P., Walmsley, M., Pineau des Forêts, G., and Flower, D., "Nitrogen chemistry and depletion in starless cores," A&A 513, A41 (2010).
[2] Liszt, H. and Lucas, R., "Comparative chemistry of diffuse clouds - II. CN, HCN, HNC, CH3CN & N2H+," A&A 370, 576-585 (2001).
[3] Turner, B. E., Pirogov, L., and Minh, Y. C., "The physics and chemistry of small translucent molecular clouds. VIII. HCN and HNC," The Astrophysical Journal 483, 235 (1997).
[4] Boger, G. I. and Sternberg, A., "CN and HCN in dense interstellar clouds," The Astrophysical Journal 632, 302-315 (2005).
[5] Godard, B., Falgarone, E., Gerin, M., Hily-Blant, P., and De Luca, M., "Molecular absorption lines toward star-forming regions: a comparative study of HCO+, HNC, HCN, and CN," A&A 520, A20 (2010).
[6] Graninger, D., Öberg, K. I., Qi, C., and Kastner, J., "HNC in protoplanetary disks," The Astrophysical Journal Letters 807, L15 (2015).
[7] Cleeves, L. I., Öberg, K. I., Wilner, D. J., Huang, J., Loomis, R. A., Andrews, S. M., and Guzman, V. V., "Constraining gas-phase carbon, oxygen, and nitrogen in the IM Lup protoplanetary disk," The Astrophysical Journal 865, 155 (2018).
[8] Bergner, J. B., Rajappan, M., and Öberg, K. I., "HCN snow lines in protoplanetary disks: Constraints from ice desorption experiments," The Astrophysical Journal 933, 206 (2022).
[9] Loison, J.-C., Wakelam, V., and Hickson, K. M., "The interstellar gas-phase chemistry of HCN and HNC," Monthly Notices of the Royal Astronomical Society 443, 398-410 (2014).
[10] Hirota, T., Yamamoto, S., Mikami, H., and Ohishi, M., "Abundances of HCN and HNC in dark cloud cores," The Astrophysical Journal 503, 717 (1998).
[11] Tennekes, P. P., Harju, J., Juvela, M., and Tóth, L. V., "HCN and HNC mapping of the protostellar core Chamaeleon-MMS1," A&A 456, 1037-1043 (2006).
[12] Schöier, F. L., Jørgensen, J. K., van Dishoeck, E. F., and Blake, G. A., "Does IRAS 16293-2422 have a hot core? Chemical inventory and abundance changes in its protostellar environment," A&A 390, 1001-1021 (2002).
[13] Schilke, P., Walmsley, C. M., Pineau Des Forets, G., Roueff, E., Flower, D. R., and Guilloteau, S., "A study of HCN, HNC and their isotopometers in OMC-1. I. Abundances and chemistry," A&A 256, 595-612 (1992).
[14] Hernández Vera, M., Lique, F., Dumouchel, F., Hily-Blant, P., and Faure, A., "The rotational excitation of the HCN and HNC molecules by H2 revisited," Monthly Notices of the Royal Astronomical Society 468, 1084-1091 (2017).
[15] Brünken, S., Sipilä, O., Chambers, E. T., Harju, J., Caselli, P., Asvany, O., Honingh, C. E., Kaminski, T., Menten, K. M., Stutzki, J., and Schlemmer, S., "H2D+ observations give an age of at least one million years for a cloud core forming Sun-like stars," Nature 516, 219 (2014).
[16] Semaniak, J., Minaev, B. F., Derkatch, A. M., Hellberg, F., Neau, A., Rosén, S., Thomas, R., Larsson, M., Danared, H., Paál, A., and af Ugglas, M., "Dissociative recombination of HCNH+: Absolute cross-sections and branching ratios," The Astrophysical Journal Supplement Series 135, 275 (2001).
[17] Quénard, D., Vastel, C., Ceccarelli, C., Hily-Blant, P., Lefloch, B., and Bachiller, R., "Detection of the HC3NH+ and HCNH+ ions in the L1544 pre-stellar core," Monthly Notices of the Royal Astronomical Society 470, 3194-3205 (2017).
[18] Fontani, F., Colzi, L., Redaelli, E., Sipilä, O., and Caselli, P., "First survey of HCNH+ in high-mass star-forming cloud cores," A&A 651, A94 (2021).
[19] Scott, G. B., Fairley, D. A., Freeman, C. G., McEwan, M. J., Spanel, P., and Smith, D., "Gas phase reactions of some positive ions with atomic and molecular hydrogen at 300 K," The Journal of Chemical Physics 106, 3982-3987 (1997).
[20] Petrie, S., Freeman, C. G., McEwan, M. J., and Ferguson, E. E., "The ion chemistry of HNC+/HCN+ isomers: astrochemical implications," Monthly Notices of the Royal Astronomical Society 248, 272-275 (1991).
[21] Raksit, A., Schiff, H., and Bohme, D., "A selected ion flow tube study of the kinetics of CN+ reactions at 296 ± 2 K," International Journal of Mass Spectrometry and Ion Processes 56, 321-335 (1984).
[22] McEwan, M., Anicich, V., Huntress, W., Kemper, P., and Bowers, M., "Reactions of CN+ and C2N+ ions," International Journal of Mass Spectrometry and Ion Physics 50, 179-187 (1983).
[23] Smith, M. A., Schlemmer, S., von Richthofen, J., and Gerlich, D., "HOC+ + H2 isomerization rate at 25 K: Implications for the observed [HCO+]/[HOC+] ratios in the interstellar medium," The Astrophysical Journal 578, L87 (2002).
[24] Aguado, A., Roncero, O., Zanchet, A., Agúndez, M., and Cernicharo, J., "The photodissociation of HCN and HNC: Effects on the HNC/HCN abundance ratio in the interstellar medium," The Astrophysical Journal 838, 33 (2017).
[25] Thorwirth, S., Schreier, P., Salomon, T., Schlemmer, S., and Asvany, O., "Pure rotational spectrum of CN+," The Astrophysical Journal Letters 882, L6 (2019).
[26] Amano, T., Hashimoto, K., and Hirao, T., "Submillimeter-wave spectroscopy of HCNH+ and CH3CNH+," Journal of Molecular Structure 795, 190-193 (2006).
[27] Forney, D., Thompson, W. E., and Jacox, M. E., "The vibrational spectra of molecular ions isolated in solid neon. IX. HCN+, HNC+, and CN-," The Journal of Chemical Physics 97, 1664-1674 (1992).
[28] Fridh, C. and Åsbrink, L., "Photoelectron and electron impact spectrum of HCN," Journal of Electron Spectroscopy and Related Phenomena 7, 119-138 (1975).
[29] Eland, J., Field, T., Baltzer, P., and Hirst, D., "Photoelectron spectra, electronic structure, coincidence spectra and dissociation mechanisms of the hydrogen cyanide cation," Chemical Physics 229, 149-163 (1998).
[30] Gans, B., Garcia, G. A., Boyé-Péronne, S., Pratt, S. T., Guillemin, J.-C., Aguado, A., Roncero, O., and Loison, J.-C., "Origin band of the first photoionizing transition of hydrogen isocyanide," Phys. Chem. Chem. Phys. 21, 2337-2344 (2019).
[31] Wisthaler, A., Hansel, A., Schwarzmann, M., Scheiring, C., Lindinger, W., and Ferguson, E. E., "Relaxation of vibrationally excited HCN+ and DCN+ ions in collisions with He," The Journal of Chemical Physics 112, 731-735 (2000).
[32] Dahlmann, F., Jusko, P., Lara-Moreno, M., Halvick, P., Marimuthu, A. N., Michaelsen, T., Wild, R., Geistlinger, K., Schlemmer, S., Stoecklin, T., Wester, R., and Brünken, S., "Predissociation spectroscopy of cold CN–H2 and CN–D2," Molecular Physics 120, e2085204 (2022).
[33] Gottlieb, C. A., Brünken, S., McCarthy, M. C., and Thaddeus, P., "The rotational spectrum of CN-," The Journal of Chemical Physics 126, 191101 (2007).
[34] Amano, T., "Extended negative glow and 'hollow anode' discharges for submillimeter-wave observation of CN-, C2H-, and C4H-," The Journal of Chemical Physics 129, 244305 (2008).
[35] Agúndez, M., Cernicharo, J., Guélin, M., Kahane, C., Roueff, E., Klos, J., Aoiz, F. J., Lique, F., Marcelino, N., Goicoechea, J. R., González García, M., Gottlieb, C. A., McCarthy, M. C., and Thaddeus, P., "Astronomical identification of CN-, the smallest observed molecular anion," A&A 517, L2 (2010).
[36] van Mourik, T., Harris, G. J., Polyansky, O. L., Tennyson, J., Császár, A. G., and Knowles, P. J., "Ab initio global potential, dipole, adiabatic, and relativistic correction surfaces for the HCN–HNC system," The Journal of Chemical Physics 115, 3706-3718 (2001).
[37] Nguyen, T. L., Baraban, J. H., Ruscic, B., and Stanton, J. F., "On the HCN – HNC energy difference," The Journal of Physical Chemistry A 119, 10929-10934 (2015).
[38] Khalouf-Rivera, J., Carvajal, M., Santos, L. F., and Pérez-Bernal, F., "Calculation of transition state energies in the HCN–HNC isomerization with an algebraic model," The Journal of Physical Chemistry A 123, 9544-9551 (2019).
[39] Jusko, P., Jiménez-Redondo, M., and Caselli, P., "Cold CAS ion trap – 22 pole trap with ring electrodes for astrochemistry," Molecular Physics (2023), doi:10.1080/00268976.2023.2217744.
[40] Gerlich, D., "Inhomogeneous RF fields: A versatile tool for the study of processes with slow ions," in Adv. Chem. Phys.: State-Selected and State-to-State Ion-Molecule Reaction Dynamics, Vol. LXXXII, edited by C.-Y. Ng and M. Baer (Wiley, New York, 1992), pp. 1-176.
[41] Smith, D., Glosík, J., Skalský, V., Španěl, P., and Lindinger, W., "A further investigation of the reaction of C2H2+ with H2," International Journal of Mass Spectrometry and Ion Processes 129, 145-153 (1993).
[42] Scott, G. B. I., Milligan, D. B., Fairley, D. A., Freeman, C. G., and McEwan, M. J., "A selected ion flow tube study of the reactions of small CmHn+ ions with O atoms," The Journal of Chemical Physics 112, 4959-4965 (2000).
[43] Viggiano, A. A., Albritton, D. L., Fehsenfeld, F. C., Adams, N. G., Smith, D., and Howorka, F., "Laboratory studies of some ion-atom reactions related to interstellar molecular synthesis," The Astrophysical Journal 236, 492-497 (1980).
[44] Petrie, S., Freeman, C. G., Meot-Ner, M., McEwan, M. J., and Ferguson, E. E., "Experimental study of HCN+ and HNC+ ion chemistry," Journal of the American Chemical Society 112, 7121-7126 (1990).
[45] Hansel, A., Glantschnig, M., Scheiring, C., Lindinger, W., and Ferguson, E. E., "Energy dependence of the isomerization of HCN+ to HNC+ via ion molecule reactions," The Journal of Chemical Physics 109, 1743-1747 (1998).
[46] Milenko, Y., Karnatsevich, L., and Kogan, V., "On temperature dependence of the polarizability of H_2 and D_2 molecules," Physica 60, 90-96 (1972).
[47] Bryson, C. E., Cazcarra, V., and Levenson, L. L., "Sublimation rates and vapor pressures of water, carbon dioxide, nitrous oxide, and xenon," Journal of Chemical & Engineering Data 19, 107-110 (1974).
[48] Clary, D., Smith, D., and Adams, N., "Temperature dependence of rate coefficients for reactions of ions with dipolar molecules," Chemical Physics Letters 119, 320-326 (1985).
[49] Wakelam, V., Herbst, E., Loison, J.-C., Smith, I. W. M., Chandrasekaran, V., Pavone, B., Adams, N. G., Bacchus-Montabonel, M.-C., Bergeat, A., Béroff, K., Bierbaum, V. M., Chabot, M., Dalgarno, A., van Dishoeck, E. F., Faure, A., Geppert, W. D., Gerlich, D., Galli, D., Hébrard, E., Hersant, F., Hickson, K. M., Honvault, P., Klippenstein, S. J., Picard, S. L., Nyman, G., Pernot, P., Schlemmer, S., Selsis, F., Sims, I. R., Talbi, D., Tennyson, J., Troe, J., Wester, R., and Wiesenfeld, L., "A kinetic database for astrochemistry (KIDA)," The Astrophysical Journal Supplement Series 199, 21 (2012).
[50] Schmid, P. C., Asvany, O., Salomon, T., Thorwirth, S., and Schlemmer, S., "Leak-out spectroscopy, a universal method of action spectroscopy in cold ion traps," The Journal of Physical Chemistry A 126, 8111-8117 (2022).
|
http://arxiv.org/abs/2306.05055v1
|
20230608092033
|
Magnetic anisotropy of superconducting transition in S/AF heterostructures with spin-orbit coupling
|
[
"G. A. Bobkov",
"I. V. Bobkova",
"A. A. Golubov"
] |
cond-mat.supr-con
|
[
"cond-mat.supr-con",
"cond-mat.mes-hall"
] |
Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Moscow region, Russia
Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Moscow region, Russia
National Research University Higher School of Economics, 101000 Moscow, Russia
Faculty of Science and Technology and MESA^+ Institute for Nanotechnology, University of Twente, 7500 AE Enschede, The Netherlands
The influence of Rashba spin-orbit coupling (SOC) on superconducting correlations in thin-film superconductor/antiferromagnet (S/AF) structures with compensated interfaces is studied. A unique effect of anisotropic enhancement of proximity-induced triplet correlations by the SOC is predicted. It manifests itself in the anisotropy of the superconducting critical temperature T_c with respect to orientation of the Néel vector relative to the S/AF interface, which is opposite to the behaviour of T_c in superconductor/ferromagnet structures. We show that the anisotropy is controlled by the chemical potential of the superconductor and, therefore, can be adjusted in (quasi)2D structures.
Magnetic anisotropy of superconducting transition in S/AF heterostructures with spin-orbit coupling
A.A. Golubov
July 31, 2023
===================================================================================================
Introduction.โ
The interplay between superconductivity and ferromagnetism in thin film superconductor/ferromagnet (S/F) heterostructures usually manifests itself as a change in superconductivity of the S layer due to the proximity to the F layer. The most well-known and studied effect is induced by the magnetic proximity triplet superconducting correlations<cit.>. Further studies <cit.> have predicted and observed that spin-orbit coupling (SOC) in S/F bilayers can produce an anisotropic depairing effect on triplets. One of the manifestations of the anisotropic depairing is that the critical temperature T_c of the bilayer depends on the orientation of the F layer magnetization with respect to the S/F interface<cit.>.
One of the consequences of a SOC-driven modulation of superconductivity is the possibility of a reciprocal effect, i.e., a reorientation of the F layer magnetization due to superconductivity<cit.>. For sufficiently thin ferromagnetic layers, a
change from in-plane (IP) to out-of-plane (OOP) magnetization has been predicted <cit.> and realised <cit.> in
magnetic tunnel junctions. The possibility to control magnetic anisotropies using superconductivity is a key step
in designing future cryogenic magnetic memories and spintronics applications.
However, the finite net magnetization of ferromagnets presents a significant drawback
for applications in nanoscale devices. On the other hand, antiferromagnets (AFs) are magnetically ordered materials
with zero net magnetization and negligible stray fields,
as well as intrinsic high-frequency dynamics. Due to these advantages they are being actively studied as alternatives to ferromagnets for
spintronics applications<cit.>. For AF-based superconducting spintronics it is of crucial importance to study proximity effects in superconductor/antiferromagnet (S/AF) heterostructures <cit.>. It has been reported that in S/AF structures superconducting triplet correlations also arise due to the proximity effect. Depending on the particular antiferromagnetic order, system geometry and the interface properties they can be of different types. In particular, if the S/AF interface possesses nonzero net magnetization (uncompensated interface), it induces Zeeman splitting and conventional triplet correlations in the adjacent superconductor<cit.>. The compensated S/AF interfaces also induce triplet correlations, but they are of the Néel type<cit.>, that is their amplitude flips sign from one lattice site to the next, just
like the Néel spin order in antiferromagnets. For S/AF heterostructures with canted AFs the mixture of conventional and Néel triplet correlations has been predicted <cit.>.
At the same time effects of SOC on the proximity effect in S/AF hybrids are much less explored. In particular, the anomalous phase shift in S/AF/S Josephson junctions with SOC<cit.>, anisotropy of the critical current <cit.> and topological superconductivity in S/AF hybrids <cit.> have been predicted, also the anisotropic magnetoresistance has been calculated<cit.>. Here we study anisotropic effect of Rashba SOC on triplets in S/AF thin film bilayers with fully compensated AFs. It is found that in addition to the anisotropic depairing of triplet correlations known in S/F hybrids, a unique effect of anisotropic enhancement of the triplets by the SOC occurs in the S/AF case. We unveil the physical mechanism of the effect and demonstrate that it can manifest itself in opposite trend in the anisotropy of the superconducting transition as compared to S/F heterostructures. Namely, in S/F thin film bilayers the critical temperature is higher for OOP magnetization orientation than for the IP magnetization<cit.> due to the fact that
SOC suppresses triplets oriented OOP more than
triplets oriented IP. Here we demonstrate the possibility of the opposite effect for S/AF thin-film bilayers with SOC, which occurs due to the anisotropic enhancement of triplets by SOC.
System and theoretical approach.โWe consider a thin-film S/AF bilayer, where the antiferromagnet is assumed to be an insulator, see Fig.ย <ref>. The magnetism is staggered. We assume that the S/AF interface is fully compensated, that is the interface magnetization has zero average value. The sites in the superconductor are marked by the radius-vector i = (i_x,i_y, i_z)^T, the interface is in the (x,z)-plane. The influence of the antiferromagnetic insulator on the superconductor is described by the exchange field h_ i = (-1)^i_x+i_z h <cit.>. The superconductor S is assumed to be homogeneous along the y-direction and is described by the lattice Hamiltonian:
Ĥ = -t Σ_⟨ij⟩,σ ĉ^†_iσ ĉ_jσ + Σ_i (Δ_i ĉ^†_i↑ ĉ^†_i↓ + H.c.)
- μ Σ_i,σ n̂_iσ
+ Σ_i,αβ ĉ^†_iα (h_i σ)_αβ ĉ_iβ
+ i V_R Σ_i (ĉ^†_i σ_z ĉ_i+e_x - ĉ^†_i σ_x ĉ_i+e_z - H.c.),
where ⟨ij⟩ means summation over the nearest neighbors, ĉ^†_iσ (ĉ_iσ) is the creation (annihilation) operator for an electron with spin σ at site i. t parameterizes the hopping between adjacent sites, Δ_i accounts for on-site s-wave pairing, μ is the electron chemical potential, and the last term describes the Rashba SOC with Rashba constant V_R. n̂_iσ = ĉ^†_iσ ĉ_iσ is the particle number operator. e_k with k=x,y,z are unit vectors along the corresponding axis. The lattice constant is denoted by a. Here we define Pauli matrices σ = (σ_x, σ_y, σ_z)^T in spin space.
Anisotropy of triplets and T_c.—The numerical calculations are performed in the formalism of the Gor'kov Green's functions in the two-sublattice framework<cit.>, generalized to take into account the SOC. Relegating technical details of the Green's functions formalism to the Supplemental material<cit.>, here we present and discuss the dependencies T_c(h) for S/AF structures with IP and OOP orientations of the Néel vector. They have been compared to T_c(h) of the S/F system with the same absolute value of the induced exchange field h. The numerical results are presented in Fig. <ref>. First, it is seen that while for S/F heterostructures T_c is always higher in the presence of SOC (dashed curves), for AF/S heterostructures the trends are opposite for large μ ≫ T_c0 and for small μ ≲ T_c0, where T_c0 = T_c(h=0). At small μ the behavior of T_c is qualitatively similar to the case of S/F bilayers, and at large μ it is opposite - the presence of SOC suppresses T_c.
Furthermore, in the presence of SOC T_c is anisotropic depending on the angle θ between the magnetization and the interface plane. For S/F heterostructures T_c is always higher for OOP orientation (dashed curves) <cit.>. At the same time for AF/S heterostructures the ratio between the values of T_c for IP and OOP orientations is again opposite for large μ ≫ T_c0 and for small μ ≲ T_c0.
At μ ≲ T_c0 for S/AF heterostructures the ratio between T_c of IP and OOP is the same as for the S/F case. It is explained by the fact that triplet superconducting correlations induced by the proximity effect with the magnet are suppressed more strongly for the OOP configuration and, consequently, have a less damaging effect on the singlet superconductivity. At μ ≫ T_c0 the anisotropy of the critical temperature, that is the difference between the IP and OOP T_c, is opposite. This is due to the existence of a unique mechanism of enhancement of the Néel-type triplet correlations in S/AF structures, which is more effective for OOP orientation of the Néel vector.
The physical description of both mechanisms is provided below. There is a crossover between the opposite anisotropy regimes at some intermediate value of μ ∼ π T_c0. For the chosen parameters of the system this crossover value is μ_c ≈ 0.25t ≈ π T_c0. It is interesting that the ratio between μ and the superconducting energy scale T_c0 is crucial for very different aspects of the proximity physics of S/AF heterostructures. For example, this parameter controls the relative importance of different mechanisms of superconductivity suppression in S/AF hybrids <cit.>. The superconductivity suppression is
dominated by the Néel triplets at μ ≲ T_c0. On the contrary, if μ ≫ T_c0 the superconductivity suppression is dominated by nonmagnetic disorder. For S/AF heterostructures with canted AFs the opposite dependencies of T_c on the canting angle were also predicted at small and large values of μ <cit.>.
Discussion of the mechanisms of T_c anisotropy.—Now we discuss the physical reasons for the T_c anisotropy in S/AF structures. In order to unveil them let us consider the quasiclassical Eilenberger equations, developed recently for treating the proximity effect in S/AF heterostructures <cit.>, and their analytical solutions in the presence of Rashba SOC. The general form of the Eilenberger equation, generalized for treating the SOC, is provided in the Supplemental material<cit.>. In the vicinity of the critical temperature the Eilenberger equation can be linearized with respect to the anomalous Green's function, which for the problem under consideration can be written as f̌ = f_s σ_0 ρ_x + f^i σ ρ_i. Here f_s is its singlet component in spin space, i=0,y,z and f^i is the vector triplet component in spin space, corresponding to the 0,y,z component in sublattice space. Recall that the y-component in sublattice space accounts for the on-site Néel-type triplet correlations f^AA = -f^BB, while the z(0)-component describes nonlocal Néel (conventional) correlations f^AB = -(+) f^BA. The linearized equations take the form:
2iω f_s - 2i h f^y = Δ (g_N - g̃_N),
2iω f^0 + 2i h × f^z = Δ (g_N^0 - g̃_N^0),
± 2iμ f^y(z) + 2i f^z(y) × h_R = ∓ iΔ (g_N^y(z) + g̃_N^y(z)),
where h_R = (V_R/2ta) (e_y × v_F) is the effective Rashba pseudomagnetic field seen by an electron moving along the trajectory determined by the Fermi velocity v_F and ω is the fermionic Matsubara frequency. The right-hand side contains the electron (hole) normal Green's function ǧ_N (ǧ̃_N) = g_N (g̃_N) σ_0 ρ_x + g_N^i σ (g̃_N^i σ) ρ_i. The vector part of the on-site normal state quasiclassical Green's function accounting for the Néel order up to the leading order with respect to (h, h_R)/|iω+μ| takes the form:
g_N^y = -i h sgnω/(iω+μ) - i [h_R × (h × h_R)]/(iω+μ)^3 sgnω,
g_N^z = i (h_R × h)/(iω+μ)^2 sgnω .
The vector component g_N^y accounts for the Néel-type spin polarization of the on-site DOS, P_N^A = -P_N^B ≡ P_N, along the direction N (|N| = 1) in spin space,
P_N(ε) = 2N_F Re[i N g_N^R,y(ε)],
where g^R,y(ε) is obtained from g_N^y(ω>0) by the substitution iω → ε+iδ and N_F is the normal state DOS at the Fermi surface and at h=0.
It is important to compare Eqs. (<ref>)-(<ref>) to the analogous equations for S/F heterostructures. In this case the anomalous Green's function has no Néel-type ρ_y(z) components and the corresponding Eilenberger equations can be written without the sublattice structure, but we write them in terms of the same two-sublattice formalism in order to be directly compared with the AF case. In this case f̌_F = f_F,s σ_0 ρ_x + f_F,0 σ_0 ρ_0 + f_F^x σ ρ_x + f_F^0 σ ρ_0. It obeys the following Eilenberger equations:
2iω f_F,s - 2 h f_F^x = 2Δ sgnω,
2iω f_F,0 - 2 h f_F^0 = 0,
iω f_F^x(0) - h f_F,s(0) + i f_F^0(x) × h_R = 0 .
The most important difference between the equations for triplet correlations in AF/S and F/S structures, Eqs. (<ref>) and (<ref>), is that they contain different mechanisms of triplet generation. While in S/F hybrids the triplets are generated by the term h f_F,s(0) via the direct singlet-triplet conversion <cit.>, such a type of triplet generator is absent for AF/S structures. Instead, the triplets are generated via the vector component of the normal Green's function that is antisymmetric with respect to the Matsubara frequencies, P_N^M = i[g_N^y(ω) + g̃_N^y(ω)] = i[g_N^y(ω) - g_N^y(-ω)]. It automatically provides the odd-frequency character of the on-site triplets. Turning to the real energies we obtain P_N^s = i[g_N^y,R(ε) + g̃_N^y,R(ε)] = i[g_N^y,R(ε) + g_N^y,R(-ε)]. It is seen from Eqs. (<ref>) and (<ref>) that this expression determines the part of the Néel-type spin polarization of the DOS that is symmetric with respect to ε, P_N^s N = [P_N(ε) + P_N(-ε)]/2N_F. Therefore, the Néel triplets are generated by the staggered polarization of the normal metal. An analogous term is absent in S/F heterostructures. In the case h ≪ ε_F the spin polarization of the normal state is negligible and is considered to be zero in the framework of the quasiclassical approximation. This limit is always relevant for S/F and S/AF heterostructures because h ≲ Δ in order to avoid the complete depairing of superconductivity.
Consequently, the physical mechanisms of triplet generation are different in S/F and S/AF structures. The effect of SOC on the triplets can also be different. It can be seen directly from the solutions of Eqs. (<ref>) and (<ref>). Up to the leading order in h/|iω+μ| and h_R/|iω+μ| the on-site Néel-type triplet correlations take the form:
f^y = sgnω [ iΔ h/(μ^2+ω^2) + iΔ [h_R × (h × h_R)] (3μ^2-ω^2)/(μ^2+ω^2)^3 ]
It can be compared to the solution for the on-site triplet correlation in the S/F thin-film bilayer:
f_F^x = - sgnω [ Δ h/ω^2 - Δ [h_R × (h × h_R)]/ω^4 ].
Recall that the f_F^x anomalous Green's function accounts for the conventional, not Néel, on-site triplet correlations f_F^x = f_F^AA = f_F^BB.
By comparing Eqs. (<ref>) and (<ref>) we see that in S/F heterostructures the SOC weakens the triplets; the same is valid for S/AF structures with μ ≪ π T_c0, when we can disregard μ with respect to ω in Eq. (<ref>). At the same time in S/AF with π T_c0 ≲ μ its influence is just the opposite - it enhances the triplets. In both cases the influence of SOC on the triplets is anisotropic. The anisotropy is determined by the vector structure h_R × (h × h_R). The influence of SOC on the triplets, that is suppression for S/F and enhancement for S/AF bilayers, is maximal when h_R ⊥ h for all electron trajectories n_F, which is realized for OOP orientation of the magnetization. It monotonically declines with decreasing θ.
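The opposite roles of the SOC in the two cases can be read off from the prefactors of the h_R × (h × h_R) terms in the two solutions above. A small numerical illustration (dimensionless units and arbitrary parameter values chosen only to expose the sign change; the ratios below would still have to be multiplied by the common positive geometric factor h_R^2 sin^2 of the angle between h and h_R):

import numpy as np

def soc_to_bare_ratio_saf(mu, omega):
    # f^y solution: [SOC coefficient] / [bare coefficient]; positive values mean the
    # SOC term adds to the bare Neel triplet (enhancement), negative means depairing
    return (3 * mu**2 - omega**2) / (mu**2 + omega**2) ** 2

def soc_to_bare_ratio_sf(omega):
    # f_F^x solution: the SOC term always opposes the bare triplet (suppression)
    return -1.0 / omega**2

omega = np.pi  # lowest Matsubara frequency in units of the temperature
for mu in (0.1, np.pi / np.sqrt(3.0), 10.0):
    print(f"mu={mu:6.3f}  S/AF: {soc_to_bare_ratio_saf(mu, omega):+.4f}  "
          f"S/F: {soc_to_bare_ratio_sf(omega):+.4f}")

The S/AF ratio changes sign at mu = omega/sqrt(3), which illustrates the crossover between the S/F-like regime at small mu and the enhancement regime at large mu.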
The physical mechanism for the suppression of the triplet correlations with f ∝ h by SOC in S/F structures is well known. The triplets f = -sgnω Δ h/ω^2 are obtained from singlets by the process of singlet-triplet conversion<cit.>. Then the SOC partially converts these original triplets into the odd-in-momentum component ∝ h_R × h. As a result, the amplitude of the initial triplets along h decreases. For dirty systems this process of triplet depairing by SOC is analogous to the anisotropic Dyakonov-Perel spin relaxation <cit.>, because the odd-in-momentum component ∝ h_R × h vanishes after impurity averaging over trajectories and only the reduced triplets along h survive <cit.>.
On the contrary, the enhancement of triplets by SOC in S/AF structures is caused by the analogous enhancement of the normal state polarization. From Eqs. (<ref>) and (<ref>) it follows that up to the leading order in h/|ε+μ| and h_R/|ε+μ| the Néel-type polarization of the normal state DOS along h takes the form
P_h^A(ε) = -P_h^B(ε) = 2N_F [ h/(ε+μ) + h h_R^2 sin^2 φ/(ε+μ)^3 ],
where h = |h| and φ is the angle between h and h_R. It is seen that (i) the absolute value of the polarization is always enhanced by the SOC and (ii) the enhancement is anisotropic. It reaches the maximal possible value for all the trajectories for OOP orientation of h because h_R is always in-plane. It is worth noting that Eq. (<ref>) is not applicable at μ=0 because in this limit the most important contribution to the superconducting properties is given by the small energies ε ≲ T_c0, where Eq. (<ref>) is not valid because the conditions h ≪ |ε+μ| and h_R ≪ |ε+μ| are violated. Therefore, this consideration is not applicable for the explanation of the results at μ ≲ π T_c0 and we comment on this limit later.
The reason for the enhancement described above is the specific reconstruction of the normal state electron spectra under the influence of the SOC. In Fig. <ref> we demonstrate the normal state electron spectra of the S layer proximitized by the AF for h_R ∥ h [(a)] and h_R ⊥ h [(b)]. At h_R = 0 the spectra are doubly degenerate. The eigenstates at the Fermi surface for an electron with spin σ = ±1 take the form
(ψ_iσ^A; ψ_iσ^B)(p) = (√(1+σ h/μ); √(1-σ h/μ)) e^{i p i},
The distribution of the probability density of these states in the lattice strongly oscillates between A and B sites. The total spin polarization at the Fermi level caused by these states is P_h^A = -P_h^B = 2N_F h/μ. The nonzero SOC splits the spectra. The splitting is horizontal for h_R ∥ h, which is analogous to the conventional action of the Rashba SOC in nonmagnetic metals. The net spin polarization at the Fermi level is represented by the sum over the states 1-4 in Fig. <ref>(a), multiplied by N_F. It is again P_h^A = -P_h^B = 2N_F h/μ. On the contrary, for h_R ⊥ h the splitting of the spectra is vertical, which is reminiscent of the Zeeman splitting in conventional metals. Effectively it is described by opposite shifts of the chemical potentials of the two branches, μ → μ ± h_R. The resulting spin polarizations of the states 1-4 are P_1,4 = h/(μ + h_R) and P_2,3 = h/(μ - h_R). The total on-site spin polarization is P_h^A = -P_h^B = 2N_F h μ/(μ^2 - h_R^2). Expanding this expression with respect to h_R/μ we obtain Eq. (<ref>) at ε=0 for this h_R ⊥ h configuration. The enhancement of the staggered spin polarization appears as a nonlinear effect of the opposite chemical potential shifts.
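The expansion invoked in the last step is elementary and can be checked symbolically. The short sketch below is an illustration only (with N_F set to 1); it confirms that 2hμ/(μ^2 - h_R^2) reproduces the h/μ + h h_R^2/μ^3 structure of the polarization formula to leading order in h_R/μ.

import sympy as sp

h, hR, mu = sp.symbols('h h_R mu', positive=True)

exact = 2 * h * mu / (mu**2 - hR**2)
# expand in powers of h_R around h_R = 0 and keep terms up to h_R^2
expansion = sp.series(exact, hR, 0, 3).removeO()
print(sp.simplify(expansion - 2 * (h / mu + h * hR**2 / mu**3)))  # -> 0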
If μ ≲ π T_c the antiferromagnetic gap opens in the superconductor in the vicinity of the normal state Fermi surface. In this case the most important contribution to the pairing correlations is given by the electronic states at the edge of the gap. They correspond to ξ → 0, which means that the electrons are practically fully localized at one of the sublattices. Consequently, they only feel the magnetization of the corresponding sublattice and behave in the same way as in a ferromagnet. For this reason our results at μ ≲ π T_c demonstrate the same trends as the corresponding results for S/F structures.
Summary.—The effect of Rashba SOC on triplets has been studied in S/AF thin film bilayers with fully compensated AFs. A unique effect of anisotropic enhancement of the triplets by SOC is found. It can be experimentally observed via the anisotropy of the superconducting transition. Our analysis highlights the importance of the value of the chemical potential μ for the physics of S/AF thin-film hybrids. At μ ≲ π T_c0 the influence of the SOC on the superconducting properties of the S/AF bilayers is the same as for the S/F case - the SOC anisotropically suppresses triplets and enhances T_c, and the maximal T_c is reached for OOP orientation of the Néel vector. On the contrary, at μ > π T_c0 the SOC anisotropically enhances triplets and suppresses T_c, and T_c is minimal for OOP orientation. The effect can be especially interesting for heterostructures composed of 2D magnets and superconductors because of the possibility of external control of the chemical potential. The anisotropy of the superconducting transition opens a perspective for a reciprocal effect, that is, the reorientation of the Néel vector due to superconductivity. In its turn, this possibility is an important step for further developments in AF-based superconducting spintronics.
The financial support from the Russian
Science Foundation via the RSF project No.22-22-00522 is acknowledged.
ยง SUPPLEMENTAL MATERIAL: TWO-SUBLATTICE FORMALISM OF GREEN'S FUNCTIONS
The numerical calculations of the critical temperature are performed in the formalism of the Gor'kov Green's functions in the two-sublattice framework<cit.>. The unit cell with two sites in it is chosen as shown in Fig. 1 of the main text. Introducing the two-sublattice Nambu spinor č_i = (ĉ_i↑^A, ĉ_i↓^A, ĉ_i↑^B, ĉ_i↓^B, ĉ_i↑^A†, ĉ_i↓^A†, ĉ_i↑^B†, ĉ_i↓^B†)^T we define the Green's function as Ǧ_ij(τ_1, τ_2) = -⟨T_τ č_i(τ_1) č_j^†(τ_2)⟩,
where ⟨T_τ ...⟩ means imaginary time-ordered thermal averaging and i is now the sublattice index. The Green's function is an 8 × 8 matrix in the direct product of spin, particle-hole and sublattice spaces. Therefore, we define the Pauli matrices σ = (σ_x, σ_y, σ_z)^T in spin space, τ = (τ_x, τ_y, τ_z)^T in particle-hole space and ρ = (ρ_x, ρ_y, ρ_z)^T in sublattice space. Further we assume the system to be homogeneous along the interface and consider the Fourier-transformed Green's function:
Ǧ(p) = ∫ d^3r e^{-i p (i - j)} Ǧ_ij,
where the integration is over i - j. Then to make the resulting Gor'kov equations simpler it is convenient to define the following transformed Green's function:
Ǧ̃(p) = ( [ 1 0; 0 -iσ_y ] )_τ ρ_x e^{-i p_z a_z ρ_z/2} Ǧ(p) e^{i p_z a_z ρ_z/2} ( [ 1 0; 0 -iσ_y ] )_τ ,
where the subscript τ means that the explicit matrix structure corresponds to the particle-hole space. The Gor'kov equation for Ǧ̃(R, p) was originally derived in Ref. Bobkov2022 and here is generalized to take into account the Rashba SOC. The resulting Gor'kov equation takes the form:
[ Ȟ ρ_x - ξ(p) - h_R(p) σ ] Ǧ̃ = 1,
Ȟ = iω_m τ_z + μ + τ_z Δ̌ - h σ τ_z ρ_z ,
where ξ(p) = -2t(cos p_x a_x + cos p_y a_y + cos p_z a_z), ω_m = πT(2m+1) is the Matsubara frequency, h_R(p) = (V_R/2ta) (e_y × v(p)) is the effective Rashba pseudomagnetic field seen by an electron moving along the trajectory determined by the velocity v(p) = dξ/dp = 2t(a_x sin[p_x a_x] + a_y sin[p_y a_y] + a_z sin[p_z a_z]), and Δ̌ = Δ(R) τ_+ + Δ^*(R) τ_- with τ_± = (τ_x ± iτ_y)/2. For simplicity we assume a_x = a_y = a_z = a.
The superconducting order parameter Δ in S is calculated self-consistently:
Δ = -T Σ_ω_m g ∫ Tr(Ǧ̃(p) τ_+ σ_0 ρ_x)/8 d^3p/(2π)^3,
where g is the pairing constant. The critical temperature is calculated from the version of Eqs. (<ref>) and (<ref>) linearized with respect to Δ.
Two-sublattice quasiclassical equations, which were derived in <cit.>, can also be generalized to take into the SOC. We introduce quasiclassical ฮพ-integrated Green's function:
วง( R, p_F) = -1/i ฯโซวฆฬฬ( R, p)dฮพ,
where ฮพ( p) = -2t (cos p_x a_x+cos p_y a_y+cos p_z a_z) is the normal state electron dispersion counted from the Fermi energy. Here we also allow for a possible spatial inhomogeneity in the plane of the S/AF interface, where R is the in-plane radius-vector. Performing the standard derivation of the quasiclassical equation <cit.> one obtains the following Eilenberger equation for the quasiclassical Green's function:
[ (i ฯ_m ฯ_z + ฮผ + ฯ_z ฮฬ( R) - h ( R) ฯฯ_z ฯ_z )ฯ_x - h_R( p_F) ฯฯ_0, วง( R, p_F) ] + i v_F โวง( R, p_F) = 0 .
|
http://arxiv.org/abs/2306.11103v1
|
20230619181047
|
Forest Parameter Prediction by Multiobjective Deep Learning of Regression Models Trained with Pseudo-Target Imputation
|
[
"Sara Bjรถrk",
"Stian N. Anfinsen",
"Michael Kampffmeyer",
"Erik Nรฆsset",
"Terje Gobakken",
"Lennart Noordermeer"
] |
cs.CV
|
[
"cs.CV",
"eess.IV"
] |
Forest Parameter Prediction by Multiobjective Deep Learning of Regression Models Trained with Pseudo-Target Imputation
Saraย Bjรถrk,
Stianย N. Anfinsen,
Michael Kampffmeyer,
Erikย Nรฆsset,
Terjeย Gobakken,
and Lennart Noordermeer
Manuscript received ; revised .
S. Bjรถrk is with the Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsรธ, Norway and the Earth Observation Team, Kongsberg Satellite Services, 9011 Tromsรธ, Norway (e-mail: [email protected]).
S. N. Anfinsen is with the Earth Observation Group, NORCE Norwegian Research Institute, 9019 Tromsรธ, Norway and the Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsรธ, Norway.
Michael Kampffmeyer is with the Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsรธ, Norway.
E. Nรฆsset, T. Gobakken and L. Noordermeer are with Faculty of Environmental Sciences and Natural Resource Management, Norwegian University of Life Sciences, 1432 Ås, Norway.
Received XXX, XXXX; accepted YYY, YYYY
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In prediction of forest parameters with data from remote sensing (RS), regression models have traditionally been trained on a small sample of ground reference data. This paper proposes to impute this sample of true prediction targets with data from an existing RS-based prediction map that we consider as pseudo-targets. This substantially increases the amount of target training data and leverages the use of deep learning (DL) for semi-supervised regression modelling. We use prediction maps constructed from airborne laser scanning (ALS) data to provide accurate pseudo-targets and free data from Sentinel-1's C-band synthetic aperture radar (SAR) as regressors. A modified U-Net architecture is adapted with a selection of different training objectives. We demonstrate that when a judicious combination of loss functions is used, the semi-supervised imputation strategy produces results that surpass traditional ALS-based regression models, even though C-band SAR data are generally considered inferior for forest monitoring. These results are consistent for experiments on above-ground biomass prediction in Tanzania and stem volume prediction in Norway, representing a diversity in parameters and forest types that emphasises the robustness of the approach.
Forest remote sensing, above-ground biomass (AGB), stem volume, synthetic aperture radar (SAR), Sentinel-1, airborne laser scanning (ALS), deep neural networks, regression modelling, U-Net, composite loss function, semi-supervised learning, pseudo-targets, imputation.
ยง INTRODUCTION
Accurate monitoring of forest above-ground biomass (AGB) is essential to better understand the carbon cycle. Vegetation biomass is, for example, a larger global store of carbon than the atmosphere <cit.>. Additionally, correctly monitoring, measuring and predicting the amount of available AGB is important for economic reasons, e.g. to estimate available raw materials or the potential for bioenergy <cit.>.
As the stem volume (SV) accounts for the highest proportion of biomass in each tree, typically 65-80% <cit.>, AGB monitoring often focuses on the available SV. In other applications, the total amount of available biomass is of interest, which comprises stems, stumps, branches, bark, seeds and foliage <cit.>.
Today, remote sensing (RS) data from radar, optical or airborne laser scanning systems (ALS) are commonly used together with a sparse sample of collected ground reference forest measurements to develop prediction models over larger areas and regions <cit.>.
Satellite and airborne RS have become an important source of information about these forest parameters and others.
Traditionally, AGB and SV prediction models use relatively simple statistical regression algorithms, such as multiple linear regression, or machine learning regression models like random forests or multilayer perceptrons (MLPs) <cit.>. These models are usually noncontextual, as they restrict the regressor information to the pixel that is being predicted and do not combine regressor and regressand information from neighbouring pixels, known as spatial context.
Remote sensing is commonly used to infer forest parameters on spatial scales that are coarser than the pixel size, for instance at stand level. Hence, there is no formal reason to avoid the use of contextual information, and one should select the method that provides the highest accuracy on the desired scale. This motivates the use of deep learning (DL) and convolutional neural networks (CNNs), whose popularity hinges on their efficient use of spatial context and the inference accuracy obtained by these highly flexible function approximators. The ability of CNNs to exploit spatial patterns was also pointed out in a recent review <cit.> as an explanation of why CNNs are particularly suitable for RS of vegetation.
A recent review <cit.> of DL methods applied to forestry concludes that these are in an early phase, although some work has emerged. We build our proposed method on Bjรถrk et al's sequential approach to forest biomass prediction <cit.>, which uses a conditional generative adversarial network (cGAN) to generate AGB prediction maps by using synthetic aperture radar (SAR) as regressors and AGB predictions from ALS as the regressand. Their regression approach consists of two models that operate in sequence to provide more target data for training the model that regresses on SAR data. This implies that the first regression model learns the mapping between a small set of ground reference data and RS data from a sensor known to provide a high correlation with the response variable. ALS data are suitable for this purpose <cit.>, but are expensive to acquire. Hence, the second model in the sequence establishes a relationship between the ALS-derived prediction map, as a surrogate for the ground reference data, and RS data from a sensor that offers large data amounts at low cost, namely the Sentinel-1 SAR sensors.
This paper preserves some of the principal ideas from <cit.>: The first is to train the regression model on an ALS-derived prediction map of the target forest parameter to increase the amount of training data. The motivation is that the small amount of ground reference data used to train conventional models limits their ability to capture the dynamics of the response variable, as demonstrated in <cit.>. The second is to carry forward the use of CNNs to leverage their exploitation of contextual information, their flexibility as regression functions, and their demonstrated performance in other applications.
At the same time, we make several new design choices to improve on the previous approach and remedy its weaknesses: Firstly, the sequential model is replaced by an approach where ground reference data are imputed with data from the ALS-derived prediction map. In practice, this is done by inserting the sparse set of true targets into the dense map of pseudo-targets. By letting these data sources together form the prediction target, the SAR-based prediction model can be trained simultaneously on ground reference data and the ALS-derived prediction map in a problem setting that we frame as semi-supervised learning;
A second improvement is that we replace or combine the generative adversarial network (GAN) loss used in <cit.> with a pixel-wise error loss and a frequency-aware spectral loss. This modification is motivated by an emerging awareness that the GAN loss used by the Pix2Pix <cit.> model employed in <cit.> may be well suited to preserve perceptual quality and photo-realism, which is required in many image-to-image translation tasks, but is less appropriate for the regression task that we address.
This paper has a stronger technical and methodological focus than <cit.> and emphasises the method's ability to handle different tasks and cases: It demonstrates the proposed regression framework both on AGB prediction in dry tropical forests in Tanzania and on SV prediction in boreal forests in Norway, representing different parameters and very different forest types. Another difference is that the ALS-derived SV predictions used as pseudo-targets in the Norwegian dataset cover spatially non-contiguous forest stands, and is not a wall-to-wall prediction map. We have adapted the CNN-based regression algorithm for use with such data by implementing masked computation of the loss functions.
In summary, we make the following contributions:
* We develop a method that enables us to train contextual deep learning models to predict forest parameters from C-band SAR data from the Sentinel-1 satellite.
* We enable the CNN-based regression model to use target data that consist of spatially disjoint polygons, thereby showing that it can be trained on complex datasets that arise in operational forest inventories.
* By testing the method on AGB prediction in Tanzania and SV prediction in Norway, we demonstrate that it can handle different forest parameters and forest types.
* We investigate an established consensus from the image super-resolution (SR) field about the trade-off between reconstruction accuracy and perceptual quality. For this purpose, we perform an ablation study of composite cost functions, including the GAN loss, a pixel-wise loss, and a recently proposed frequency loss.
* We demonstrate state-of-the-art prediction performance on datasets from Tanzania and Norway. Notably, we show that a deep learning model with C-band SAR data as input supersedes a conventional ALS-based prediction model after it has been trained on ground reference data imputed with ALS-derived predictions of the forest parameters.
The remainder of this paper is organised as follows: sec:related reviews published research on related topics in deep learning applied to forest parameter prediction and other topics relevant to the proposed method. sec:data presents the datasets used in this work. sec:method details the proposed approach and describes how we facilitate the imputation of pseudo-targets for regression modelling, enabling the CNN model to learn from continuous and discontinuous target data using a variety of loss functions. Experimental results are provided in sec:res and discussed in sec:disc. Finally, sec:conc concludes the paper.
ยง RELATED WORK
Bjรถrk et al. showed in a precursor of this paper <cit.> that the popular cGAN architecture Pix2pix <cit.> can be used in the forestry sector to predict AGB from Sentinel-1 data by training it on ALS-derived prediction maps. Their work inspired <cit.> to also exploit ALS-derived AGB prediction maps and cGANs to predict AGB from multispectral and radar imagery and to quantify aleatoric and epistemic uncertainty. Despite apparent similarities, the current paper distinguishes itself from both <cit.> and <cit.> in many ways. The differences from <cit.> are discussed in sec:intro when listing the contributions of the paper. Just like <cit.>, Leonhardt et al. <cit.> train their regression network with adversarial learning through a cGAN architecture, but pretrain the generator with a mean square error (MSE) loss to find a proper initialisation. Notably, their final goal is not point prediction in the MSE sense or according to similar metrics, but to develop probabilistic methods for AGB prediction that quantify uncertainty.
Another example of deep learning applied to AGB prediction is Pascarella et al. <cit.>, who show that a traditional U-Net <cit.> trained with a pixel-wise error loss can be used as a regression model to predict AGB from image patches of optical Sentinel-2 data. Compared to <cit.>, we focus on utilising data from the Sentinel-1 radar sensor that, as opposed to the optical Sentinel-2 sensor, can acquire data both at night and under cloudy conditions and is therefore a more reliable source of data.
Besides these examples, the literature on deep learning for regression modelling of forest parameters is sparse. This is also pointed out in the review of the use of CNNs in vegetation RS conducted by Kattenborn et al. <cit.>. It found that only 9% of the studies surveyed focused on regression modelling and only 8% were specifically related to forestry and forest parameter retrieval, such as biomass prediction.
A recently published review by Hamedianfar et al. <cit.> attributes this literature gap to the challenge of acquiring the large amounts of target data needed to train accurate contextual CNN models for forest. This has been a main motivation for using pseudo-targets from existing prediction maps to train our SAR-based prediction models. For further inspiration, we have had to look to alternative topics in the literature.
Another image processing task that has inspired us to consider alternative loss functions and combinations of these is image super-resolution (SR). Single-image SR techniques are trained in a similar fashion as regression models: A full-resolution image is often used as the prediction target and a reduced resolution version of it as predictor data (see e.g. <cit.>), which renders the problem a prediction task that resembles the one in regression. Both the regression and the single-image SR task can be solved with generative models, but it is noteworthy that the literature identifies the SR task as an attempt to achieve two conflicting goals: It should produce images with high perceptual quality, meaning that they should appear natural and realistic. At the same time, it should reconstruct the underlying truth, that is, the high-resolution version of the input image, as closely as possible <cit.>.
The SR literature associates GAN losses and adversarial training with the perceptual quality criterion, as these enforce realistic fidelity and crispness in the generated image. This is achieved at the expense of accurate reconstruction in the MSE sense, since the generator module of the GAN effectively learns to hallucinate the kind of spatial pixel configurations that fools the discriminator module, but does not consider pixel-wise reconstruction.
On the other hand, pixel-wise losses such as error measures based on the L_1 and L_2 norm naturally reduce the reconstruction error, but lead to a blurry appearance of the generated image that is not realistic <cit.>.
This has made us realise that although the Pix2Pix model has established itself as a preferred standard model in image-to-image translation, its GAN loss and adversarial learning approach may be better suited for generative tasks where the result must be visually credible. This is not a concern in the regression of biophysical parameters, where regression performance in terms of root mean square error (RMSE), mean absolute error (MAE) or similar metrics is used to evaluate and rank methods. When training such models, one should therefore consider other loss functions or composite loss functions that support the relevant aspects of the regression task. The SR literature exemplifies ways of combining different loss functions, both regarding which losses to select and how they should interact <cit.>. For instance, different losses can be used sequentially in pretraining and fine-tuning, or they can be used simultaneously as a composite loss.
Although perceptual quality is not of the essence for prediction maps of forest parameters, it may still be worth including loss functions that promote sharpness and visual information fidelity as part of a composite loss. One particular class of loss functions we find interesting to investigate is frequency-aware losses. Their aim is to preserve the high-frequency content of the image, which can e.g. be related to forest boundaries, structure and texture. These have not previously been utilised in forest applications, and to a limited extent in SR, but relevant work is found in the more general computer vision literature, where issues referred to as Fourier spectrum discrepancy, spectral inconsistency, frequency bias or spectral bias have gained a lot of attention <cit.>. These terms relate to CNN-based generative models' lack of ability to capture the image distribution's high-frequency components, leading to blurriness and low perceptual quality.
Some claim that the spectral bias is caused by the up-sampling method, e.g. transposed convolutions, used by the generator network <cit.>. Thus, changing the up-sampling method in the last layer of the generator network has been suggested <cit.>. However, Bjรถrk et al. <cit.> claim that changing the up-sampling procedure in the last layer from transposed convolution to e.g. nearest-neighbour interpolation followed by standard convolution gives ambiguous results. Chen et al. <cit.> argue that the down-sampling modules in the discriminator network of the GAN are the issue, resulting in a generator network that lacks an incentive from the discriminator to learn high-frequency information of the data. However, more recent work <cit.> proves that the frequency bias must be rooted in the GAN's generator and not the discriminator. Hence, there has been a focus on modifying the generative training objective by incorporating a spectral or frequency-aware loss with the traditional spatial loss during training <cit.>.
The observations and lessons from the precursor paper <cit.> and from the literature on SR and generic generative models prompts us to investigate if model accuracy improves when we combine loss functions and whether pretraining of the model is enough or if we can increase model performance with a fine-tuning phase. Among the loss functions we combine is a newly proposed frequency-aware loss: the simple but promising FFT loss <cit.>. It has been shown to perform better than other more complex frequency-aware losses <cit.> on experiments where it was used to train a generative variational autoencoder (VAE) <cit.>. As the FFT loss has previously only been evaluated on VAEs with images from common benchmark datasets <cit.>, we contribute with new insight into its behaviour when employed for other models and tasks.
ยง STUDY AREAS AND DATASETS
This section introduces the datasets used throughout this work, i.e. the ground reference target data, the ALS-derived prediction maps of AGB and SV, and the SAR data from the Sentinel-1 sensors. The ALS-derived prediction maps will interchangeably be referred to as the pseudo-target datasets, while the ground reference data are also referred to as field data, data from the field plots, or true prediction targets.
The AGB dataset comes from the Liwale district in Tanzania. The SV datasets are from three regions in the southeast of Norway: Nordre Land, Tyristrand and Hole.
For Tanzania, both the field data and the ALS data were acquired in 2014, as described in <cit.> and sec:tanzania. The Sentinel-1A satellite was launched in April 2014, and only a single scene acquired in September 2015 was found to comply with our requirements, meaning that it covers one of Liwale's two yearly dry seasons and is close enough in time to the field inventory and the ALS campaigns in Tanzania. For Norway, the acquisition of the ALS data in 2016 and the field inventory in 2017 (see <cit.> and sec:norway) imply that more data are available. Thus, the models we develop for the Norwegian test sites utilise a temporal stack of Sentinel-1A and Sentinel-1B scenes from July 2017.
ยง.ยง Study area and dataset description
ยง.ยง Study area and dataset description
This section briefly describes the Tanzanian and Norwegian study areas, including the ground reference data and related ALS-derived prediction maps. The interested reader is referred to <cit.> and <cit.>, respectively, for in-depth descriptions of the ground reference data and the ALS-derived prediction maps.
ยง.ยง.ยง Tanzanian study area
This work focuses on the same study area as <cit.>, i.e. the Liwale district in the southeast of Tanzania (952'-958'S, 3819'-3836'E). The area of interest (AOI) is a rectangular region with a size of 11.25 ร 32.50 km (WGS 84/UTM zone 36S). fig:mapT shows the location of the AOI in Tanzania and the distribution of the 88 associated field plots. These field plots were collected within 11 L-shaped clusters, each containing eight plots, as seen in fig:mapT.
The field work was performed in January-February 2014, and a circular area of size 707 m^2 represents each sample plot on the ground, i.e. they have a radius of 15 m. We refer to <cit.> for a description of the national level sample design in Tanzania, while <cit.> explain how data from the field work are used to develop large-scale AGB models. Generally, the miombo woodlands of the AOI are characterised by a large diversity of tree species. Measured AGB from the field work ranged from 0 to 213.4 Mg ha^-1 <cit.> with a mean and standard deviation of ฮผ=51.3 and ฯ=45.6Mg ha^-1.
ยง.ยง.ยง Tanzanian ALS-predicted AGB data
We follow <cit.> and use the same ALS data from the Liwale AOI, which was acquired in 2014. We refer to <cit.> for details on the ALS flight campaign, ALS data processing, and the match-up of ALS data with ground reference AGB data from the field plots. After model fitting, the ALS-based AGB model was in <cit.> used to infer a wall-to-wall prediction map for the whole AOI in Liwale. The wall-to-wall map is represented as a grid with square pixels of size 707 m^2. We have gained access to this prediction map and will use it as pseudo-targets to train contextual CNN models for AGB predictions based on SAR data.
ยง.ยง.ยง Norwegian study area
The Norwegian study area consists of three regions shown in fig:mapN and referred to as (A), (B) and (C). All field work was performed during the summer and fall of 2017, initially resulting in 386 circular field plots of shape 250 m^2 distributed over the three regions. We refer to <cit.> for a description of the sampling design and related data properties.
Of the original 386 field plots used for modelling stem volume, a total of 122 plots were not located within polygons of forest stands delineated in the inventories, and thus fell outside the spatial extent of the ALS-predicted SV datasets. We therefore excluded these plots from the analysis. In tab:regions, the column No. of plots (after filtering) indicates the number of field plots included in the current study. The remaining entities of tab:regions, such as geographical coordinates, inventory size, field inventory information and distribution of the dominant tree species in each region, are sourced from <cit.>.
Across the three regions, ground reference values of SV ranged from 33.7 to 659.2 m^3 ha^-1 (ฮผ=252.7 and ฯ=145.5m^3 ha^-1) in one inventory, from 56.1 to 513.3 m^3 ha^-1 (ฮผ=212.6 and ฯ=96.9m^3 ha^-1) in another, and from 29.5 to 563.9 m^3 ha^-1 (ฮผ=253.4 and ฯ=125.8m^3 ha^-1) in the third.
ยง.ยง.ยง Norwegian ALS-predicted SV data
The ALS flight campaigns were performed in 2016 for all three regions of Norway. We refer to <cit.> for a description of how the ALS data were processed, the formulation of the nonlinear local prediction models and the match-up of ALS-derived predictions with ground reference data. After model fitting, maps of SV predictions were generated for all three regions, limited to areas where the forest height exceeded 8-9 meters. We refer to these as the ALS-derived SV prediction maps. In all regions, predictions were made for square pixels of size 250 m^2, i.e. 15.8 mร 15.8 m on the ground. The ALS-derived SV is given in units of m^3 ha^-1.
ยง.ยง Postprocessing of the ALS-derived prediction maps
The ALS-derived prediction maps have been obtained as vector data in polygon format stored as shapefiles. These must be converted to raster data in order to be used as training data for CNN models. This conversion is straightforward for the Tanzania datasets, where all polygons are square and have the same areal coverage. Hence, we map project and sample the SAR data such that the SAR pixels coincide with the polygons of the AGB prediction map.
The process for the Norway dataset is more complicated. fig:discNL shows a section of the ALS-derived SV prediction map retrieved in the Nordre Land municipality. Brown areas show where SV predictions are available, whereas the background (other colours) is retrieved from OpenStreetMap <cit.>.
An overlaid lattice of square grid cells can be seen at all zoom levels of fig:discNL. This lattice represents two things: Firstly, it contributes to the delineation of the polygons in the SV prediction map. In this dataset, SV has been predicted for polygons of varying size and shape, that are delimited by: 1) the grid cells of the lattice, as mentioned above; 2) the commercial forest boundaries that enclose the brown areas; and 3) curves within the brown areas that mark internal forest boundaries and subdivide different forest areas. These are seen at all zoom levels of the figure. Secondly, the lattice coincides with the map grid of the SAR data, since we have map projected and resampled the SAR images to align their map grid with the lattice of the SV polygons. Hence, the lattice grid is identical to the pixel grid we want for our training dataset.
In summary, SV is only predicted in brown areas. Each prediction is associated with a polygon, which can be square if it is only delimited by the lattice and coincides with a lattice grid cell. It can also be of irregular shape and size, if a forest perimeter or an internal forest boundary delimits it. Each polygon is assigned a stem volume, V, and an areal coverage, A. Some of the square lattice cells are fully covered by one or more polygons, while others are only partly covered. Some lattice cells contain one polygon, while others contain two or more.
We refer to this as a multipolygon format, as every lattice grid cell potentially contains multiple polygons.
The multipolygon dataset must be rasterised into a target dataset with the same pixel grid as the SAR predictor data. This means that all polygons within a lattice grid cell must be merged, and the grid cell must be assigned a single SV value and the associated areal coverage. The predicted SV contributed by all intersecting multipolygons is computed as
V_merged = โ_i=1^nV_mp(i),
where mp(i) indicates multipolygon number i and n is the number of multipolygons in a grid cell.
Simultaneously, the total areal coverage is computed as:
A_merged = โ_i=1^nA_mp(i).
The described merging process guarantees that each grid cell is assigned a unique SV, but this value does not necessarily represent a full grid cell of 250 m^2. To quality assure the SV dataset, we remove all SV predictions with less than 40% areal grid cell coverage. This threshold is chosen heuristically to accommodate all three regions, as this removes less than 12% of the and dataset and less than 10% of the dataset. The remaining SV prediction dataset is deemed suitable for the training of CNN regression models. All postprocessing steps are applied using QGIS <cit.>.
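To make the merging and filtering step concrete, the following is a minimal sketch of the two equations above and the 40% coverage criterion. The per-cell lists of (V, A) pairs are hypothetical inputs; in practice they come from intersecting the SV polygons with the SAR-aligned lattice in a GIS tool such as QGIS.

# Sketch of the grid-cell merging and the 40% areal coverage filter.
CELL_AREA = 250.0          # m^2, lattice grid cell size for the Norwegian data
MIN_COVERAGE = 0.40        # heuristic threshold used in the text

def merge_cell(polygons):
    """polygons: list of (V_mp, A_mp) tuples intersecting one grid cell.
    Returns (merged SV, merged area) or None if the cell is rejected."""
    v_merged = sum(v for v, _ in polygons)
    a_merged = sum(a for _, a in polygons)
    if a_merged / CELL_AREA < MIN_COVERAGE:
        return None        # less than 40% coverage: drop this target pixel
    return v_merged, a_merged

# Example: two polygons of 120 and 50 m^2 inside one 250 m^2 cell is kept (68%),
# a single polygon of 60 m^2 is rejected (24%).
print(merge_cell([(2.1, 120.0), (0.8, 50.0)]))
print(merge_cell([(0.5, 60.0)]))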
ยง.ยง SAR data
Low data cost can sometimes be crucial for developing forest parameter monitoring systems suitable for commercial or operational use. This paper utilises SAR data from the freely available Sentinel-1 sensors, which also offer a short revisit time and good coverage of the areas of interest. The SAR images are dual-polarisation (VV and VH) C-band scenes acquired in a high-resolution Level-1 ground range detected (GRD) format with a 10 m pixel size. The SAR data were downloaded from the Copernicus Sentinel Scientific Data Hub[See <https://scihub.copernicus.eu/dhus/#/home>].
For the AOI in Tanzania, we use a single scene acquired on 15 September 2015, as this is the only available product that covers the AOI at a time close to the acquisition of the ALS data and during one of Liwale's two yearly dry seasons. The latter criterion implies that the radar signal achieves sufficient sensitivity to dynamic AGB levels.
We utilise data from the Sentinel-1A and -1B satellites for the three Norwegian regions. Since the field work for the three Norwegian regions was performed during the summer and fall of 2017, we decided to create temporal stacks of Sentinel-1 scenes from July 2017 for each of the three regions.
ยง.ยง SAR data processing and preparation of datasets
The Sentinel-1 GRD product in the Tanzanian dataset was processed with the ESA SNAP toolbox <cit.> following the workflow described in <cit.>.
The GRD products in the Norway dataset have been processed with the GDAR SAR processing software at NORCE Norwegian Research Institute. They are geocoded with a 10 mร 10 m digital elevation model to the same map projection as the ALS-derived SV prediction map and resampled to a pixel resolution of 15.8 m to match the 250 m^2 grid cells of the prediction map.
Since <cit.> showed that it is more advantageous to train CNN-based prediction models with intensity data on the decibel (dB) scale, the stacks of Sentinel-1 scenes for the Norwegian regions are converted to dB format. The final products for the Norwegian regions contain nine features that were extracted from the time series: NDI, mean(VV), mean(VH), min(VV), min(VH), max(VV), max(VH), median(VV), median(VH). NDI denotes the normalised difference index feature, a normalised measure of how much the measured backscatter differs in VV and VH. It is computed as
NDI = (VV-VH)/(VV+VH).
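A minimal sketch of the nine-feature extraction is given below. The arrays `vv` and `vh` are assumed to hold the temporal backscatter stacks in dB with shape (T, H, W); these names are ours, and whether NDI is computed per scene and then averaged, or from the temporal means, is not specified in the text, so the per-scene variant is used here as one possible choice.

import numpy as np

def extract_features(vv: np.ndarray, vh: np.ndarray) -> np.ndarray:
    """Build the nine-band feature stack from Sentinel-1 VV/VH time series."""
    ndi = (vv - vh) / (vv + vh)                      # per-scene NDI
    feats = [
        ndi.mean(axis=0),
        vv.mean(axis=0), vh.mean(axis=0),
        vv.min(axis=0),  vh.min(axis=0),
        vv.max(axis=0),  vh.max(axis=0),
        np.median(vv, axis=0), np.median(vh, axis=0),
    ]
    return np.stack(feats)                           # shape (9, H, W)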
ยง METHODOLOGY
This section describes the proposed methodology to train contextual CNN models for forest parameter prediction. We describe the semi-supervised approach and how training, test and validation datasets are created for each region. In general terms, we introduce the CNN models we use in our work and describe the changes proposed to improve on the performance obtained in <cit.>. This section focuses on a semi-supervised learning strategy where we impute the sparse reference data with data from ALS-derived prediction maps to increase the amount of training data and to create a dataset that allows us to train CNN models. It also explains the multiobjective training approach, which exploits composite loss functions with varying objectives in the pretraining and fine-tuning stages.
ยง.ยง.ยง Overview
The framework of the proposed method is illustrated in fig:pipeline. Initially, the ground reference data, also known as the true prediction targets, are imputed with the ALS-derived prediction map, also called the pseudo-target dataset. Then two binary masks are created, one indicating the pixel positions of the true targets and the other indicating pixels where pseudo-target data are available. The two masks are referred to as ancillary training data. They enable the CNN to learn from discontinuous pseudo-target data and boost learning in regions where ground reference data are available. When the pseudo-target data are spatially continuous and have the same extent as the predictor data, the pseudo-target mask will have a constant value of one. The imputed target dataset and the two masks are combined with regressor data from the sensor. See sec:trtestval and fig:patches for details. fig:pipeline shows that baseline models are pretrained as an initial training step. Following the pretraining stage, fine-tuning may be applied to the baseline CNN models with a composition of different losses. Inference, i.e. production of SAR-derived prediction maps, is done with the resulting models[Code will be available from
<https://github.com/sbj028/DeepConvolutionalForestParameterRegression>].
ยง.ยง Imputing ground reference data with pseudo-targets
The cGAN-based models developed in <cit.> for SAR-based regression trained on ALS-derived prediction maps could not compete with the conventional ALS-based regression model in terms of prediction accuracy. We argue that this is because the cGAN model is not trained on the true prediction targets and therefore inherits too much of the uncertainty in the ALS-based prediction maps. By contrast, the conventional ALS model was fitted directly to all the true prediction targets. To address this shortcoming and improve the performance of CNN models, we propose to impute pseudo-targets from the ALS-derived prediction maps into the dataset of true prediction targets, so that the CNN model is trained on the complete set of available targets. Since the ground reference dataset is much smaller than the prediction maps, this is in practice done by inserting true targets into the pseudo-target prediction maps. Following the imputation process, the Tanzanian dataset comprises less than 0.08% of target values originating from the ground reference data. For the Norwegian datasets, the ground reference data represents less than 0.04%, 0.11%, and 0.13% of the pixels in the respective Nordre Land, Tyristrand, and Hole datasets after the imputation process.
We would generally use all available ground reference data for model training and hyperparameter tuning. However, for model evaluation, we report the performance after cross-validation (CV), where we have trained models on a target dataset that only contains 80% of the true target labels. The remaining 20% are reserved for validation. Results obtained with CV are referred to as CV-RMSE in the result section.
ยง.ยง Preparing the datasets for contextual learning
To create training, test and validation datasets for the Norwegian regions, all true target labels from the field inventory were first inserted into the ALS prediction maps of pseudo-targets. Two binary masks were additionally created: the pseudo-target mask indicates the positions of available ALS-derived predictions. It is needed for masked computation of the loss functions, which are restricted to pixels where prediction targets are available. The ground reference mask holds the positions of the true prediction targets. It is also used in the loss computation, where we weight the loss for the true prediction targets higher than the pseudo-targets.
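A minimal sketch of the imputation step and the two ancillary masks follows. The pseudo-target map is assumed to be a raster with NaN where no ALS-derived prediction exists, and the field-plot positions and values are given as index arrays; all names are ours, not the paper's.

import numpy as np

def build_targets(pseudo, plot_rows, plot_cols, plot_values):
    """Impute true field-plot targets into the pseudo-target map and build
    the pseudo-target and ground reference masks."""
    target = pseudo.copy()
    mask_pt = (~np.isnan(pseudo)).astype(np.float32)   # pseudo-target mask
    mask_gr = np.zeros_like(mask_pt)                   # ground reference mask
    target[plot_rows, plot_cols] = plot_values         # insert true targets
    mask_gr[plot_rows, plot_cols] = 1.0
    target = np.nan_to_num(target, nan=0.0)            # masked out in the loss anyway
    return target, mask_gr, mask_pt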
After having produced the imputed target dataset and the two masks, we follow the workflow shown in fig:patches to create datasets with training, test and validation image patches. The figure illustrates the process for one of the regions, but it is identical for all three Norwegian regions. Firstly, all available data are combined into a stack, including the mosaic of nine feature bands, the imputed target map, and the two masks. Then the entire scene is divided into superpatches by splitting it into blocks with no overlap. A superpatch is defined as a block of pixels that is larger than the image patches we use for training, testing and validation. See tab:patches for an overview of the total number of pixels in each region, the corresponding size of each superpatch and the number of possible superpatches that can be extracted for that region. The pseudo-target mask was used to remove superpatches with no overlap with pseudo-targets. Among all available superpatches, those with at least 10% overlap with the pseudo-target mask were identified as candidates for the test dataset. Fulfilling this criterion, approximately 15% of all available superpatches were randomly selected as test superpatches. These were further split into test patches of 64ร 64 pixels without overlap. Test patches having no overlap with pseudo-targets were discarded. tab:patches shows each region's final number of test patches.
The remaining superpatches were initially used for hyperparameter tuning. See Appendix <ref> for details. After this, all patches not used for testing were combined into training sets for the Norwegian models by splitting superpatches into training image patches of 64ร 64 pixels using 50% overlap and data augmentation with flipping and rotation. Patches with no overlap with pseudo-targets were discarded. tab:patches lists the number of training image patches per region after hyperparameter tuning.
The training, test and validation datasets for Tanzania were created by similar use of superpatches. Since the Tanzanian ALS-derived prediction map covers the whole AOI without any discontinuities, there is no need to check if image patches overlap with pseudo-targets. See tab:patches for details on region sizes in pixels and the number of test and training patches.
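As an illustration of the patch extraction, the sketch below splits a (C, H, W) data stack into 64 x 64 patches with 50% overlap and keeps only patches that overlap the pseudo-target mask. Channel layout, names and the filtering rule follow our reading of the description above.

def extract_patches(stack, mask_pt, patch=64, stride=32):
    """Slide a patch x patch window over the stack with the given stride and
    keep only windows that contain at least one pseudo-target pixel."""
    patches = []
    _, H, W = stack.shape
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            if mask_pt[r:r + patch, c:c + patch].sum() == 0:
                continue                 # no pseudo-targets: discard
            patches.append(stack[:, r:r + patch, c:c + patch])
    return patches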
To evaluate the models, we also created CV target datasets where we used only 80% of the true target labels from the field inventory. We compute the model's performance both when it is trained with all true target labels and in a CV setting where 20% of the true target labels are held out and used for testing. When comparing these results, one must recall that in the former case, the model has seen the test data during training. Moreover, in the CV case the models are trained with fewer true prediction targets.
ยง.ยง Backbone U-Net Implementation
The CNN we use for SAR-based prediction of forest parameters is a modified version of the U-Net architecture in <cit.>, a fully convolutional encoder-decoder network originally developed for biomedical image segmentation. The U-Net consists of a contraction part and a symmetric extraction path, with skip connections between each encoder block and the associated decoder block. The skip connections imply that low-level feature maps from the contraction part are concatenated with high-level feature maps from the extraction part to improve the learning in each level of the network.
fig:unet illustrates the U-Net generator network we use with an encoder-decoder depth of 4. This is the depth used by the Norwegian models, determined by hyperparameter tuning, while the Tanzanian models use a depth of 5. In both cases, we use ResNet34 <cit.> as backbone for the convolutional encoder network and refer to the whole model as a regression U-Net.
The regression U-Net is trained to perform image-to-image translation. That is, given image patches from the input domain, the model translates these into prediction maps of AGB or SV for the same area, guided by the imputed target data.
For the Norwegian datasets, we have modified the first layer of the encoder to enable nine-channel inputs, i.e. input tensors of dimension 9 ร 64 ร 64. The Tanzanian models take three-channel inputs with a shape of 3 ร 64 ร 64. Additionally, the segmentation head was removed from the original U-Net architecture, as our work concerns a regression task and not a segmentation task. Finally, the softmax activation function in the final layer was replaced with a ReLU activation function to ensure non-negative AGB and SV predictions.
The initial layer of the encoder network uses a 7 ร 7 convolution kernel with a stride of 2, followed by a normalisation layer, ReLU activation and a max-pooling operation. This implies that the number of feature channels is increased to 64, while the image dimension is decreased to 16 ร 16 pixels. The following layer combines residual basic blocks, each using a 3 ร 3 convolution, followed by a normalisation layer, ReLU activation, 3 ร 3 convolution and a normalisation layer. The following encoder layers' residual blocks additionally employ down-sampling layers, which double the feature channels and half the spatial resolution of the image. In addition to the skip connections previously mentioned, each residual block uses common short connections <cit.>.
Each block in the extraction part uses upsampling through nearest-neighbour interpolation and combines feature maps from the skip connection. It further processes the feature maps through two identical transformations, each including 3 ร 3 convolutional filtering followed by a normalisation layer, ReLU activation and identity mapping. The upsampling procedure halves the number of feature maps while doubling the spatial resolution. We use the Pytorch implementation of the U-Net model from <cit.> for our regression U-Net, with the above-mentioned modifications.
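The paper does not name the exact PyTorch U-Net implementation it modifies, so the following is only one possible realisation of the described configuration (ResNet34 encoder, nine-channel input, single output channel, no segmentation head, ReLU on the output); the segmentation_models_pytorch package is our choice, and encoder depth and other tuned settings are omitted.

import torch
import segmentation_models_pytorch as smp

class RegressionUNet(torch.nn.Module):
    """Regression U-Net with a ResNet34 encoder and non-negative outputs."""
    def __init__(self, in_channels=9):
        super().__init__()
        self.unet = smp.Unet(
            encoder_name="resnet34",
            encoder_weights=None,
            in_channels=in_channels,   # 9 for the Norwegian feature stack, 3 for Tanzania
            classes=1,                 # one regression output channel
            activation=None,           # no segmentation head / softmax
        )

    def forward(self, x):
        return torch.relu(self.unet(x))   # non-negative AGB/SV predictions

model = RegressionUNet(in_channels=9)
out = model(torch.randn(2, 9, 64, 64))    # -> (2, 1, 64, 64)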
ยง.ยง Pretraining Stage
We follow the training procedure proposed for ESRGAN, a super-resolution model trained with multiple objectives in <cit.>, and divide the training of the U-Net architecture into two stages: pretraining and fine-tuning. In the pretraining stage, we train two baseline CNN models: an ล-based regression U-Net and a cGAN-based generative U-Net. In the fine-tuning stage, described in Section <ref>, we continue to train the baseline models with additional losses.
ยง.ยง.ยง Pixel-aware Regression U-Net
We refer to a regression-type U-Net model optimised on a pixel-wise loss computed between model-inferred predictions and target predictions as a pixel-aware regression U-Net (PAR U-Net). In this work, the PAR U-Net is optimised on the ล loss similar to <cit.>, i.e.
โ_1 = โ_k||Y - F(X)|| = โ_k||Y - ลถ||,
where X and Y represent a corresponding pair of input and target image patches from the training dataset, ลถ=F(X) is the image patch predicted by a CNN model F(ยท), and k is the total number of image patches.
ยง.ยง.ยง cGAN U-Net
In addition to training the modified U-Net with a ล loss, we also train it as a cGAN, as in <cit.>. Conditioned on image patches from the input domain, the cGAN's generator network G is trained to learn the optimal mapping G: ๐ณโ๐ด that generates realistic-looking image patches from the target domain. The generator network also uses the regression U-Net architecture in fig:unet.
While the generator aims to improve on the generation task, the adversarially trained discriminator network D is trained to distinguish between real and false pairs of image patches. A real pair consists of one input image patch and the corresponding target ALS-derived prediction map, whereas a false pair consists of one input image patch and the corresponding prediction map generated by G. Adversarial training of G and D results from optimising the minimax loss function of the so-called Vanilla GAN (VGAN) <cit.>:
min_G max_D โ_VGAN(G,D) = ๐ผ_X,Y[log D(X,Y)] + ๐ผ_X[log(1 - D(X,G(X)))].
However, to address stability issues during training of the VGAN, the least squares GAN (LSGAN) was introduced <cit.>. In a conditional setting, it optimises the objective functions
min_D โ_LSGAN(D) = 1/2๐ผ_X,Y[(D(X,Y)-b)^2] + 1/2๐ผ_X[(D(X,G(X))-a)^2] ,
min_G โ_LSGAN(G) = 1/2๐ผ_X[(D(X,G(X))-c)^2],
where X and Y are image patches from the input and the target domain, a and b are labels for false and real data, while c denotes a value that tricks D into believing that false data are real <cit.>.
Isola et al. <cit.> suggest combining a GAN loss with an ล loss to reduce visual artefacts in the generated images. The contribution of the adversarial loss relative to the ล loss in the overall objective function is weighted by a regularisation parameter ฮฑ, which is determined by hyperparameter tuning. In <cit.>, the LSGAN model was found to outperform a VGAN and a Wasserstein GAN <cit.>. We therefore replace (<ref>) with the following objective function for the generator of the baseline cGAN U-Net:
โ_cGAN(G) = โ_L1 + ฮฑโ_LSGAN(G), ฮฑโ[0,1].
We find an optimal value of ฮฑ=0.01, as in <cit.>.
Similar to <cit.>, we do not change the objective function of D for the baseline cGAN U-Net.
In <cit.>, different architectures were evaluated for the discriminator by altering the patch size N of the receptive fields, ranging from a 1 ร 1 PixelGAN to an N ร N PatchGAN. The discriminator network applies convolutional processing to the pair of image patches to produce several classification responses, which are then averaged to determine whether the pair of image patches is real or false. In a PixelGAN, the discriminator attempts to classify each 1 ร 1 pixel within the image patch as either real or false. In contrast, for the two PatchGAN networks, the discriminator tries to classify each N ร N patch of pixels in the image patch as real or false.
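A minimal sketch of the baseline cGAN training objective is given below. G is the U-Net generator and D a conditional (PatchGAN-style) discriminator that is assumed to take the input patch and a target patch as two arguments; the label convention a=0, b=c=1 is the common LSGAN choice and not stated explicitly in the text.

import torch
import torch.nn.functional as F

def discriminator_loss(D, G, x, y):
    """LSGAN discriminator objective with real label 1 and fake label 0."""
    real = D(x, y)
    fake = D(x, G(x).detach())
    return 0.5 * ((real - 1.0) ** 2).mean() + 0.5 * (fake ** 2).mean()

def generator_loss(D, G, x, y, alpha=0.01):
    """Combined generator objective L_cGAN = L1 + alpha * L_LSGAN (alpha = 0.01)."""
    y_hat = G(x)
    adv = 0.5 * ((D(x, y_hat) - 1.0) ** 2).mean()     # LSGAN term, c = 1
    pix = F.l1_loss(y_hat, y)                         # pixel-wise L1 term
    return pix + alpha * adv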
ยง.ยง Fine-tuning Stage
This section describes the loss functions used to fine-tune the PAR U-Net model and the cGAN U-Net model.
ยง.ยง.ยง Pixel- and frequency-aware regression U-Net
To enforce the regression U-Net to focus on the alignment of the image frequency components during training, we propose to add a frequency-aware loss to the pixel-aware regression model or the adversarial cGAN model. We choose to employ the FFT loss from <cit.>, which has shown promising results and is complementary to existing spatial losses. It is formulated as
โ_FFT = 1/kโ(imag[โฑ(Y)]- imag[โฑ(ลถ)])^2 + 1/kโ(real[โฑ(Y)]- real[โฑ(ลถ)])^2,
where โฑ denotes the fast Fourier transform. The FFT loss uses MSE to enforce alignment of the real and imaginary parts of target and generated image patches in the frequency domain.
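A minimal PyTorch sketch of this frequency-aware term is shown below; averaging over the batch replaces the explicit 1/k normalisation, which is a simplification on our part.

import torch

def fft_loss(y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """MSE between the real and imaginary parts of the 2-D FFTs of target
    and predicted patches, both of shape (B, 1, H, W)."""
    Y, Y_hat = torch.fft.fft2(y), torch.fft.fft2(y_hat)
    return ((Y.real - Y_hat.real) ** 2).mean() + ((Y.imag - Y_hat.imag) ** 2).mean()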
The total composite loss function becomes:
โ_tot = โ_1 + ฮฑโ_LSGAN + ฮณโ_FFT, ฮฑ, ฮณโ[0,1].
A regularisation parameter ฮณ is associated with the FFT loss to adjust its influence on โ_tot, while ฮฑ is still associated with the LSGAN loss. All objective functions we use in the fine-tuning can be formulated with โ_tot, as we can ablate it by setting ฮฑ=0 or ฮณ=0.
The baseline PAR U-Net model is fine-tuned on either the combined ล and LSGAN loss (โ_tot with ฮณ=0), the combined ล and FFT loss (โ_tot with ฮฑ=0), or the full โ_tot loss. The baseline cGAN U-Net model is fine-tuned on โ_tot. We refer to Appendix <ref> for an extensive evaluation of model settings and hyperparameters used in the pretraining and fine-tuning phase.
ยง.ยง Masked Loss Computation on Discontinuous Data
Due to the discontinuity of the ALS-derived SV prediction maps from Norway, there is not a target pixel for every pixel of the continuous SAR predictor dataset. To remedy this, we introduce masked loss computation. In this way, the convolutional processing of the predictor data creates a wall-to-wall map of model predictions, but in the comparison with the target dataset, pixels without prediction targets are masked out and excluded from the learning process. In addition to masking the pseudo-targets, we want to boost learning for pixels and patches with true prediction targets, hence reducing the impact of pseudo-targets relative to true targets.
As shown in fig:pipeline, the training dataset contains two binary masks of the same size as the input and target data patches: the ground reference mask and the pseudo-target mask, the latter of which contains only ones for the Tanzanian AOI. Masked losses are computed through simple Hadamard products, i.e. element-wise multiplication, denoted โ. For instance, the masked ล loss becomes:
โ_1^โณ = โ_1(MโY, Mโลถ) = 1/Nร Nโ_i,j(m_i,jร|y_i,j - ลท_i,j|),
where M denotes either of the two masks, with elements m_i,j, and y_i,j and ลท_i,j are pixels of the target patch ๐ and the predicted target patch ๐ฬ, whose size is Nร N. Similarly, โ_FFT can be computed on โฑ(โณโ๐ด) and โฑ(โณโลถ). Also the discriminator can be fed with masked patches, either the real pair (MโX, MโY) or the fake pair (MโX, MโG(X)). With this input to D, the losses in (<ref>) and (<ref>) generalise to the masked case.
Let loss functions masked with the pseudo-target mask and the ground reference mask be denoted โ^โณ_pt and โ^โณ_gr, respectively. To weight the true targets and the pseudo-targets differently, the total loss is decomposed as:
โ_tot = ฮดโ_tot^โณ_gr + โ_tot^โณ_pt = ฮดโ_1^โณ_gr + โ_1^โณ_pt + ฮณ(ฮดโ_FFT^โณ_gr + โ_FFT^โณ_pt) + ฮฑ(ฮดโ_LSGAN^โณ_gr + โ_LSGAN^โณ_pt) ,
with ฮฑ, ฮณโ[0,1] and true target weighting parameter ฮดโซ 1, found from hyperparameter tuning (see Appendix <ref>). A masked loss decreases when the mask has many zeros, which is as intended, since the amount of true or pseudo-targets contained in a patch should determine its impact.
This is inspired by pseudo-labelling <cit.>, a related semi-supervised learning algorithm for categorical prediction. It recommends to balance the losses computed over pseudo-labels (the categorical equivalent to the pseudo-targets in the regression task) and true labels, as there are generally much more pseudo-labels than true labels. In our training paradigm, this translates to boosting the masked loss computed over the true targets.
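The masked, ฮด-weighted decomposition can be sketched as follows for the L1 term; the FFT and LSGAN terms are weighted in the same way, and the value of ฮด shown here is illustrative only, since the tuned value is given in the appendix.

import torch

def masked_l1(y_hat, y, mask):
    """L1 loss evaluated only where the mask is non-zero."""
    return (mask * (y_hat - y).abs()).mean()

def total_l1_loss(y_hat, y, mask_gr, mask_pt, delta=100.0):
    """Boost the loss on true ground reference targets by delta >> 1."""
    return delta * masked_l1(y_hat, y, mask_gr) + masked_l1(y_hat, y, mask_pt)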
ยง EXPERIMENTAL RESULTS
This section presents experimental results of the prediction models trained on the Tanzanian and Norwegian datasets. We provide results on both regional and pan-regional models for the Norwegian datasets. The pan-regional models have been trained on all available training datasets from Nordre Land, Tyristrand and Hole. The regional models were trained on a dataset from either Nordre Land, Tyristrand or Hole, and evaluated on the test data from the same region they were trained on. Appendix <ref> provides details on hyperparameter tuning and settings used during model training.
Results are given both for the pretraining stage, i.e. the baseline models, and the fine-tuning stage as root mean square error (RMSE) and mean absolute error (MAE). Models with a low RMSE and MAE are preferred, as indicated by the symbol โ in the tables. Models have been trained in two ways: (i) using all true targets imputed with pseudo-targets; (ii) in cross-validation (CV) mode
by rotationally imputing 80% of the target labels with the available pseudo-targets. For the latter case, a CV-RMSE is reported as ฮผ (mean) ยฑฯ (standard deviation).
In the evaluation, we report model performance on the true targets and unseen test dataset. Since the CNN models work on image patches, model predictions are inferred by processing the AOI as 64 ร 64 image patches with 50% overlap. A wall-to-wall prediction map is created by mosaicking patches through linear image blending, using the p-norm with a heuristic value of p=5, as proposed in <cit.>.
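The patch mosaicking can be sketched as a weighted blending of overlapping predictions. The weighting shown below, where the two edge distances of each pixel are combined through a p-norm-style soft minimum with p=5, is our reading of the blending scheme; the exact formulation follows the reference cited above.

import numpy as np

def patch_weights(patch=64, p=5):
    """Per-pixel blending weight: soft minimum of the distances to the
    horizontal and vertical patch borders, combined with a p-norm (p=5)."""
    d = np.minimum(np.arange(patch) + 1, patch - np.arange(patch)).astype(float)
    dx, dy = np.meshgrid(d, d, indexing="ij")
    return (dx ** (-p) + dy ** (-p)) ** (-1.0 / p)

def mosaic(patches, positions, out_shape, patch=64, p=5):
    """Blend overlapping patch predictions into a wall-to-wall map."""
    num, den = np.zeros(out_shape), np.zeros(out_shape)
    w = patch_weights(patch, p)
    for pred, (r, c) in zip(patches, positions):
        num[r:r + patch, c:c + patch] += w * pred
        den[r:r + patch, c:c + patch] += w
    return num / np.maximum(den, 1e-8)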
ยง.ยง Results: Tanzania models
The Tanzanian test set consists of 14 patches of pseudo-target AGB predictions and true targets from the 88 field plots. Quantitative results in terms of model performance on both the pseudo-target dataset and on the true targets are given in tab:restanz. Metrics for the original ALS-derived AGB model, see <cit.>, and the best sequential cGAN model from <cit.> are also provided. Note that the best cGAN model from <cit.> was trained only on pseudo-targets, without access to true targets. We do not report the performance of the original ALS-derived AGB model and the sequential cGAN model on the test dataset, or the ฮผยฑฯ CV-RMSE on the true targets, as these metrics were not provided in <cit.>. All units in tab:restanz are of Mg ha^-1. Numbers in boldface indicate the best-performing model per column, while (โ) indicates that a model performs better than the baseline ALS model.
ยง.ยง Results: Norwegian models
The Norwegian models have all been trained to translate Sentinel-1 data into ALS-derived SV predictions for commercial forests. tab:patches shows that data from Nordre Land is over-represented in the Norwegian dataset: approximately 80% of the training image patches are from Nordre Land, while only 6% are from the smallest region. Four types of Norwegian models were developed: one pan-regional model that represents all three regions and separate regional models for Nordre Land, Tyristrand and Hole. The pan-regional models were trained on pooled training data from all regions, but evaluated separately on each region's pseudo-target data and true target data. The three regional models were both trained and evaluated on data from each separate region.
Since Nordre Land is over-represented in the dataset, we wish to investigate if the pan-regional models evaluated on Nordre Land perform similarly to the corresponding regional models developed for Nordre Land.
On the other hand, as the available data from both Tyristrand and Hole are limited, we wish to compare the respective regional models to the pan-regional model. The aim is to identify and quantify any difference in performance and, if possible, to draw conclusions about transferability and impacts of dataset size.
As for Tanzania, different CNN models were evaluated against each other by comparing their performance on unseen test patches of pseudo-target data and on true targets of field-measured SV. The number of field plots, i.e. true targets, in each region can be found in tab:regions. The Hole test set consists of 14 patches of pseudo-target data, Tyristrand of 25 and Nordre Land of 87 test patches, each of 64 ร 64 pixels.
Quantitative results from the evaluation of the pan-regional Norwegian models are listed in tab:resnatnor, while tab:reslocnor lists results for the regional models. For the regional models, only results for the baseline PAR U-Net model and the model pretrained on ล and fine-tuned with a composite loss are given, as these have proven to be robust on both the Tanzanian data and the pan-regional Norwegian dataset. Metrics obtained with the original ALS-derived SV model have been computed for each region by extracting the area-weighted mean of ALS-derived SV predictions at the location of each field plot. The CV-RMSE for the original ALS-derived SV models were not provided to us for this work and are therefore not given in tab:resnatnor or tab:reslocnor. All metrics in both tables are in units of m^3 ha^-1. Boldface numbers in a column indicate the model that performs best. A (โ) symbol indicates that a model performs better than the baseline ALS model.
ยง DISCUSSION
Six new CNN-based regression models (two baseline and four fine-tuned ones) have been developed to improve earlier work on the Tanzanian dataset using the semi-supervised imputation strategy proposed herein. Above all, tab:restanz shows that the best model, pretrained on the ล loss and then fine-tuned, performs better than the conventional statistical ALS-based AGB model proposed in <cit.>, and all other Tanzanian models, on the field data. The CNN model that most accurately recreates the AGB pseudo-target data is also pretrained on the ล loss and fine-tuned on a combined loss, see tab:restanz. The results on the Tanzanian dataset show the potential of a two-stage training paradigm and of frequency-aware training to reduce the impact of spectral bias.
Furthermore, the results in tab:restanz show that the baseline PAR U-Net model performs better than the baseline cGAN U-Net model on both the pseudo-target and the true target data. These findings align with existing knowledge in the field of image super-resolution: It is disadvantageous to adopt a purely adversarial training strategy on tasks that require high reconstruction accuracy in terms of RMSE. In this case, employing a simpler pixel-wise regression U-Net is better. The proposed baseline cGAN U-Net model is most similar to the sequential cGAN model proposed in <cit.>. tab:restanz shows that the proposed semi-supervised imputation strategy improves the CNN models' performance in AGB prediction.
Several new CNN models are also proposed for SV prediction on the Norwegian datasets. Our approach is to train pan-regional models by combining data from all three Norwegian regions (Nordre Land, Tyristrand and Hole), followed by evaluation on test and field data from each individual region. The purpose of the pan-regional models is to develop models that generalise well to more than one region, which is particularly advantageous for regions with little training data. As a result, these models hold the potential for substantial cost-savings if field work can be reduced during operational inventories.
According to tab:resnatnor, the baseline PAR U-Net model outperforms the other models in accurately recreating the pseudo-target SV data. We advise against using the baseline cGAN U-Net model, or the pan-regional model that combines cGAN-based pretraining with fine-tuning on a combined loss, when training CNN models for SV prediction. As the models are evaluated on RMSE and not perceptual quality, the results suggest that adversarial training should be avoided in the initial training phase.
As demonstrated in tab:resnatnor, fine-tuning and the composition of losses generally improve model performance with respect to field data, with few exceptions. Moreover, all pan-regional fine-tuned models perform better than the conventional statistical ALS-based models derived for either , , or . Based on the CV-RMSE, we recommend using fine-tuned models that are pretrained on ล and fine-tuned on either the combined ล and loss or on . For instance, the model fine-tuned on the combined ล and loss performs best on the Hole field data, whereas the model fine-tuned on performs best on Tyristrand field data.
In addition to the pan-regional models trained on the whole Norwegian dataset, regional models were developed for this work. Unlike the pan-regional models, these were only trained and evaluated on a specific region. tab:patches shows a significant difference in the amount of available training data among the regions. The region has the least data, followed by , while has the most data. Consequently, the regional model for the latter region has been trained on almost the same training data as the pan-regional model. For (and ), the regional models are trained on only a fraction of the training data available for the pan-regional model, which could impact their relative performance.
Based on the discussion above, we train the following two models: a regional PAR U-Net model and a regional model pretrained on ล and fine-tuned on the loss. The fine-tuned model was chosen over the other three fine-tuned variants, as it has proven to be robust on both the Tanzanian data and the pan-regional Norwegian dataset.
In general, comparing the results of the pan-regional models in tab:resnatnor to the regional models in tab:reslocnor, we observe that the pan-regional Norwegian models perform better than all regional models, with one exception: the regional model pretrained on the ล objective and fine-tuned on the objective performs better than the pan-regional model on the field data of the corresponding region. These results show the potential of training models that utilise all available data from nearby regions.
To our knowledge, this is the first time that the loss has been evaluated outside the natural image domain, i.e. on remote sensing images. Our results from both the Tanzanian and Norwegian models show that this simple objective function effectively reduces the impact of spectral bias and thereby improves the performance of the CNN models.
ยง CONCLUSION
Through the use of a semi-supervised imputation strategy, we demonstrate the ability of contextual generative CNN models to accurately map C-band data to target data consisting of spatially disjoint polygons of ALS-derived prediction maps. The generalisation ability of our modelling approach was evaluated for AGB prediction in the Tanzanian miombo woodlands and for SV prediction in three managed boreal forests in Norway. Our results show that the models developed using the imputation strategy achieve state-of-the-art performance compared to previous studies, suggesting that the contextual C-band SAR-based models outperform conventional statistical ALS-based models in accurately predicting the target labels of ground reference data.
Furthermore, we demonstrate that a two-phased learning strategy, which includes pretraining with a pure pixel-wise regression U-Net followed by either a regression cGAN model or a pixel- and frequency-aware regression U-Net in the fine-tuning phase, improves model performance. We argue that pixel-aware pretraining forces the model to focus on pixel-to-pixel relationships before it learns more general relationships.
ยง ACKNOWLEDGEMENTS
We gratefully acknowledge the Norwegian University of Life Sciences, the Tanzania Forest Services Agency, Prof. Eliakimu Zahabu and coworkers at Sokoine University of Agriculture, Viken Skog and the Swedish University of Agricultural Sciences for participation in field work and provision of in situ measurements, ALS-derived AGB and SV products. Special thanks to Prof. Hรฅkan Olsson for providing ALS data acquired by SLU and to Mr. Svein Dypsund at Viken Skog for providing in situ measurements in Norway. Many thanks to Assoc. Prof. Benjamin Ricaud for valuable input on relevant experience from the field of single-image super-resolution.
Sara Björk
received the M.Sc. degree in Applied Physics and Mathematics from UiT The Arctic University of Norway, in 2016, where she is currently pursuing the Ph.D. degree in physics. Since 2022, she has been working as a system developer in the Earth Observation Team at KSAT Kongsberg Satellite Services. Her research interests include computer vision, image processing, and deep learning, with a particular focus on developing methodologies that leverage deep learning techniques and remote sensing data for forest parameter retrieval.
Stian Normann Anfinsen
received the M.Sc. degree in communications, control and digital signal processing from the University of Strathclyde, Glasgow, UK (1998) and the Cand.scient. (2000) and Ph.D. degrees (2010) in physics from UiT The Arctic University of Norway (UiT), Tromsรธ, Norway. He is a faculty member at the Dept. of Physics and Technology at UiT since 2014, currently as adjunct professor in machine learning. Since 2021 he is a senior researcher with NORCE Norwegian Research Centre in Tromsรธ. His research interests are in statistical modelling and machine learning for image and time series analysis.
Michael Kampffmeyer
is an associate professor and head of the Machine Learning Group at UiT The Arctic University of Norway. He is also an adjunct senior research scientist at the Norwegian Computing Center in Oslo. His research interests include explainable AI and learning from limited labels (e.g. clustering, few/zero-shot learning, domain adaptation and self-supervised learning). Kampffmeyer received the Ph.D. degree from UiT in 2018. He has had long-term research stays in the Machine Learning Department at Carnegie Mellon University and Berlin Center for Machine Learning at the Technical University of Berlin. He is general chair of the annual Northern Lights Deep Learning Conference.
Erik Næsset
received the M.Sc. degree in forestry and the Ph.D. degree in forest inventory from the Agricultural University of Norway, Ås, Norway, in 1983 and 1992, respectively. His major field of research is forest inventory and remote sensing, with particular focus on operational management inventories, sample surveys, photogrammetry, and airborne LiDAR. He has played a major role in developing and implementing airborne LiDAR in operational forest inventory. He has been the leader and coordinator of more than 60 research programs funded by the Research Council of Norway, the European Union, and private forest industry. He has published around 250 papers in international peer-reviewed journals. His teaching includes lectures and courses in forest inventory, remote sensing, forest planning, and sampling techniques.
Terje Gobakken
is professor in forest planning and has published more than 190 peer-reviewed scientific articles related to forest inventory and planning in international journals. He has been working at the Norwegian National Forest Inventory and participated in compiling reports of emissions and removals of greenhouse gases from land use, land-use change and forestry in Norway. He has coordinated and participated in a number of externally funded projects, including international projects funded by for example NASA and EU, and has broad practical and research-based experience with development of big data and information infrastructures for forest inventory, planning and decision support.
Lennart Noordermeer
received the M.Sc. degree in forestry and the Ph.D. degree in forest inventory from the Norwegian University of Life Sciences (NMBU) in 2017 and 2020, respectively. He currently has a researcher position in the Forest Inventory Group at the Faculty of Environmental Sciences and Natural Resource Management, NMBU. His research focuses on operational forest inventory, with emphasis on the use of data from forest harvesters as well as the use of multitemporal remotely sensed data for forest productivity estimation.
§ APPENDIX
ยง.ยง Hyperparameter tuning for model selection
Extensive hyperparameter tuning was performed on the training and validation datasets for the Tanzanian and Norwegian models. Experiment tracking with Weights & Biases sweeps <cit.> was used, employing grid search during both pretraining and fine-tuning phases. Unlike most studies in forestry deep learning research <cit.>, the Adam optimiser was used for all proposed models. Hyperparameter tuning focused on finding optimal batch size (BS), ฮฒ_1 value for the Adam optimiser, learning rate (lr), encoder network, encoder/decoder depth, discriminator network, cGAN loss, and number of epochs. Three discriminator networks were evaluated: 1 ร 1 PixelGAN, 16 ร 16 PatchGAN, and 34 ร 34 PatchGAN. The two PatchGAN networks were created following <cit.> by modifying discriminator depth to achieve receptive field sizes of 16 ร 16 or 34 ร 34. Hyperparameter tuning also evaluated normalisation layers, two different weight initialisation methods, and selection of objective functions for the pretraining and fine-tuning stages.
5-fold cross-validation (CV) was used for hyperparameter tuning of the Tanzanian and Norwegian models. For the Norwegian models, the training and validation datasets used for CV consisted of image patches from all three Norwegian regions. Superpatches not used for the test sets were divided into training and validation splits, with 80% for training and 20% for validation in each fold of the 5-fold CV. Superpatches were further divided into training (or validation) image patches of 64 × 64 pixels, with 50% overlap allowed for the training data. Data augmentation with flipping and rotation was applied to increase the amount of training data. The validation loss, based on mean and median RMSE, was used to identify optimal hyperparameters and model settings for both the Tanzanian and Norwegian models. tab:hypbaselinerange lists the evaluated hyperparameters and search ranges for the Tanzanian and Norwegian models, where normalisation "None" refers to no normalisation layers.
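To make the patch-generation step concrete, the sketch below illustrates how 64 × 64 training patches with 50% overlap and flip/rotation augmentation could be produced from a superpatch; the function names and the assumption that a superpatch is an (H, W, C) array are ours, not taken from the authors' pipeline.

```python
import numpy as np

def extract_patches(superpatch: np.ndarray, size: int = 64, overlap: float = 0.5):
    """Split an (H, W, C) superpatch into size x size patches with the given overlap."""
    stride = max(1, int(size * (1.0 - overlap)))
    h, w = superpatch.shape[:2]
    patches = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            patches.append(superpatch[top:top + size, left:left + size])
    return patches

def augment(patch: np.ndarray):
    """Flipping and 90-degree rotations, as used to augment the training data."""
    out = []
    for k in range(4):                      # 0, 90, 180 and 270 degree rotations
        rotated = np.rot90(patch, k)
        out.append(rotated)
        out.append(np.fliplr(rotated))      # horizontal flip of each rotation
    return out
```

Validation patches would be extracted without overlap or augmentation, in line with the procedure described above.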
ยง.ยง.ยง Summary of findings from hyperparameter tuning
We observed that the three ResNet networks used as convolutional encoder had similar performance, but ResNet34 was the most accurate and was therefore selected. We found that cGAN-based models optimised on the objective were more accurate than those optimised on the VGAN objective, and the loss was therefore selected. The evaluation of different networks showed that the Tanzanian adversarial models preferred the PixelGAN, while the 16 ร 16 PatchGAN was preferred by the Norwegian adversarial models.
We also investigated the impact of the network's normalisation layers on model performance. Previous work has argued that using batch normalisation (BN) in the network might harm the inherent range flexibility of the features <cit.>. The authors suggest removing the BN layers from the model architecture to increase performance and reduce computational complexity for reconstruction tasks that optimise e.g. the RMSE. Motivated by <cit.>, we compared BN layers to instance normalisation (IN) layers and no normalisation. Our experiments did not confirm that normalisation, and BN in particular, should be avoided. On the contrary, most models preferred BN or IN.
The potential of transfer learning was investigated by initialising the Tanzanian or Norwegian networks with or without ImageNet weights. Use of pretrained ImageNet weights requires that the input image patches be scaled to the range [0, 1] and normalised with the ImageNet mean and standard deviation, which implies that the data are no longer in dB form. However, experiments showed that randomly initialised weights gave better performance than pretrained ImageNet weights. This confirms the claims from <cit.> that avoiding normalisation and keeping the input data in dB form results in improved prediction of ALS-derived AGB maps. Thus, no models in this study employ pretrained ImageNet weights.
In <cit.>, the regularisation weight ฮฑ was applied on the ล part of the generator loss function and evaluated for ฮฑโ[0,100]. As explained in sec:cgan and shown in Eq. (<ref>), we apply ฮฑ on and combine it with ล, to form . We evaluated ฮฑ = [0.01, 0.1, 1].
In accordance with <cit.>, we found that models trained with ฮฑ = 0.01 performed best.
Initial experiments showed that the ล loss magnitude varies around 3 ร10^1, while the loss approximates 1 ร 10^0. In contrast, the loss assumes magnitudes around 3 ร 10^8. To balance the impact of the loss with the other loss functions, we evaluated the following range for its regularisation weight: ฮณ = [1e-8, 3e-8, 5e-8, 7e-8, 9e-8, 1e-7]. Our experiments show that ฮณ = 9e-8 or 1e-7 is best.
The true target weight ฮด, used in Eq. (<ref>), must be large to compensate for the strong imbalance between the numbers of true targets and pseudo-targets. We experimented with ฮด=[100, 200, 300, 400, 500] for the Tanzanian models and ฮด=[200, 300, 400, 500, 600, 700] for the Norwegian models.
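For clarity, the role of the three weights can be summarised in the small sketch below. The individual loss terms are defined in the paper's equations and appear here only as precomputed scalar tensors; the function name and the exact composition are illustrative assumptions, with default weights taken from the values found to work best above.

```python
import torch

def weighted_total_loss(base_term: torch.Tensor,   # pixel-wise regression term (magnitude ~3e1)
                        alpha_term: torch.Tensor,  # term scaled by the regularisation weight alpha
                        freq_term: torch.Tensor,   # large-magnitude term (~3e8) rescaled by gamma
                        field_term: torch.Tensor,  # loss on the scarce true (field) targets
                        alpha: float = 0.01,
                        gamma: float = 1e-7,
                        delta: float = 500.0) -> torch.Tensor:
    # alpha balances the secondary term against the pixel-wise term, gamma rescales
    # the large-magnitude term, and delta compensates for the imbalance between
    # true targets and pseudo-targets.
    return base_term + alpha * alpha_term + gamma * freq_term + delta * field_term
```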
The selected hyperparameters for the models pretrained on the Tanzanian dataset and the Norwegian dataset are shown in tab:hypbaseline. The resulting hyperparameters for the fine-tuned Tanzanian models and the fine-tuned pan-regional Norwegian models are shown in tab:hypfine.
The hyperparameters selected for the regional Norwegian models are similar to those of their pan-regional counterparts. The exceptions are the regional PAR U-Net baseline model, which was trained for 250 epochs instead of 200, and the regional model pretrained on the ล loss and fine-tuned on the loss, which was fine-tuned for 250 epochs instead of 100. The tables report the number of epochs as E_p+E_f, denoting E_p epochs of pretraining and E_f in fine-tuning.
|
http://arxiv.org/abs/2306.09324v1
|
20230615175728
|
Single-Stage Visual Query Localization in Egocentric Videos
|
[
"Hanwen Jiang",
"Santhosh Kumar Ramakrishnan",
"Kristen Grauman"
] |
cs.CV
|
[
"cs.CV"
] |
Visual Query Localization on long-form egocentric videos requires spatio-temporal search and localization of visually specified objects and is vital to build episodic memory systems. Prior work develops complex multi-stage pipelines that leverage well-established object detection and tracking methods to perform VQL. However, each stage is independently trained and the complexity of the pipeline results in slow inference speeds. We propose VQLoC, a novel single-stage VQL framework that is end-to-end trainable. Our key idea is to first build a holistic understanding of the query-video relationship and then perform spatio-temporal localization in a single-shot manner. Specifically, we establish the query-video relationship by jointly considering query-to-frame correspondences between the query and each video frame and frame-to-frame correspondences between nearby video frames. Our experiments demonstrate that our approach outperforms prior VQL methods by 20% accuracy while obtaining a 10× improvement in inference speed. VQLoC is also the top entry on the Ego4D VQ2D challenge leaderboard. Project page: <https://hwjiang1510.github.io/VQLoC/>
ยง INTRODUCTION
Episodic memory, a specific type of long-term memory in humans, facilitates the retrieval of our past experiences such as their time, location, related context, and activitiesย <cit.>. This form of memory is pivotal to the progressive evolution of embodied intelligence due to its foundational role in enabling long-horizon reasoningย <cit.>. Nevertheless, human memory can falter in accurately recalling the details of daily activities - such as where we kept our keys, or who we met while going for a run. This perceptual overload can be mitigated through the utilization of head-worn cameras or AR glasses to capture our past experiences and the development of systems that can retrieve information. To this end, the episodic memory benchmarkย <cit.> is recently proposed, which aims to develop systems that enable super-human memory by allowing humans to query about their past experiences recorded on head-worn devices. These systems also have the potential to serve as a conduit for learning from past experiences and dataย <cit.>, promoting long-term robot learningย <cit.>.
We focus on Visual Query Localization (VQL), a vital task in the episodic memory benchmark that aims to answer the question โwhere did I last see object X?". Specifically, given a long-form egocentric video representing past experiences from a human camera-wearer, the goal is to search and localize a visual query object, specified as a visual image crop, in space-time (see Fig.ย <ref>, left). Addressing VQL is of paramount importance, as query object localization serves as a prerequisite for downstream object-centric reasoning tasks.
VQL presents a myriad of challenges to existing computer vision systems. For example, current object detection systems localize pre-defined object categories on well-curated internet-style imagesย <cit.>.
In contrast, VQL requires localizing an open-set of objects specified as visual queries. Current object tracking systems expect to be initialized with a bounding box of the object next to its appearance in the videoย <cit.>. However, the visual query crop of VQL originates from an image outside of the video being queried; that is, there may not be an
exact match and โclose frame" for the visual query.
Complicating matters further, the egocentric nature of the VQL task presents its own unique challenges. Compared with the visual query, the object may appear in the video under significantly varying orientations, sizes, contexts, and lighting conditions, experience motion blur and occlusions. Finally, egocentric videos can span several minutes, hours, or days in real-world applications, while the object itself may appear only for a few seconds, resulting in a โneedle-in-the-haystack" problem.
Prior work has attempted to address VQL through a bottom-up framework with three stagesย <cit.>: i) in each video frame, detect all objects and conduct pairwise comparisons with the visual query to obtain the proposal that is most similar to the query; ii) identify the similarity score peaks throughout the video;
and iii) perform bidirectional tracking around the most recent peak to recover the spatio-temporal response. Although this framework is grounded in well-established object detection and tracking approaches, it relies heavily on the first stage to detect the object by independently looking at each frame. While this may be possible if the object appears clearly in the video, it is often not the case due to the egocentric nature of images. Errors in frame-level object detection can potentially cause the entire system to fail since the framework is not end-to-end differentiable and errors in earlier stages may not be rectifiable in later stages. Moreover, the methods suffer from a slow inference speed due to the high complexity of pairwise comparison with redundant object proposals.
To address these limitations, we propose VQLoC (Visual Query Localization from Correspondence), a novel single-stage framework for VQL.
Our key insight in is that a holistic understanding of the query-video relationship is essential to reliably localize query objects in egocentric videos.
Accordingly, jointly models the query-to-frame relationship between the query and each video frame and frame-to-frame relationships across nearby video frames (see Fig.ย <ref>, right), and then performs spatio-temporal localization in a single-stage and end-to-end trainable manner.
Specifically, we establish the query-to-frame relationship by extracting image features for the visual query and each video frame using a ViT backbone pretrained with DINOย <cit.> and using a cross-attention transformer moduleย <cit.> to establish correspondences between the image regions in the query and video frame. We then propagate these correspondences over time using a self-attention transformer moduleย <cit.> that exploits the frame-to-frame relationships resulting from the temporal continuity of videos to capture the overall query-video relationship. Finally, we use a convolutional prediction head that performs spatio-temporal localization by utilizing the query-video relationship to make frame-level predictions. Critically, our model operates in a single stage, i.e., there are no intermediate localization outputs with dedicated post-processing steps, and is end-to-end trainable since it uses only differentiable modules to obtain the final prediction.
has several benefits when compared with the prior stage-wise methods. Unlike prior work that explicitly generates object proposals in the video frame and compares them with the visual query, implicitly establishes the query-frame relationship by performing attention-based reasoning between the visual query features and the video frame features.
This approach effectively utilizes image regions of the background and non-query objects as contextual information for inference. Additionally, our implicit query-frame relationships are significantly faster to compute than explicitly generating proposals and performing pairwise comparisons, which is critical for real-world episodic memory applications. Finally, is end-to-end trainable, which results in much better performance compared to prior work.
We evaluate on Ego4D Visual Query 2D Localization benchmark. achieves the state-of-the-art on this benchmark, outperforming previous methods by 20%, and is also ranked first on the public Ego4D VQ2D challenge leaderboard.[Ego4D VQ2D challenge leaderboard: <https://eval.ai/web/challenges/challenge-page/1843/leaderboard>] Furthermore, achieves real-time inference with 36 Frames Per Second (FPS), which is 10 ร faster than prior work.
ยง RELATED WORK
Object detection.
Prior work has made impressive progress in building state-of-the-art object detectors for localizing objects in imagesย <cit.>. These methods typically detect objects with pre-specified object categories and are evaluated on internet-style and human-curated image datasetsย <cit.>. While these methods relied significantly on having sufficient training data, recent work has extended them to incorporate long-tail object categoriesย <cit.> and free-form language-based detectionย <cit.>. Prior work has also proposed methods for one-shot detection, which build on the success of object detection methodsย <cit.>. They leverage objectness priors by calculating similarities between visual queries and region proposals. However, all of the methods work on well-curated images, e.g. the COCO datasetย <cit.>.
Our work focuses on the VQL task, which is a challenging version of one-shot detection on long-form egocentric videos. Unlike image-based one-shot detection where the object is assumed to be present in the image, video-based one-shot detection requires us to identify in which frame the object appears for
spatio-temporal localization.
Object Tracking.
There is rich literature on object tracking methods that track a static or dynamic object throughout a given videoย <cit.>. The object is typically specified as a bounding box in a video frame, and the goal is to track it through the subsequent frames in the same videoย <cit.>.
Short-term tracking methods are developed to track objects till they completely disappear from the video frameย <cit.>, while long-term tracking methods are capable of recovering the object track when it re-appears in the videoย <cit.>. Tracking methods are initialized with an object bounding box close to its appearance in the video, while visual queries in VQL may originate from outside the video. This can pose a challenge for tracking methods. We compare with a state-of-the-art long-term tracker in our experimentsย <cit.>.
Visual Query Localization (VQL).
The recently proposed episodic memory benchmark introduces the VQL task on egocentric videos, where the goal is to spatio-temporally localize a query object specified as a visual image cropย <cit.>. Prior work has tackled VQL using a three-stage detection and tracking framework. First introduced as baseline in Ego4D,
this approach performs frame-level detection, identifies the most recent โpeak" in the detections across time, and performs bi-directional trackingย <cit.> to recover the complete temporal extent of the responseย <cit.>. Subsequent research in this paradigm improves the frame-level detector by reducing false positive predictions through negative frame samplingย <cit.> and proposes using background objects as contextย <cit.>. However, these multi-stage approaches independently perform spatial and temporal reasoning in a modular fashion without end-to-end training, which results in compounding errors across the stages. Furthermore, they incur significant computational costs and time for training and inference, which limits their applicability to real-time episodic memory applications. Our proposed method, , distinguishes itself by first establishing holistic spatio-temporal correspondences and then localizing the object with an end-to-end paradigm. This leads to state-of-the-art results with fast inference speeds.
Visual Correspondence. Establishing correspondence is a core problem in computer visionย <cit.>. The advancement of visual correspondence models has significantly contributed to the success of various computer vision tasks, including image matching and retrievalย <cit.>, tracking and flow estimationย <cit.>, 3D visionย <cit.>, and representation learningย <cit.>. Recent developments in vision foundation models have demonstrated remarkable capabilities to establish correspondence between multi-modalย <cit.> and spatio-temporalย <cit.> inputs, empowering new applicationsย <cit.>.
The design of model architectures plays a critical role in finding correspondence. Specifically, transformer modelsย <cit.>, which propagate information by establishing similarity-based correspondence, are widely used.
For example, TubeDETRย <cit.>, which performs visual question answering, concatenates the text query and frame features and finds their correspondence
using self-attention transformers. STARKย <cit.> employs a similar architecture and extends it to visual templates for visual object tracking. These methods do not differentiate the query features and the frame features after the concatenation, making the features lose spatial correlation with the input frames. In contrast, utilizes the cross-attention transformer to establish the query-to-frame correspondence, updating the frame features by incorporating query features. This maintains the spatial correlation between frame-level features and the frames, providing strong priors for object localization.
ยง METHOD
We propose a novel end-to-end framework for localizing visual queries in egocentric videos.
We design our model based on the insight
that a holistic understanding of the query-video relationship is crucial for reliably localizing query objects appearing in the video.
To this end, we propose to first capture both the query-to-frame relationship for each video frame and
the frame-to-frame relationship across nearby frames.
We then predict the results based on the captured query-video relationship in a single shot.
Next, we will describe the formulation of the VQL task in Sec.ย <ref>, introduce our model architecture in Sec.ย <ref>, and describe our approach for training and inference in Sec.ย <ref>.
ยง.ยง VQL Task Formulation
The VQL task is formulated as open-set localization in egocentric videos, answering questions like โwhere did I last see my wallet?". Formally, given an egocentric video ๐ฑ = {v_1, โฏ, v_T} consisting of T frames and a visual query ๐ฌ specified as an image crop of the query object[The image crop of the query object is sampled from outside the ground-truth response track.],
the goal is to spatio-temporally localize the most-recent occurrence of the query object in the video.
The localization result
is a response track โ = {r_s, r_s+1, ..., r_e}, where s and e are start and end frame indices, respectively, and r_i is a bounding box containing the query object on the i^th frame.
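A minimal sketch of the input/output interface implied by this formulation is given below; the class and function names are illustrative and are not part of the Ego4D codebase.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class ResponseTrack:
    start: int                                        # index s of the first occurrence frame
    end: int                                          # index e of the last occurrence frame
    boxes: List[Tuple[float, float, float, float]]    # one box r_i per frame in [s, e]

def visual_query_localization(video: np.ndarray, query: np.ndarray) -> ResponseTrack:
    """video: (T, H, W, 3) egocentric frames; query: an image crop of the target object."""
    raise NotImplementedError("implemented by a VQL model such as the one described below")
```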
ยง.ยง VQLoC Architecture
Our proposed method is illustrated in Fig.ย <ref>. employs a shared image encoder to extract visual features for both video frames and the visual query.
We utilize the DINOย <cit.> pre-trained ViTย <cit.>, which can capture object-level semantic correspondence under varying camera viewpointsย <cit.>.
First, leverages a spatial transformer to find the query-to-frame
correspondence between each video frame and the visual query. then uses a spatio-temporal transformer that propagates the query-to-frame correspondence over time by leveraging the frame-to-frame correspondence arising from temporal continuity in the video, and establishes the query-video correspondence.
Finally, it uses the holistic understanding of the query-video relationship to predict the response track. Next, we describe these components in detail.
Visual Encoder. Our visual encoder consists of a fixed DINO pre-trained ViT followed by several learnable
convolution layers. We independently extract visual features for each of the T video frames and the visual query and obtain
the video features ๐ฏโโ^Tร Hร W ร C and query features ๐ชโโ^Hร W ร C.
H, W signify the spatial resolution of the features, and C is the channel size.
Spatial Transformer. The spatial transformer establishes query-to-frame correspondences between the visual crop and each video frame using cross-attentionย <cit.> between their visual features.
Formally, let v_i โโ^HW ร C be the flattened video feature ๐ฏ_i for video frame i and let q โโ^HWร C be the flattened query feature ๐ช. The query-to-frame correspondence feature f_i is obtained as follows:
f_i = FFN(CrossAttention(v_i, q)),
where FFN is a feed-forward neural network.
Intuitively, the frame features v_i are updated by incorporating the corresponding query features q, where the cross-attention computes their feature similarity as the fusion weights. The transformer output f_i is reshaped to โ^H ร W ร C, denoted as ๐_i.
Here, the updated features ๐_i retain
the spatial arrangement and maintain the
spatial cues for performing accurate localization.
This process is repeated for all video frames to obtain the final
query-to-frame correspondence features ๐โโ^T ร H ร W ร C.
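A sketch of how the cross-attention step in Eq. (<ref>) could be implemented with PyTorch's nn.MultiheadAttention is shown below; the layer width, number of heads, and the residual/normalisation layout are our assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SpatialTransformerBlock(nn.Module):
    """Query-to-frame cross-attention: frame tokens attend to visual-query tokens."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, frame_tokens: torch.Tensor, query_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (B, H*W, C) flattened frame features v_i (attention queries)
        # query_tokens: (B, H*W, C) flattened visual-query features q (keys and values)
        attended, _ = self.cross_attn(frame_tokens, query_tokens, query_tokens)
        x = self.norm1(frame_tokens + attended)
        return self.norm2(x + self.ffn(x))   # correspondence feature f_i for one frame
```

Because the attention queries are the frame tokens, the output keeps the frame's spatial arrangement, which is what allows it to be reshaped back to an H × W feature map as described above.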
Spatio-Temporal Transformer.
The spatio-temporal transformer is designed to propagate and refine the query-to-frame correspondence features ๐ from Eqn.ย <ref> over time,
by establishing frame-to-frame correspondences between nearby video frames.
It leverages the temporal continuity
of videos.
We illustrate the architecture in Fig. <ref>. We first decrease the spatial resolution of the feature maps 𝐅 using shared convolution layers, where the output is 𝐅̄_d ∈ ℝ^T× h × w × c; here h, w and c are the spatial resolution and channel size of the downsampled feature maps. We add a 3-D spatio-temporal positional embedding 𝐩 ∈ ℝ^T× h × w × c to 𝐅̄_d and flatten it into 1-D tokens 𝐅_d:
𝐅_d = Flatten(𝐅̄_d + 𝐩) ∈ ℝ^Thw × c,
where the positional embedding 𝐩 is learnable and initialized as zeros.
Next, we use the spatio-temporal transformer module to establish the query-video relationship. It consists of multiple layers
of locally-windowed temporal self-attention, i.e., we restrict the feature propagation within a local time window.
In particular, a feature element belonging to time t only attends to feature elements from time steps in the range [t-w, t+w]. t-w and t+w are the start and end of the local time window with temporal length 2w+1. We found that this was beneficial since
the locations of query objects in nearby frames are highly correlated, providing strong priors for feature refinement.
We achieve locally windowed attention via masking attention weights.
After performing feature refinement using the spatio-temporal transformer module, we reshape the 1-D tokens back into the original 3-D shape to get the
query-video correspondence features ๐ฏ^*โโ^Tร h ร w ร c.
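The locally-windowed temporal attention can be realised by masking the attention weights, as in the sketch below; the frame-major token layout and the boolean mask convention (True entries are blocked) follow PyTorch's MultiheadAttention, and the default w = 2 corresponds to the window length 2w + 1 = 5 used in the paper.

```python
import torch

def local_temporal_mask(num_frames: int, tokens_per_frame: int, w: int = 2) -> torch.Tensor:
    """Boolean mask of shape (T*hw, T*hw); True marks pairs that must not attend to each other."""
    frame_idx = torch.arange(num_frames).repeat_interleave(tokens_per_frame)  # frame id of each token
    dist = (frame_idx[:, None] - frame_idx[None, :]).abs()
    return dist > w   # tokens farther apart than w frames are masked out

# Example: a clip of 30 frames with an 8x8 downsampled feature map per frame
mask = local_temporal_mask(num_frames=30, tokens_per_frame=8 * 8, w=2)
# mask can be passed as attn_mask to nn.MultiheadAttention inside the spatio-temporal transformer
```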
Prediction Head. Instead of directly predicting a single bounding box for an image, we first define anchor boxes โฌโโ^h ร w ร n ร 4 on the feature mapsย <cit.>. We found that this was advantageous because the sizes and locations of query objects are diverse. The multi-scale anchor boxes can provide strong prior on query object sizes and locations. For each frame, we define n anchor boxes for each of the
spatial elements of the feature map (h ยท w elements in total).
Each anchor box defined on the feature map space can be mapped back to its corresponding location in the pixel space for inferring predictions.
For each frame from V_i, we obtain the predictions by passing the query-video correspondence feature ๐ฏ^*_i through two heads:
๐ซฬ_i = ProbHead(๐ฏ^*_i) โโ^h ร w ร n and ฮbฬ_i = RegHead(๐ฏ^*_i) โโ^h ร w ร 4n,
where each head consists of several convolution blocks. ๐ซฬ_i is the anchor-level query object occurrence probability and ฮbฬ_i is the regressed bounding box refinement. We then reshape ฮbฬ_i to โ^h ร w ร n ร 4, denoted as ฮโฌฬ_i. The final refined anchor boxes for the i^th frame are โฌฬ_i = โฌ + ฮโฌ_i. We perform the same operation on all frames.
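The two heads in Eq. (<ref>) can be sketched as follows; the intermediate channel width and the number of convolution layers are illustrative, while the output shapes (n occurrence probabilities and 4n box offsets per spatial location) follow the equation above.

```python
import torch
import torch.nn as nn

class PredictionHeads(nn.Module):
    def __init__(self, in_ch: int = 256, num_anchors: int = 12):
        super().__init__()
        self.prob_head = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(in_ch, num_anchors, 1))            # occurrence probability per anchor
        self.reg_head = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(in_ch, 4 * num_anchors, 1))         # (dx, dy, dw, dh) refinement per anchor

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, h, w) query-video correspondence feature of a single frame
        prob = torch.sigmoid(self.prob_head(feat))         # (B, n, h, w)
        delta = self.reg_head(feat)                        # (B, 4n, h, w), reshaped to (h, w, n, 4) downstream
        return prob, delta
```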
ยง.ยง Training and Inference
In both training and inference, we chunk the untrimmed video into fixed-length clips.
During training, we ensure that the clip covers at least a part of the response track. During testing, we concatenate the predictions on all clips as the final result. We introduce the training and inference details as follows.
Training. is trained with the loss L_img = L_bbox + ฮป_pยท L_prob on all clip frames for supervising bounding box regression and query object occurrence probability prediction.
For each refined anchor box ๐ฬโโฌฬ, we calculate the intersection of union (IoU) between the ground-truth query object box and the original corresponding anchor box ๐. If the IoU is higher than a threshold ฮธ, it will be assigned as a positive box, denoting the anchor box region captures the query object well. We apply the bounding box loss as L_bbox(โฌฬ, ๐) = ฮฃ_bฬโโฌฬ^p L_reg(๐ฬ, ๐), where ๐ is the ground-truth box for the query object, and โฌฬ^pโโฌฬ is the set of positive boxes. Specifically, L_reg is a combination of the L_1 loss and the generalized IoU (GIoU) lossย <cit.>:
L_reg(𝐛̂, 𝐛) = ||𝐛̂_c - 𝐛_c|| + ||𝐛̂_h - 𝐛_h|| + ||𝐛̂_w - 𝐛_w|| + λ_giou · L_giou(𝐛̂, 𝐛),
where ๐_c, ๐_h and ๐_w are center coordinate, height and width of the bounding box, and ฮป_giou is a weight that balances the lossesย <cit.>.
To get the query object probability loss, we assign the probability labels to the positive and negative anchor boxes with 1 and 0, respectively. We denote the assigned probability labels as ๐ซ. The query object occurrence probability loss is defined as L_prob = L_bce(๐ซฬ, ๐ซ), which is the binary cross-entropy (BCE) loss.
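A condensed sketch of these per-frame losses is shown below, assuming boxes in (cx, cy, w, h) form and torchvision >= 0.13 for the GIoU term; λ_giou = 0.3 follows the implementation details given later, and the reduction over anchors is our simplification.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import generalized_box_iou_loss

def to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
    """Convert (cx, cy, w, h) boxes to (x1, y1, x2, y2)."""
    cx, cy, w, h = boxes.unbind(-1)
    return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)

def box_loss(pos_boxes: torch.Tensor, gt_box: torch.Tensor, lambda_giou: float = 0.3) -> torch.Tensor:
    # pos_boxes: (P, 4) positive refined anchors; gt_box: (4,) ground-truth box, both in cxcywh
    l1 = (pos_boxes - gt_box[None]).abs().sum(dim=-1).mean()
    giou = generalized_box_iou_loss(to_xyxy(pos_boxes),
                                    to_xyxy(gt_box[None].expand_as(pos_boxes)),
                                    reduction="mean")
    return l1 + lambda_giou * giou

def occurrence_loss(pred_prob: torch.Tensor, anchor_labels: torch.Tensor) -> torch.Tensor:
    # pred_prob, anchor_labels: (N,) float tensors; labels are 1 for positive and 0 for negative anchors
    return F.binary_cross_entropy(pred_prob, anchor_labels)
```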
Since the target videos are long, false positive prediction, where the model wrongly identifies other objects as the query object, is one of the bottlenecks that limit the temporal accuracy of the model.
To prevent this,
we perform hard negative mining (HNM) (similar toย <cit.>).
Given a mini-batch of N videos, we diversify the negative anchors for each batch element n by treating frames from every other video n^'โ n as a negative. After expanding the pool of negatives, we calculate the query occurrence probability for all the negative anchors using Eqn.ย <ref> and sample the top K anchors with the largest BCE loss as hard negatives (i.e., negatives which our model thinks are positives). We then train the model using the positives from the same batch element and the hard negatives sampled across all batch elements with the BCE loss. We keep the ratio between positives and negatives as 1:3 during training.
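The mining step can be sketched as follows; it assumes the predicted occurrence probabilities of all negative anchors in the mini-batch have already been gathered into a single tensor, and the caller combines the returned hard negatives with the positives at the 1:3 ratio mentioned above.

```python
import torch
import torch.nn.functional as F

def mine_hard_negatives(neg_probs: torch.Tensor, k: int) -> torch.Tensor:
    """Return indices of the K negative anchors with the largest BCE loss."""
    # For a negative anchor (label 0) the BCE loss is -log(1 - p); larger p means a harder negative.
    losses = F.binary_cross_entropy(neg_probs, torch.zeros_like(neg_probs), reduction="none")
    return losses.topk(min(k, losses.numel())).indices
```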
Inference.
During inference, we get the prediction for each frame by
choosing the refined anchor box with the highest query object probability.
We reject low-confidence predictions by applying a threshold ฯ to the predicted query object probability. See supplementary for more details.
ยง EXPERIMENTS
First, we describe our implementation details, the dataset and evaluation metrics.
We then quantitatively compare with prior methods, and present ablation studies that examine the impact of various design choices in .
ยง.ยง Experimental Setup
Implementation Details.
We train the model with
448 ร 448 resolution videos obtained by resizing the source videos along the longer edge and applying zero-padding to the shorter edge for each frame.
We set the loss coefficients ฮป_p = 1 and ฮป_giou=0.3. The positive anchor box threshold is ฮธ=0.2. is trained for 60,000 iterations with batch size 24, utilizing the AdamW optimizerย <cit.> with a peak learning rate of 0.0001 and a weight decay of 0.05. A linear learning rate scheduler and a warmup of 1,000 iterations are employed. We adopt DINOv2's ViT-B14 as the backbone and train on clips of 30 frames with a frame rate of 5 fps. Clips are randomly selected to cover at least a portion of the response track.
We define anchor boxes on the feature map with spatial resolution of 16ร16, each element has 12 anchor boxes with 4 base sizes (16^2, 32^2, 64^2, 128^2) and 3 aspect ratios (0.5, 1, 2).
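The anchor grid described here can be generated as in the following sketch; mapping feature-map locations back to 448 × 448 pixel coordinates with a uniform stride is our assumption, while the base sizes and aspect ratios follow the text.

```python
import numpy as np

def build_anchors(feat_size: int = 16, img_size: int = 448,
                  base_sizes=(16, 32, 64, 128), ratios=(0.5, 1.0, 2.0)) -> np.ndarray:
    """Return anchors of shape (feat_size, feat_size, 12, 4) in (cx, cy, w, h) pixel coordinates."""
    stride = img_size / feat_size
    anchors = []
    for i in range(feat_size):
        for j in range(feat_size):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride   # centre of the feature-map cell
            for s in base_sizes:                               # base areas 16^2 ... 128^2
                for r in ratios:                               # aspect ratios 0.5, 1, 2
                    w, h = s * np.sqrt(r), s / np.sqrt(r)      # keeps the area equal to s^2
                    anchors.append([cx, cy, w, h])
    return np.asarray(anchors, dtype=np.float32).reshape(feat_size, feat_size,
                                                         len(base_sizes) * len(ratios), 4)
```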
During training, we apply random data augmentations to the clips, such as color jittering, horizontal flipping, and random resized cropping.
We train and evaluate on RTX 6000 GPUs.
Dataset.
We train and evaluate our model on the Ego4D datasetย <cit.>. Ego4D is a large-scale egocentric video dataset designed for first-person video understanding and contains open-world recordings of day-to-day activities performed in diverse locations. We use the VQ2D task annotations from the episodic memory benchmark in Ego4D, which consists of 13.6k/4.5k/4.4k queries annotated over 262/87/84 hours of train / val / test videos.
The average target video duration is โผ 140 seconds and the average response track length is โผ 3 seconds, making this a challenging benchmark for the VQL task. Ego4D is, to the best of our knowledge, the only publicly available dataset for VQL.
Metrics.
In our evaluation, we adhere to the official metrics defined by the Ego4D VQ2D benchmark. We report two average precision metrics: temporal AP (tAP_25) and spatio-temporal AP (stAP_25), which assess the accuracy of the predicted temporal and spatio-temporal extents of response tracks in comparison to the ground-truth using an Intersection over Union (IoU) threshold of 0.25. Additionally, we report
the Recovery (rec%) metric, which quantifies the percentage of predicted frames where the bounding box achieves at least 0.5 IoU with the ground-truth. Lastly, we report the Success metric, which measures whether the IoU between the prediction and the ground-truth exceeds 0.05. Apart from VQ2D evaluation metrics, we also report the inference FPS of each method as a measure of computational efficiency.
Baselines. We compare our approach against the following baselines. All baselines are trained on the Ego4D dataset for a fair comparison.
โขย ย SiamRCNNย <cit.>: This is the official baseline for VQL introduced in Ego4D. It employs a three-stage pipeline with frame-level detectionย <cit.>, latest peak detection, and object trackingย <cit.>.
โขย ย NFMย <cit.>: It employs the three-stage pipeline introduced in SiamRCNN and improves the frame-level detector training by sampling background frames as negative to reduce false positives. This was the winning entry of the Ego4D VQ2D challenge held at CVPR 2022.
โขย ย CocoFormerย <cit.>: It improves the frame-level detection architecture from SiamRCNN by reasoning about all object proposals in a frame jointly (as a set) using a transformer module.
โขย ย STARKย <cit.>: It is an end-to-end method for long-term visual object tracking. We adapt the method with our object occurrence prediction head to enable it to work on long-form videos.
ยง.ยง Experimental results
The results are shown in Tableย <ref>. Our method outperforms prior works on all metrics and sets the state-of-the-art for this task. relatively improves over the next-best baseline (CocoFormer) by 19% tAP_25, 16% stAP_25, 25% recovery, and 17% success. Furthermore, the running speed of is 10ร faster than the three-stage VQL methods. Besides, we also demonstrate that directly applying tracking methods, i.e. STARK, doesn't work as well for VQL.
We conjecture the reason is that STARK concatenates the query features and frame features without differentiating them. In this manner, the concatenated features, which are used to regress the localization results, lose the spatial correlation with input frames.
While this design might work for normal videos where the camera viewpoint changes slowly and smoothly, its problem is exacerbated in egocentric videos where the camera moves dramatically and the object specified by the visual query is quite different from its appearance in the video.
As shown in Fig.ย <ref>, shows a reasonably good balance on the speed-accuracy trade-off. At 80 FPS, we match the best baseline performance; at 33 FPS, we outperform the best baseline by a healthy margin.
We present a qualitative analysis in Fig.ย <ref>.
ยง.ยง Ablation Study
We present an ablation study on each block of , examining and evaluating their impact within the structure of our model.
Backbone Size and Input Resolution. We explore the impact of using different backbone sizes and input spatial resolutions. We test the following alternatives: i) using a smaller ViT-s14 backbone with 448ร 448 resolution, and ii) using the original ViT-b14 backbone but a smaller 224ร 224 resolution. Specifically, the input with spatial resolutions 448ร 448 and 224ร 224 result in features with spatial size 32^2 and 16^2, respectively. As shown in Tableย <ref>, the model with the largest input size and the largest backbone achieves better results. However, experiments demonstrate that using a larger backbone only brings marginal gains (comparison between the last two rows). In contrast, when inputting the model with low resolution, the performance drops dramatically. Specifically, the ratio of tAP_25: stAP_25 drops from 66% to 45%. This decrease reflects the huge degeneration of localization accuracy.
Since the query objects in the videos are usually small,
the low-resolution inputs and
feature maps may
not capture their details clearly.
This matches our intuition that accurate localization requires relatively high-resolution features.
Query-Frame Correspondence. We study
different methods for establishing the correspondence between the query and the frame.
Specifically, we compare against a model variant that aggregates the spatial information from the query using convolutions. In detail, we apply downsample convolutions on the query features to obtain a 1-D feature vector (with 1ร 1 spatial resolution). It then tiles this feature to obtain a H ร W feature map and concatenates it with the frame features. Finally, we use convolutions to obtain the query-to-frame correspondence features. Importantly, this does not use our Spatial TX module for establishing this correspondence. As shown in Tableย <ref>, the performance using this naive fusion method is much worse (rows 1 vs. 2), highlighting the value of establishing correspondences using our spatial transformer.
We also experiment with using more transformer layers in our spatial TX module (rows 2 vs. 3 in Tableย <ref>), but this provides minimal gains, likely due to the strength of the image backbone features.
Spatio-Temporal Transformer Designs. Next, we study the role of using local temporal windows in our spatio-temporal TX block in Table <ref>. We experiment with a global window, and local windows of sizes {5, 7, 9}, with 5 being our default choice. Using a global window (row 1) significantly reduces the performance since the object's location, pose and appearance change dramatically over time in egocentric videos, making it hard to establish long-range dependencies. Instead, focusing on a local temporal window of several consecutive frames makes the feature propagation easier, establishing useful short-horizon dependencies. Our experiments show that a window size of 5 works best, where 5 consecutive frames span 1 second for the 5 FPS videos.
Prediction head and Loss Function. Finally, we study the design of using anchors in the prediction head and the loss functions used to supervise the anchor probabilities in Tableย <ref>. Not using anchors (row 1) performs worse than our models which use both anchors and advanced losses. Our model with anchors performs poorly with the BCE loss due to the data imbalance between positive and negative anchor boxes (row 2). The reason is that even positive frames will introduce hundreds of negative anchors, which makes the anchor-level ratio between negatives and positives much higher than the ratio in frame-level. Using focal loss or hard negative mining improves significantly over using only the BCE loss by accounting for the data imbalance (rows 3 and 4 vs. row 2). We observed that hard-negative mining (HNM) works better than focal loss (rows 3 vs. 4). Unlike HNM that only focuses on hard negatives, focal loss additionally focuses on hard positives, where the object is annotated in the training data but may be difficult to recognize due to effects like severe occlusions (e.g., only 10% of the object is visible). Therefore, these hard positives may provide harmful training signals since they are not reliably distinguishable from negatives.
ยง CONCLUSION
We propose VQLoC, a single-stage and end-to-end trainable method for the Visual Query Localization (VQL) problem in long-horizon videos. VQLoC first builds a holistic understanding of the query-video relationship, and leverages this understanding to localize the query in a single shot. Our key insight lies in how we establish the spatio-temporal correspondence between the query and video features. Specifically, we build the query-to-frame correspondence for each video frame and then propagate these correspondences over time to nearby frames by leveraging the temporal smoothness of videos. Compared with prior stage-wise methods, VQLoC not only demonstrates better spatio-temporal localization accuracy but also improves the inference speed significantly by avoiding explicit comparison between region proposals and the query. VQLoC also achieves the top performance on the Ego4D VQ2D challenge leaderboard. In future work, we plan to develop systems to perform in-depth object-level understanding based on our model's VQL predictions.
Limitation. Similar to prior works, is trained in a supervised manner, requiring a large number of annotations for training. Besides, the hyperparameters, i.e. local window length, might need to be tuned for training or inference on other datasets with different FPS.
plain
ยง BROADER IMPACT
Visual Query Localization (VQL) has broad benefits to AR and robotics applications. For example, VQL is the basis for developing super-human episodic memory systems that can retrieve information from past human experiences. It has the potential to assist and strengthen the long-term reasoning capability of humans. Furthermore, it can be vital for embodied agents/robots to learn from past experiences or perform relevant tasks with long-term dependency.
ยง MODEL DETAILS
Model Architecture.
We provide the workflow of our model as follows. The details of each block, i.e. Encoder, Spatial Transformer, Spatio-Temporal Transformer, and Prediction Heads are included in the main text.
Specifically, T is the length of a clip, and N is the number of anchors. We use T=30 and N=8ยท 8ยท 12 = 768.
Inference Details.
We provide more inference details here. For the predicted object occurrence scores on all frames, we first smooth the scores with a median filter with kernel size 5. Then we apply peak detection on the smoothed scores. We find the peak with the highest score s and use 0.8ยท s as the threshold to filter non-confident peaks. We then select the response track that corresponds to the last peak as the results. To find the start and end time steps of the last response track, we threshold the occurrence scores with the value 0.7ยท s_p, where s_p is the score of the last peak.
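The post-processing just described can be sketched with SciPy as follows; the kernel size 5 and the 0.8/0.7 thresholds follow the text, while expanding left and right from the last peak is our reading of how the start and end frames of the response track are recovered.

```python
import numpy as np
from scipy.signal import medfilt, find_peaks

def recover_response_track(scores: np.ndarray):
    """scores: per-frame query-occurrence probabilities over the whole video."""
    smoothed = medfilt(scores, kernel_size=5)
    peaks, _ = find_peaks(smoothed)
    if len(peaks) == 0:
        return None
    s_max = smoothed[peaks].max()
    confident = peaks[smoothed[peaks] >= 0.8 * s_max]   # filter non-confident peaks
    last_peak = int(confident[-1])                       # most recent occurrence
    above = smoothed >= 0.7 * smoothed[last_peak]        # threshold around the last peak
    start = end = last_peak
    while start > 0 and above[start - 1]:
        start -= 1
    while end < len(scores) - 1 and above[end + 1]:
        end += 1
    return start, end
```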
ยง MORE RESULTS
Visualization.
We provide the visualization of feature affinity between the query and frame. As shown below, the query point (in blue) has high feature similarity with the same object in the frame. We note that we use the pixel in the visual query as the source feature in this visualization, while we use the pixel in the target frame as the source feature in our spatial transformer. The reason is that if we visualize the feature similarity in the latter manner, the response will cover the entire visual query, which is not informative.
Leaderboard Result. Our model achieved state-of-the-art performance on the public leaderboard of Ego4D VQ2D benchmark.
Detailed Analysis. We compare with the best baseline method CocoFormerย <cit.> for the performance on small, medium, and large objects. Specifically, the definition of small, medium, and large objects is based on the area of the bounding box region for the visual queries in the video, i.e. 0^2 to 64^2 as small objects, 64^2 to 192^2 as medium objects, and objects larger than 192^2 is large objects. The average image size of the target video is about 1K. As shown in Tableย <ref>, CocoFormerย <cit.> achieves better performance in the case that query objects are small in the videos. In contrast, demonstrates better accuracy on medium and large-size of query objects. We conjecture that there can be two reasons. First, CocoFormerย <cit.> operates on the original high-resolution images while works on downsampled frames with roughly half of the resolution. Second, CocoFormerย <cit.> is built on state-of-the-art object detection methods, which include useful tricks for small object detection, while does not have an inductive bias for working with small objects.
|
http://arxiv.org/abs/2306.12111v1
|
20230621085235
|
A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking
|
[
"Shaohui Mei",
"Jiawei Lian",
"Xiaofei Wang",
"Yuru Su",
"Mingyang Ma",
"Lap-Pui Chau"
] |
cs.CV
|
[
"cs.CV"
] |
A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking
Shaohuiย Mei,ย Senior Member,ย IEEE,
Jiaweiย Lian,ย Graduate Student Member,ย IEEE,
Xiaofeiย Wang,
Yuruย Su,
Mingyangย Ma,ย Graduate Student Member,ย IEEE,
andย Lap-Puiย Chau,ย Fellow,ย IEEE
This work was supported in part by the National Natural Science Foundation of China (62171381 and 62201445) and in part by the Fundamental Research Funds for the Central Universities. (Corresponding author: Shaohui Mei.)
Shaohui Mei, Jiawei Lian, Xiaofei Wang, Yuru Su, and Mingyang Ma are with the School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710129, China (Email: [email protected]; [email protected]; [email protected]; suyuruย [email protected]; [email protected]).
Lap-Pui Chau is with the Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong, China ([email protected]).
July 31, 2023
Deep neural networks (DNNs) have found widespread applications in interpreting remote sensing (RS) imagery.
However, it has been demonstrated in previous works that DNNs are vulnerable to different types of noises, particularly adversarial noises.
Surprisingly, there has been a lack of comprehensive studies on the robustness of RS tasks, prompting us to undertake a thorough survey and benchmark on the robustness of image classification and object detection in RS.
To our best knowledge, this study represents the first comprehensive examination of both natural robustness and adversarial robustness in RS tasks.
Specifically, we have curated and made publicly available datasets that contain natural and adversarial noises.
These datasets serve as valuable resources for evaluating the robustness of DNNs-based models.
To provide a comprehensive assessment of model robustness, we conducted meticulous experiments with numerous different classifiers and detectors, encompassing a wide range of mainstream methods.
Through rigorous evaluation, we have uncovered insightful and intriguing findings, which shed light on the relationship between adversarial noise crafting and model training, yielding a deeper understanding of the susceptibility and limitations of various models, and providing guidance for the development of more resilient and robust models
Robustness, noises, remote sensing, image classification, object detection.
ยง INTRODUCTION
The proliferation of remote sensing (RS) technologies has remarkably augmented the volume and fidelity of RS imagery, which is critical and influential for characterizing diverse features of the earth's surface.
As a consequence, the automated and intelligent processes of satellite or aerial images have become indispensable for earth observation and analysis.
The significance of RS image (RSI) interpretation such as image classification and object detection is paramount and extends to a multitude of applications, encompassing but not limited to environmental monitoring, intelligent transportation, urban planning, and disaster management.
In response to the pressing demand for automated analysis and comprehension of optical RSIs, there has been a surge in the development of diverse techniques for aerial detection <cit.> over the past few years.
In recent years, algorithms based on deep learning (DL) technologies have emerged as the front-runners in accuracy benchmarks for a range of visual recognition tasks, e.g., image classification <cit.>, object detection <cit.>, and semantic segmentation <cit.>, owing to the remarkable feature representation capability of deep neural networks (DNNs).
As a natural progression, DNNs have been widely adopted for the processing of optical RS imagery, with particular emphasis on image classification and object detection tasks.
Undoubtedly, DNNs-based models <cit.> have emerged as a dominant approach, surpassing the performance of previous traditional methods by a significant margin.
However, good fortune brings misfortune in its train.
The utilization of DL in intelligent recognition brings forth notable advantages, yet it also introduces substantial security concerns. The black-box nature of DL has been the subject of critique due to its inherent lack of interpretability and transparency.
Furthermore, the susceptibility and vulnerability of DL models to adversarial examples have garnered significant attention within the academic community, prompting questions regarding the veracity of these models as reliable predictors.
As a result, there are growing concerns that these models may merely be clever "Hans," achieving acceptable outcomes via flawed methods, which undermines the credibility and trustworthiness of DNNs-based systems.
The temporal progression of publications and citations related to adversarial attacks is shown in Fig. <ref>.
Previous works <cit.> have demonstrated that DNNs are susceptible and vulnerable to adversarial examples, which involve the addition of carefully crafted imperceptible perturbations to benign images that can lead to erroneous predictions and pose a significant threat to both digital and physical applications <cit.> of DL.
The research areas that have been threatened by adversarial attacks are detailed and exhibited in Fig. <ref>.
Furthermore, studies <cit.> have also shown that DNNs can be easily disturbed by natural noises, indicating that DL systems are not inherently secure and robust.
The phenomena mentioned above underscore the need for delving into the mechanism of adversarial attacks and improving the resilience and reliability of DL systems.
In addition, it is beneficial to comprehensively benchmark the robustness of DNNs to better understand and develop robust DNNs-based models; however, none of the public surveys and benchmarks provides a comprehensive study on the robustness of image classification and object detection in RS.
We summarize the existing surveys and benchmarks as shown in Table <ref>.
Specifically, most existing related works <cit.> focus on surveying and benchmarking in computer vision (CV).
Wei et al. <cit.> survey physical adversarial attacks and defenses in CV and briefly review physical attacks in RS.
In <cit.>, the authors discuss the challenges and future trends of AI security for geoscience and RS, but do not further study the robustness of DNNs-based methods on optical RSIs.
Work <cit.> attempts to comprehensively analyze the diversity of adversarial attacks in the context of autonomous aerial imaging and provides a literature review of adversarial attacks on aerial imagery processing but without further analysis of models' robustness.
Driven by the above requirements, we present the first comprehensive study on the robustness of DNNs-based models and examine both natural and adversarial robustness in the RS field.
Additionally, we have meticulously curated and released datasets that encompass both natural and adversarial noises, providing a valuable resource for evaluating the robustness of DNNs-based models.
To comprehensively assess the robustness of these models, we conducted rigorous experiments involving a diverse set of classifiers and detectors, representing a wide range of mainstream methods.
Through this extensive evaluation, we have uncovered insightful and intriguing findings that illuminate the relationship between the crafting of adversarial noise and the training of models.
These findings contribute to a deeper understanding of the vulnerabilities and limitations inherent in various models and offer guidance for the development of more resilient and robust models in the future.
The main contributions of this article are four-fold as follows:
* Comprehensive survey and benchmark on the robustness of DNNs-based models.
To the best of our knowledge, this study represents the first comprehensive survey and benchmark effort to investigate the robustness of image classification and object detection models in the field of RS, specifically in the presence of both natural and adversarial noises (see Table <ref> for the top-5 robust models against different noises).
* Creation of benchmark datasets[<https://github.com/wangxiaofei2022/Robustness-Evaluation>] with various noises.
This paper presents publicly available datasets containing a diverse range of 7 natural noises and 9 adversarial noises, specifically designed for image classification and object detection tasks.
These datasets have been carefully derived from optical RS imagery, serving as a valuable resource for the research community and facilitating advancements in the field of robustness analysis.
* Rigorous and extensive experiments.
To ensure a comprehensive evaluation of the robustness of DNNs-based models, we have conducted a systematic investigation into the performance of 23 image classifiers and 20 object detectors.
These models encompass a diverse range of mainstream architectures and have been rigorously evaluated on several large-scale optical RS datasets, as well as their corresponding versions with introduced noises.
* In-depth analyses.
Through rigorous and comprehensive experiments, we have derived insightful and intriguing findings that shed light on the potential connection between adversarial perturbation generation and model training.
These findings contribute to a deeper understanding of the sensitivity and vulnerability exhibited by various DNNs-based models across different tasks.
By uncovering these relationships, our study offers valuable insights into the robustness of DNNs in the face of adversarial attacks and provides a foundation for developing more resilient and robust models in the future.
The remainder of this manuscript is organized as follows:
a) We thoroughly survey the robustness of DNNs in CV and RS in Section <ref>;
b) The robustness of image classification and object detection is comprehensively benchmarked in RS field in Section <ref>;
c) Rigorous and extensive experimental results are provided in Section <ref>;
d) further discussions are presented in Section <ref>; and e) conclusions are drawn in Section <ref>.
ยง SURVEY
The integration of DNNs into safety-critical applications, such as autonomous driving <cit.>, face recognition <cit.>, and RS <cit.>, highlights the criticality of enhancing model robustness and developing resilient DL systems.
As a result, there is a growing need to comprehensively evaluate the robustness of DL models for a better understanding of the factors affecting their resilience and facilitate further improvements in DNNs' robustness.
In this section, we first introduce the background knowledge of the adversarial attack.
Then, the robustness of DNNs-based methods is comprehensively surveyed in CV and RS, respectively.
ยง.ยง Background Knowledge
The primary objective of DL is to enable models to learn from data in a manner that allows them to perform tasks similar to humans when confronted with new data.
Over the last decade, DL has made tremendous strides in numerous significant applications.
Although DL has delivered impressive results in practical applications, recent years have revealed a disturbing phenomenon where DL models may make abnormal predictions that are inconsistent with human intuition.
For instance, a model could yield significantly different predictions on two visually similar images, with one being perturbed by malicious and imperceptible noises <cit.>, whereas a human's prediction would remain unaffected by such noises.
We refer to this phenomenon as the adversarial phenomenon or adversarial attack, signifying the inherent adversarial relationship between DL models and human perception <cit.>.
The discovery of the adversarial phenomenon originated from image classification tasks in the digital realm.
As a consequence, the majority of existing research on adversarial attacks has been concentrated on image classification tasks in the digital domain <cit.>, the so-called digital attack.
In comparison, physical attacks happen in real-world physical scenarios.
Consequently, in this section, we provide an overview of adversarial attacks, offering background knowledge on both digital attacks and physical attacks as illustrative examples.
Digital attacks involve manipulating image pixel values in the digital domain after capturing an image using an imaging device.
On the other hand, physical attacks involve tampering with the target to be disturbed before image capture.
Although digital attack methods can easily fool various DL models in the digital domain, the generated digital perturbations lose their effectiveness in the real physical world because they are often imperceptible and cover the entire image, making them invisible to imaging devices.
As a result, researchers are increasingly studying adversarial attacks that are applicable in the physical world.
Physical attack methods have been proposed and used to attack intelligent systems such as autonomous driving <cit.>, face recognition <cit.>, RS <cit.>, security monitoring <cit.>, and so on.
Typically, digital and physical attacks occur at different stages of an intelligent recognition task, as shown in Fig. <ref>, which illustrates the difference between digital and physical attacks in the context of RS.
It is observed that:
* For physical attacks, the attacker manipulates either the actual targets or the imaging process itself to intentionally induce incorrect predictions;
* For digital attacks, the attacker directly modifies the pixel values of the image data captured by the imaging device to implement the attack.
In addition, adversarial attacks can be classified based on other attack characteristics.
Regarding the attacker's access to the victim model's information, adversarial attacks can be categorized into three types: white-box attack, gray-box attack, and black-box attack, as shown in Fig. <ref>.
In white-box attacks, the attacker has full access to the internal information of the model, including its structure, parameters, gradients, and other relevant details.
This comprehensive knowledge enables the attacker to craft sophisticated adversarial examples to deceive the model.
Gray-box attacks grant the attacker partial access to the internal information of the model.
Although not as extensive as in white-box attacks, this limited access still provides valuable insights that can be leveraged for crafting effective adversarial examples.
In contrast, black-box attacks present unique challenges as the attacker lacks access to the specific parameters and structural details of the target model.
Consequently, alternative techniques must be employed to generate adversarial examples in such scenarios.
Transfer-based attacks, where knowledge gained from a substitute model is utilized to craft adversarial examples for the black-box model, are commonly employed.
Additionally, gradient estimation methods based on query results can also be utilized to approximate the gradients of the black-box model and guide the generation of effective adversarial examples.
These approaches showcase the ingenuity and adaptability of attackers in navigating the constraints imposed by black-box settings.
While several white-box attack methods <cit.> have been proposed, they typically demand extensive information about the victim model, rendering their practical applicability in real-world attack and defense scenarios quite challenging.
As a result, researchers in the field of adversarial machine learning have increasingly directed their attention towards black-box attack methods <cit.>, which are more suitable for real-world adversarial situations where the attacker only has limited knowledge of the target model.
We have categorized adversarial attacks based on their distinct characteristics and the strategies they employ, as shown in Fig. <ref>.
Furthermore, we also display different forms of perturbations, as shown in Fig. <ref>.
In this section, we illustrate adversarial attacks from different domains, namely digital attacks and physical attacks.
ยง.ยง.ยง Digital attack formulation
Assuming the presence of an image classifier f(x): x ∈ X → y ∈ Y that generates a prediction y based on an input image x, the primary aim of an adversarial attack is to generate an adversarial example x^* that closely resembles the clean example x but causes the image classifier f(x) to make an incorrect prediction y^*.
From a technical standpoint, adversarial attack methods can be categorized as either non-targeted or targeted, depending on the attacker's motives.
Suppose an input image x is properly classified by a model such that its predicted label is y, i.e., f(x) = y.
Non-targeted attack methods are designed to generate adversarial examples x^* by adding imperceptible perturbations to clean images x, which mislead the classifier into making an incorrect prediction, i.e., f(x^*) ≠ y.
Targeted attack methods are designed to manipulate the classifier into predicting a specific label, such that f(x^*) = y^*, where y^* represents the target label specified by the attacker and y^* ≠ y.
These methods are intended to deceive the classifier into producing a specific output rather than simply causing a misclassification.
The L_p norm is typically used as a measure of the visibility of the adversarial noise.
In the case of digital attacks, the adversarial noise is required to be imperceptible to human vision, i.e., less than or equal to a certain threshold value ε, expressed as ‖x^* - x‖_p ≤ ε, as shown in Fig. <ref>.
Current adversarial attack methods can be classified into two categories (gradient-based and optimization-based) according to the optimizing strategy adopted to generate the adversarial samples.
In this article, we present the formulation of non-targeted attack methods, and the targeted version can be derived using a similar approach.
Gradient-based methods.
Gradient-based methods, such as the fast gradient sign method (FGSM) <cit.>, aim to elaborate adversarial examples x^* by maximizing the loss function L(x^*, y). FGSM crafts adversarial examples subject to the L_∞ norm constraint ‖x^* - x‖_∞ ≤ ε, mathematically written as:
x^* = x + ε · sign(∇_x L(x, y)),
where ∇_x L(x, y) is the gradient of the objective loss L(x, y) with respect to the clean image x.
An extension of the FGSM algorithm satisfies the L_2 norm limitation ‖x^* - x‖_2 ≤ ε, mathematically defined as:
x^* = x + ε · ∇_x L(x, y) / ‖∇_x L(x, y)‖_2.
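To make the one-step formulations concrete, the following is a minimal PyTorch-style sketch (illustrative only, not code from any surveyed paper); the names model, x, y, eps, and norm are placeholders, x is assumed to be a batch of images of shape (N, C, H, W), and pixel values are assumed to lie in [0, 1]:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps, norm="linf"):
    # One-step attack: move from the benign image x along the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    if norm == "linf":
        x_adv = x + eps * grad.sign()            # x* = x + eps * sign(grad)
    else:
        g = grad.flatten(1)
        g = g / (g.norm(dim=1, keepdim=True) + 1e-12)
        x_adv = x + eps * g.view_as(x)           # x* = x + eps * grad / ||grad||_2
    return x_adv.clamp(0.0, 1.0).detach()        # keep pixels in a valid range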
The aforementioned gradient-based methods are one-step adversarial attacks.
Subsequently, multi-step methods, such as iterative FGSM (I-FGSM) <cit.>, momentum iterative FGSM (MI-FGSM) <cit.>, projected gradient descent (PGD) <cit.>, Nesterov accelerated gradient (NI-FGSM) <cit.>, and AutoAttack (AA) <cit.>, iteratively apply one-step approaches multiple times with a small step size α.
The iterative attack method can be expressed as:
x^*_0 = x,  x^*_t+1 = x^*_t + α · sign(∇_x L(x^*_t, y)).
To ensure that the generated adversarial perturbations are imperceptible to human observers, i.e., satisfy the L_p constraint, one can simply clip x^*_t into the ε-vicinity of x, or simply set α = ε / T with T being the number of iterations.
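As an illustration of the iterative scheme, below is a hedged PyTorch-style sketch of I-FGSM/PGD-like updates with clipping into the ε-vicinity of x (untargeted, L_∞ constraint; all names are placeholders and pixel values are again assumed to be in [0, 1]):

import torch
import torch.nn.functional as F

def iterative_fgsm(model, x, y, eps, alpha, steps):
    # Multi-step variant of FGSM: repeat small steps and project back into the eps-ball.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # clip into the eps-vicinity of x
        x_adv = x_adv.clamp(0.0, 1.0)                          # stay in the valid pixel range
    return x_adv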
Optimization-based methods.
Optimization-based methods, such as L-BFGS <cit.>, DeepFool <cit.>, and C&W <cit.>, directly minimize the distance between the clean and adversarial examples, while ensuring that the adversarial examples are misclassified by the model.
This can be mathematically expressed as:
min_x^*  λ · ‖x^* - x‖_p - L(x^*, y),
where L(x^*, y) is the objective loss with respect to the adversarial example x^*.
Since optimization-based methods directly optimize the distance between an adversarial example and the corresponding benign example, the optimization of the L_p norm does not guarantee that the norm will be less than or equal to a particular threshold value.
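A minimal sketch of such an optimization-based attack (loosely C&W-style, assuming an L_2 distance, a differentiable classifier, and pixel values in [0, 1]; this is not the exact procedure of any single surveyed method) could look as follows:

import torch
import torch.nn.functional as F

def optimization_attack(model, x, y, lam=1.0, steps=200, lr=0.01):
    # Directly minimize lam * ||x* - x||_2 - L(x*, y) over the adversarial example x*.
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        dist = (x_adv - x).flatten(1).norm(dim=1).sum()   # distance to the benign example
        cls_loss = F.cross_entropy(model(x_adv), y)       # classification loss L(x*, y)
        loss = lam * dist - cls_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)                        # keep pixels valid
    return x_adv.detach()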
To summarize, as shown in Fig. <ref>, gradient-based adversarial attack methods aim to generate adversarial perturbations that are farthest from the decision boundary within the specified perturbation range.
On the other hand, optimization-based methods aim to minimize the size of the adversarial perturbation, i.e., the distance between the adversarial and clean samples, while still achieving the adversarial objective.
As a consequence, the adversarial perturbations generated by gradient-based adversarial attack methods are more effective in producing misclassifications, while the perturbations generated by optimization-based methods are more visually imperceptible.
ยง.ยง.ยง Physical attack formulation
As the study of adversarial attack problems has progressed, researchers have found that generating adversarial examples in the digital domain presents considerable difficulties in launching successful attacks in the physical domain.
Kurakin et al. <cit.> first discover that DNNs are also susceptible and vulnerable to adversarial attacks performed in real-world physical scenarios, i.e., physical attacks.
Notably, physical attacks carried out in real-world settings are significantly more dangerous than digital ones.
Consequently, the practical feasibility of adversarial attack methods in physical contexts has emerged as a crucial area of research in the domain of machine learning security.
However, physical attacks still face some challenges when compared to digital attacks, such as:
* Physical attack methods should be able to withstand the impact of the imaging process, which mainly includes optical lenses, image sensors, processors, etc.;
* Adversarial perturbations created using physical attack methods should be robust enough to handle the impact of dynamic environments when they face transformations across different domains, as shown in Fig. <ref>;
* Adversarial perturbations for physical attacks should be as concealed as possible to avoid attention-grabbing anomalies.
Digital attacks typically involve pixel-level modifications to images, which are difficult for human eyes to notice, whereas it is challenging to make physical attacks unobtrusive.
Consequently, numerous studies have aimed at assessing the physical adversarial robustness of DNNs in response to the concerns mentioned above during the past few years.
Physical attacks are executed in practical settings that encompass a diverse range of tasks conducted in physical scenarios.
Prior to executing a physical attack, it is imperative to fabricate the adversarial example properly.
Attackers frequently prioritize the practicality of a given approach within a real-world setting, taking into account factors such as environmental interference, ease of manufacture, and avoidance of visual detection by human observers.
In this paper, we formulate physical attacks in patch form due to the widespread popularity of adversarial patches as an approach for implementing physical attacks in real-world scenarios.
We exhibit different forms of adversarial patches in Fig. <ref>.
In the context of digital adversarial attacks, global perturbations engendered throughout an entire image present substantial impediments to the practical execution of such assaults within real-world environments.
In contrast, adversarial patches, which solely manipulate localized pixel regions, offer a more viable alternative.
These patches can be conveniently produced via printing methods and directly adhered to the designated targets.
A mask is commonly utilized to regulate the geometry of the disrupted area.
Upon completing the optimization process for the adversarial patch within the digital domain, the tailored patch is subsequently crafted and strategically situated on the object's exterior surface or background area, as shown in Fig. <ref>.
Mathematically, the adversarial example with adversarial patches can be formulated as:
x^* = (1 - M_p^*) ⊙ x + M_p^* ⊙ p^*,
where ⊙ and p^* denote the Hadamard product and the adversarial patch, respectively.
The mask matrix M_p^* is used to constrain the size, shape, and location of the adversarial patch, taking the value 1 inside the patch area and 0 elsewhere; 1 denotes an all-ones matrix of the same size as M_p^*.
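A minimal sketch of how such a masked patch is pasted onto an image, following the equation above, is given below; the mask and patch are assumed to have the same spatial size as the image, and all names are illustrative:

import torch

def rectangular_mask(shape, top, left, height, width):
    # Binary mask: 1 inside the patch region, 0 elsewhere.
    mask = torch.zeros(shape)
    mask[..., top:top + height, left:left + width] = 1.0
    return mask

def apply_patch(x, patch, mask):
    # x* = (1 - M) ⊙ x + M ⊙ p : replace the masked region of x by the patch.
    return (1.0 - mask) * x + mask * patch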
To address the difficulty that image acquisition devices cannot faithfully capture large value discrepancies between neighboring pixels, Total Variation (TV), as delineated in <cit.>, is usually incorporated into the objective function of physical attacks.
The inclusion of L_tv serves to ensure that the optimization process favors adversarial patches characterized by smooth patterns and gradual color transitions, as shown in Fig. <ref>.
TV can be mathematically defined as:
L_tv = ∑_i,j √((p_i+1,j - p_i,j)^2 + (p_i,j+1 - p_i,j)^2),
where p_i,j denotes the pixel value situated at the ith row and jth column within the adversarial perturbations.
Owing to the color alterations that occur when transitioning the adversarial patch from the digital domain to the physical domain, the non-printability score (NPS) outlined in <cit.> is frequently employed to evaluate the fidelity with which the colors in the adversarial patch can be reproduced in the physical world.
This metric serves as an indicator of the distance between the digital representation of the adversarial patch and its physical manifestation when produced using a standard printer.
L_nps is written as:
L_nps = ∑_i,j min_c_print ∈ C |p_i,j - c_print|,
where c_print represents an individual color within the set of physically printable colors, denoted as C. By incorporating L_nps as part of the loss, the pixel values of the generated adversarial patch are biased towards printable colors from the set C, thereby promoting the reproducibility of the patch in the physical domain.
Last but not least, camouflage loss L_cam can be added to improve the invisibility of the adversarial patches to human visual perception.
From an academic standpoint, the rationale for employing camouflage loss stems from the observation that carefully crafted adversarial patches often exhibit vibrant hues and unconventional patterns.
By incorporating camouflage loss into the optimization process, it becomes possible to generate adversarial patches that seamlessly blend with natural things, as shown in Fig. <ref>, ensuring that the resultant perturbations remain inconspicuous while retaining their effectiveness in adversarial settings.
Technically, the L_p norm is often used as the camouflage metric to measure the distance between adversarial patches and natural images.
In summary, the total objective function of physical attacks in patch form can be derived from the combination of the aforementioned parts and adversarial loss L_adv (similar to digital attacks).
The total loss is depicted as:
L = L_adv + α · L_tv + β · L_nps + γ · L_cam,
where α, β, and γ are adopted to scale the different components of the total loss.
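For illustration, the following is a hedged PyTorch-style sketch of the patch-related loss terms defined above; the printable color set, the reference image used for the camouflage term, and the (C, H, W) channel layout are assumptions made for this sketch rather than choices of any specific surveyed work:

import torch

def tv_loss(p):
    # L_tv = sum_{i,j} sqrt((p[i+1,j]-p[i,j])^2 + (p[i,j+1]-p[i,j])^2)
    dh = p[..., 1:, :-1] - p[..., :-1, :-1]
    dw = p[..., :-1, 1:] - p[..., :-1, :-1]
    return torch.sqrt(dh ** 2 + dw ** 2 + 1e-12).sum()

def nps_loss(p, printable_colors):
    # L_nps: distance from every patch pixel to its closest physically printable color.
    pix = p.flatten(1).t().unsqueeze(1)          # (H*W, 1, C)
    cols = printable_colors.unsqueeze(0)         # (1, K, C)
    return (pix - cols).abs().sum(-1).min(dim=1).values.sum()

def total_loss(adv_loss, p, printable_colors, ref_image, alpha, beta, gamma):
    # L = L_adv + alpha * L_tv + beta * L_nps + gamma * L_cam (camouflage term as an L_2 distance).
    cam = (p - ref_image).norm(p=2)
    return adv_loss + alpha * tv_loss(p) + beta * nps_loss(p, printable_colors) + gamma * cam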
ยง.ยง Survey Robustness in CV
In the following subsections, we provide a comprehensive examination of adversarial attacks as they pertain to the domain of CV, encompassing a variety of tasks including, but not limited to, image classification and object detection.
By conducting an in-depth review of the pertinent literature, we aim to elucidate the underlying principles, methodologies, and implications of these attacks, thereby contributing to a more robust understanding of their role and significance within the broader context of CV research.
ยง.ยง.ยง Image Classification
In the present section, we provide an overview of adversarial attacks in image classification, with a particular emphasis on both digital and physical attack methodologies.
The majority of research on adversarial attacks has focused on the digital domain, as the attacks were initially discovered in this context.
① Digital attack.
White-box attacks: Szegedy et al. <cit.> first reveal that DNNs establish input-output associations characterized by a considerable degree of discontinuity.
More precisely, their findings indicate that the application of a subtle and imperceptible perturbation, identified by maximizing the network's prediction error, can effectively induce DNNs' misclassification.
FGSM <cit.> was the first gradient-based adversarial attack method, in which only one step was moved from benign image x following the sign of gradient with the step size ฯต to obtain adversarial image x^*.
In <cit.>, the proposed DeepFool algorithm effectively generates perturbations that deceive DNNs and initially evaluates the resilience of state-of-the-art (SOTA) deep classifiers to adversarial perturbations on large-scale datasets.
Papernot et al. <cit.> present a formalization of the adversarial space of DNNs and introduce a novel set of algorithms that generate adversarial examples through a comprehensive comprehension of the input-output mapping of DNNs.
<cit.> achieves targeted deception of high-performance image classifiers through the development of two innovative attack techniques. The first technique (Universal Perturbations for Steering to Exact Targets, UPSET) generates universal perturbations for specific target classes, while the second technique (Antagonistic Network for Generating Rogue Images, ANGRI) generates perturbations that are specific to individual images.
The authors of <cit.> demonstrate the existence of universal (image-agnostic) and invisible adversarial noise, which reveals important geometric correlations among the high-dimensional decision boundary of classifiers.
Moreover, the universal adversarial noises can generalize well across DNNs.
In <cit.>, the researchers explore the case where the noise is allowed to be visible but confined to a small, localized patch of the image, without covering any of the main object(s) in the image, named Localized and Visible Adversarial Noise (LaVAN).
<cit.> indicates that the existence of non-robust features is directly responsible for the emergence of adversarial examples and confirms their widespread prevalence in commonly used datasets.
Fan <cit.> investigate sparse adversarial attacks, which focus on generating adversarial perturbations on select regions of a benign image.
In research <cit.>, the authors investigate the resilience of Vision Transformers to adversarial examples and their transferability between CNNs and transformers.
Paper <cit.> offers a more comprehensive comprehension of adversarial examples concerning medical images.
Research <cit.> introduces a new form of adversarial attack that is capable of deceiving classifiers through significant alterations.
For instance, even after significant changes to a face, well-trained DNNs can still identify both the adversarial and original examples as the same person.
To find out whether the performance of DNNs decreases even for images that lose only a little information, Duan et al. <cit.> propose AdvDrop, which crafts adversarial examples by dropping part of the information in images.
Akhtar <cit.> present a practical adversarial attack that can perform targeted fooling of deep visual classifiers on a per-class basis. Furthermore, they adapt this attack to interpret deep representations.
To avoid losing useful statistical information in boundary attacks, <cit.> investigates and enhances boundary attacks by restricting the perturbation direction in a square shape in the geometrical presentation of the image.
Research <cit.> introduces a frequency-tuned universal attack method that employs the frequency domain to generate universal perturbations to attack DNNs-based texture recognition systems.
Wan <cit.> devise a new average gradient-based adversarial attack, in which a dynamic set of adversarial examples is constructed in each iteration by utilizing the gradient of each iteration to calculate the average gradient.
Perceptual Sensitive Attack (PS Attack) <cit.> is introduced to avoid perturbations that are easily spotted by human eyes.
Black-box attacks:
To avoid the demands for knowledge of either the model internals or its training data, the authors in <cit.> introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge by observing labels given by the DNN to chosen inputs.
In work <cit.>, Zeroth Order Optimization (ZOO) is devised to directly estimate the gradients of the proxy model for crafting adversarial examples.
Specifically, they employ a combination of zeroth-order stochastic coordinate descent, dimension reduction, hierarchical attack, and importance sampling techniques to effectively fool black-box models.
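The core idea of such zeroth-order, query-based gradient estimation can be sketched as follows using a simple symmetric-difference estimator over a few coordinates; this is a generic illustration under the assumption that f returns a scalar loss for an input image, not the exact ZOO algorithm:

import torch

def zo_coordinate_gradient(f, x, indices, h=1e-4):
    # Estimate df/dx at selected coordinates using only function (query) evaluations.
    flat = x.flatten()
    grad = torch.zeros_like(flat)
    for i in indices:
        e = torch.zeros_like(flat)
        e[i] = h
        # Symmetric finite difference: (f(x + h*e_i) - f(x - h*e_i)) / (2h).
        grad[i] = (f((flat + e).view_as(x)) - f((flat - e).view_as(x))) / (2 * h)
    return grad.view_as(x)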
By introducing three novel attack algorithms that can successfully penetrate both distilled and undistilled neural networks, Carlini <cit.> establish that defensive distillation does not notably enhance the resilience of neural networks.
To strengthen black-box attack efficacy, Dong et al. <cit.> propose the momentum iterative FGSM (MI-FGSM) by integrating a momentum term into the iterative process of noise optimization, which stabilizes update directions and helps escape from poor local maxima during the optimization, resulting in more transferable adversarial examples.
Ilyas <cit.> establish three practical threat models that more precisely reflect the nature of many real-world classifiers: the query-limited model, the partial-information model, and the label-only model.
Furthermore, they propose novel attack strategies that can deceive classifiers under these more restrictive threat models.
In the paper <cit.>, the authors propose a novel and data-free method for generating universal adversarial perturbations that can be applied across multiple vision tasks.
Technically, their approach involves corrupting the extracted features at multiple layers to achieve fooling, which makes the objective generalizable and applicable to image-agnostic perturbations for various vision tasks, including object recognition, semantic segmentation, and depth estimation.
Work <cit.> proposes a framework that integrates and unifies a substantial portion of the existing research on black-box attacks and shows how to enhance the performance of black-box attacks by introducing gradient priors as a new factor in the problem.
Su <cit.> analyze an attack in an extremely limited scenario where only one pixel can be modified.
To achieve this, they propose a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE).
Moreover, this method requires minimal adversarial information, making it a black-box attack, and is capable of fooling a wider range of networks due to the inherent characteristics of DE.
Article <cit.> introduces a black-box adversarial attack algorithm that can successfully bypass both standard DNNs and those generated by various recently developed defense techniques, in which adversarial examples are drawn from the probability density distribution over a small region centered around the inputs.
In <cit.>, a meta attack is devised to attack a targeted model with few queries.
<cit.> strengthens query efficiency by leveraging the advantages of both transfer-based and score-based approaches and addressing a discretized problem through the utilization of a simple yet highly efficient microbial genetic algorithm (MGA).
The authors in <cit.> present the HopSkipJumpAttack family of algorithms, which rely on a novel estimate of the gradient direction obtained through binary information at the decision boundary.
SurFree is presented in <cit.> to decrease the number of queries by focusing on targeted trials along varied directions, guided by precise indications of the geometric properties of the decision boundaries.
The objective of study <cit.> is to train a generalizable surrogate model, termed "Simulator," capable of emulating the behavior of an unknown target model.
To mitigate the query cost, the authors of <cit.> suggest using feedback information obtained from past attacks, example-level adversarial transferability.
By considering each attack on a benign example as an individual task, they construct a meta-learning framework that involves training a meta-generator to produce perturbations based on specific benign examples.
The authors of research <cit.> introduce a novel framework for conducting query-efficient black-box adversarial attacks by integrating transfer-based and decision-based approaches.
They also elucidate the correlation between the present noise and sampling variance, the compression monotonicity of noise, and the impact of transition functions on decision-based attacks.
Guo <cit.> introduce an intermediate-level attack, which establishes a direct linear mapping from the intermediate-level discrepancies, between adversarial features and benign features, to prediction loss of the adversarial example.
To strengthen attacks' transferability against black-box defenses, <cit.> propose a novel transferable attack capable of defeating various black-box defenses and sheds light on their security limitations.
Finally, we summarize digital attacks against image classification (<cit.>) in Table <ref>.
② Physical attack.
In <cit.>, the authors first demonstrate that machine learning systems are vulnerable to adversarial examples even in physical world scenarios and propose a basic iterative method (BIM).
Brown et al. <cit.> propose a method for generating universal, robust, targeted adversarial perturbations in patch form that can be deployed in the real world.
These adversarial patches can be printed, attached, photographed, and then presented to image classifiers for successful attacks.
Subsequently, adversarial patches are broadly adopted in various physical attacks <cit.>.
To better understand adversarial examples in the physical world, Eykholt et al. <cit.> propose a general physical attack method, Robust Physical Perturbations (RP2), to elaborate robust visual adversarial perturbations under dynamic physical conditions.
<cit.> provides evidence for the existence of robust 3D adversarial objects, and introduces the first algorithm Expectation Over Transformation (EOT) capable of synthesizing examples that remain adversarial across a chosen distribution of transformations.
<cit.> focuses specifically on the subset of adversarial examples that correspond to meaningful changes in 3D physical properties, such as rotation, translation, illumination conditions, etc.
To alleviate unrealistic distortions of adversarial patterns, Duan <cit.> introduces a novel technique called Adversarial Camouflage (AdvCam), which involves crafting and camouflaging physical-world adversarial examples in natural styles that appear legitimate to human observers.
In <cit.>, Feng et al. propose Meta-Attack, which formulates physical attacks as a few-shot learning problem to improve the optimization efficiency of physical dynamic simulations.
<cit.> propose an optical adversarial attack, which uses structured illumination to alter the appearance of the target objects to deceive image classifiers without physically touching the targeted objects, moving or painting the targets of interest.
Duan <cit.> demonstrates that DNNs can be easily deceived using only a laser beam.
Research <cit.> uncovers the presence of an intriguing category of spatially constrained, physically feasible adversarial examples, Universal NaTuralistic adversarial paTches (TnTs).
TnTs are crafted by examining the full range of spatially bounded adversarial examples and the natural input space within generative adversarial networks (GANs).
Finally, we summarize physical attacks against image classification (<cit.>) in Table <ref>.
ยง.ยง.ยง Object Detection
In this section, we offer a comprehensive examination of adversarial attacks pertaining to object detection, focusing specifically on digital and physical attack strategies.
Given the practicality of adversarial attacks in object detection tasks, much of the current research focuses on physical attacks.
① Digital attack.
White-box attacks:
In <cit.>, the authors extend the concept of adversarial examples to the domains of semantic segmentation and object detection, which are notably more challenging tasks.
Specifically, they introduce a novel algorithm called Dense Adversary Generation (DAG), which optimizes a loss function over a set of pixels or proposals to generate adversarial perturbations.
To reduce the number of perturbed pixels, <cit.> presents a new technique known as the Diffused Patch Attack (DPAttack), which leverages diffused patches in the form of asteroid-shaped or grid-shaped patterns to deceive object detectors.
This attack only modifies a small number of pixels in the image.
Research <cit.> introduces a novel approach called Contextual Adversarial Perturbation (CAP), which targets contextual information of objects in order to degrade the recognition accuracy of object detectors.
Zhang <cit.> introduce a novel Half-Neighbor Masked Projected Gradient Descent (HNM-PGD) approach, capable of generating potent perturbations to deceive various detectors while adhering to stringent limitations.
<cit.> presents a new and distinctive patch configuration comprised of four intersecting lines.
The proposed patch shape is shown to be a powerful tool for influencing deep convolutional feature extraction with limited pixel availability.
To ensure the stability of the ensemble attack, Huang <cit.> present a gradient balancing technique that prevents any single detector from being over-optimized during the training process.
Furthermore, they propose a novel patch selection and refining mechanism that identifies the most crucial pixels for the attack, while gradually eliminating irrelevant perturbations.
Black-box attacks:
Liu <cit.> introduce DPATCH, a black-box adversarial-patch-based attack designed to target popular object detectors, such as Faster R-CNN <cit.> and YOLO <cit.>.
In contrast to the original adversarial patch, which only manipulates the image-level classifier, the DPATCH simultaneously targets both the bounding box regression and object classification of the object detector in order to disable their predictions.
<cit.> introduces Efficient Warm Restart Adversarial Attack for Object Detection, which comprises three modules:
Efficient Warm Restart Adversarial Attack, which selects the most appropriate top-k pixels for the attack;
Connecting Top-k pixels with Lines, which outlines the strategy for connecting two top-k pixels to minimize the number of changed pixels and reduce the number of patches;
Adaptive Black Box Optimization, which leverages white box models to improve the performance of the black box adversarial attack.
To fool context-aware detectors, Cai <cit.> introduce the pioneering method for producing context-consistent adversarial attacks that can elude the context-consistency check of black-box object detectors working on intricate and natural scenes.
Finally, we summarize digital attacks against object detection (<cit.>) in Table <ref>.
② Physical attack.
Lu <cit.> present a construction that effectively deceives two commonly used detectors, Faster RCNN <cit.> and YOLO 9000 <cit.>, in the physical world.
<cit.> extend physical attacks to object detection by implementing a Disappearance Attack, which causes a stop sign to "disappear" either by covering the sign with an adversarial poster or by adding adversarial stickers onto the sign.
The work <cit.> introduces ShapeShifter, and demonstrates that the EOT approach, initially proposed to improve the resilience of adversarial perturbations in image classification, can be effectively adapted to the object detection domain.
<cit.> proposes a method for generating adversarial patches that can effectively conceal individuals from person detectors.
This method is particularly designed for targets with a high degree of intra-class variety, such as persons.
In <cit.>, the authors present an intriguing experimental investigation of physical adversarial attacks on object detectors in real-world scenarios.
Specifically, they explore the efficacy of learning a camouflage pattern to obscure vehicles from being detected by SOTA detectors based on DNNs.
To generate visually natural patches with strong attacking ability, Liu <cit.> present a novel Perceptual-Sensitive Generative Adversarial Network (PS-GAN) that can simultaneously enhance the visual authenticity and the attacking potential of the adversarial patch.
Wang <cit.> take the first attempt to implement robust physical-world attacks against person re-identification systems based on DNNs.
They propose advPattern to generate adversarial patches on clothes, which can hide people from being detected.
In <cit.>, the authors study physical attacks against object detectors in the wild.
They propose the Universal Physical Camouflage Attack (UPC), which involves learning an adversarial pattern capable of effectively attacking all instances of a given object category.
Wu <cit.> present a systematic study of the transferability of adversarial attacks on SOTA object detection frameworks.
To avoid direct access to targets of interest, <cit.> presents a novel contactless and translucent patch containing a carefully crafted pattern, which is placed over the lens of the camera to deceive SOTA object detectors.
Zhu <cit.> first demonstrate the feasibility of using two types of patches to launch an attack on YOLOv3-based infrared pedestrian detectors.
Following the previous work <cit.>, <cit.> propose the infrared adversarial clothing by simulating the process from cloth to clothing in the digital world and then designing the adversarial "QR code" pattern.
<cit.> introduces a novel approach called Adversarial Texture (AdvTexture) for conducting multi-angle attacks against person detectors.
AdvTexture enables the coverage of clothes with arbitrary shapes, rendering individuals wearing such clothes invisible to person detectors from various viewing angles.
In <cit.>, the authors introduce the Differentiable Transformation Attack (DTA), which enables the creation of patterns that can effectively hide the object from detection, while also taking into account the impact of various transformations that the object may undergo.
Wang <cit.> introduce a novel training pipeline called TransPatch to optimize the training efficiency of adversarial patches.
To avoid generating conspicuous and attention-grabbing patterns, <cit.> propose to create physical adversarial patches by leveraging the image manifold of a pre-trained GAN.
Inspired by the viewpoint that attention is indicative of the underlying recognition process, <cit.> proposes the Dual Attention Suppression (DAS) attack to craft visually-natural physical adversarial camouflages.
The DAS achieves strong transferability by suppressing both model and human attention, thereby enhancing the efficacy of the attack.
In <cit.>, the researchers propose a novel targeted and universal attack against the SOTA object detector using a label-switching technique.
The attack aims to fool the object detector into misclassifying a specific target object as another object category chosen by the attacker.
Mathov <cit.> introduce a novel framework that leverages 3D modeling to generate adversarial patches for a pre-existing real-world scene.
By employing a 3D digital approximation of the scene, their methodology effectively simulates the real-world environment.
To bridge the divide between digital and physical attacks, Wang <cit.> utilize the entire 3D surface of a vehicle to propose a resilient Full-coverage Camouflage Attack (FCA) that effectively deceives detectors.
A universal background adversarial attack method <cit.> is devised to fool DNNs-based object detectors.
The proposed method involves placing target objects onto a universal background image and manipulating the local pixel data surrounding the target objects in a way that renders them unrecognizable by object detectors.
The focus of the study <cit.> is on the lane detection system, a crucial component in numerous autonomous driving applications, such as navigation and lane switching.
The researchers design and realize the first physical backdoor attacks on such systems.
Zhang <cit.> propose a novel approach for producing physically feasible adversarial camouflage to achieve transferable attacks on detection models.
Study <cit.> explores a new category of optical adversarial examples, generated by a commonly occurring natural phenomenon, shadows.
They aim to employ these shadow-based perturbations to achieve naturalistic and inconspicuous physical-world adversarial attacks in black-box settings.
A systematic pipeline is introduced in <cit.> to produce resilient physical adversarial examples that can effectively deceive real-world object detectors.
Zhu <cit.> present TPatch, a physical adversarial patch that is triggered by acoustic signals.
TPatch differs from other adversarial patches in that it remains benign under ordinary circumstances but can be activated to initiate hiding, altering, or creating attacks via a deliberate distortion introduced through signal injection attacks directed at cameras.
To improve the optimizing stability and efficiency, the study <cit.> presents a fresh and lightweight framework that generates naturalistic adversarial patches systematically, without relying on GANs.
In paper<cit.>, the authors conduct the first investigation towards adversarial attacks that are directed at X-ray prohibited item detection and demonstrate the grave hazards posed by such attacks in this context of paramount safety significance.
Finally, we summarize physical attacks against object detection (<cit.>) in Table <ref>.
ยง.ยง.ยง Face Recognition
In this section, we undertake a thorough assessment of adversarial attacks in the context of face recognition.
The practicality of adversarial attacks in face recognition tasks has resulted in a significant focus on physical attacks in current research on this topic.
Zhu <cit.> introduce a novel method to elaborate adversarial examples for attacking well-trained face recognition models.
Their approach involves applying makeup effects to facial images through two GANs-based sub-networks: the Makeup Transfer Sub-network and Adversarial Attack Sub-network.
<cit.> aims to investigate the robustness of current face recognition models in the decision-based black-box attack scenario.
Sharif <cit.> concentrates on the attack of facial biometric systems, which are extensively used for surveillance and access control.
They introduce a new attack method that is both physically realizable and inconspicuous, enabling an attacker to circumvent identification or impersonate another individual.
The authors of <cit.> investigate the possibility of performing real-time physical attacks on face recognition systems through the use of adversarial light projections.
In study <cit.>, the researchers conduct a comprehensive evaluation of the robustness of face recognition models against adversarial attacks using patches in the black-box setting.
In contrast to previous methods that rely on designing perturbations, Wei <cit.> achieve physical attacks by manipulating the position and rotation angle of stickers pasted onto faces.
Paper<cit.> addresses the importance of position and perturbation in adversarial attacks by proposing a novel method that optimizes both factors simultaneously.
By doing so, they achieve a high attack success rate in the black-box setting.
To comprehensively evaluate physical attacks against face recognition systems, <cit.> introduce a framework that employs 3D-face modeling to simulate complex transformations of faces in the physical world, thus creating a digital counterpart of physical faces.
This generic framework enables users to control various face variations and physical conditions, making it possible to conduct reproducible evaluations comprehensively.
In study <cit.>, the authors investigate the adversarial robustness of face recognition systems against sticker-based physical attacks, aiming to gain a better understanding of the system's vulnerabilities.
To increase the imperceptibility of attacks, Lin <cit.> propose a physical adversarial attack using full-face makeup, as its presence on the human face is a common occurrence.
Singh <cit.> present a new smoothness loss and a patch-noise combo for the physical attack against face recognition systems.
<cit.> aims to devise a more dependable technique that can holistically assess the adversarial resilience of commercial face recognition systems from end to end.
To achieve this goal, they propose the design of Adversarial Textured 3D Meshes (AT3D) with the intricate topology on a human face.
The AT3D can be 3D-printed and then worn by the attacker to evade the facial recognition defenses.
Finally, we summarize adversarial attacks against face recognition (<cit.>) in Table <ref>.
ยง.ยง.ยง Others
To investigate how adversarial examples affect deep product quantization networks (DPQNs), <cit.> propose to perturb the probability distribution of centroids assignments for a clean query to attack DPQNs-based retrieval systems.
<cit.> introduces the Attack on Attention (AoA) technique, which exploits the semantic property shared by DNNs.
AoA demonstrates a marked increase in transferability when attention loss is employed in place of the traditional cross-entropy loss.
Since AoA only modifies the loss function, it can be readily combined with other transferability-enhancing methods to achieve SOTA performance.
In study <cit.>, the authors develop novel techniques to generate robust unlearnable examples that are resistant to adversarial training.
For the first time, paper <cit.> introduces a clean-label approach for the poisoning availability attack, which reveals the intrinsic imperfection of classifiers.
Paper <cit.> highlights how the global reasoning of (scaled) dot-product attention can represent a significant vulnerability when faced with adversarial patch attacks.
The current study puts forth a novel interactive visual aid, DetectorDetective <cit.>, which seeks to enhance users' comprehension of a model's behavior during the traversal of adversarial images through an object detector.
The primary goal of DetectorDetective is to provide users with a deeper understanding of how object detectors respond to adversarial attacks.
Work <cit.> represents an initial stride towards implementing physically viable adversarial attacks on visual tracking systems in real-life scenarios.
Specifically, the authors accomplished this by developing a universal patch that serves to camouflage single-object trackers.
To attack depth estimation, Cheng <cit.> employ an optimization-based technique for systematically creating stealthy physical-object-oriented adversarial patches.
Research <cit.> assesses the effects of the chosen transformations on the efficacy of physical adversarial attacks.
Moreover, they measure attack performance under various scenarios, including multiple distances and angles.
Finally, we summarize other adversarial attacks (<cit.>) in Table <ref>.
ยง.ยง Survey Robustness in RS
In the subsequent subsections, we undertake a meticulous appraisal of adversarial attacks as they relate to RS, with a particular focus on tasks such as image classification, object detection, and additional relevant applications.
Our objective is to provide a systematic and exhaustive analysis of the current literature, thereby fostering a deeper understanding of the principles, techniques, and ramifications of adversarial attacks in the context of RS research.
ยง.ยง.ยง Image Classification
The majority of attacks against RS imagery classifiers stem from the field of CV, thus most of the existing research focuses on digital attacks.
Czaja <cit.> first considers attacks against machine learning algorithms used in RS applications.
Specifically, they present a new study of adversarial examples in satellite image classification problems.
In <cit.>, the authors investigate the properties of adversarial examples in RSI scene classification.
To this end, they create several scenarios by employing two popular attack algorithms, FGSM and BIM, to fool DNNs trained on various RSI benchmark datasets.
The authors of <cit.> perform a systematic analysis of the potential threat posed by adversarial examples to DNNs used for RS scene classification.
They conduct both targeted and untargeted attacks to generate subtle adversarial perturbations that are imperceptible to human observers but can easily deceive DNNs-based models.
Paper <cit.> proposes a UNet-based <cit.> GAN to enhance the optimizing efficiency and attack efficacy of the generated adversarial examples for Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR) models.
<cit.> aims to provide a thorough evaluation of the effects of adversarial examples on RSI classification.
Technically, eight of the most advanced classification DNNs are tested on six RSI benchmarks.
These data sets consist of both optical and synthetic-aperture radar (SAR) images with varying spectral and spatial resolutions.
The study <cit.> introduces a novel approach for generating adversarial examples to fool RSI classifiers in black-box conditions by utilizing a variant of the Wasserstein generative adversarial network.
To enhance the success rate of adversarial attacks against scene classification, Jiang <cit.> propose the use of the projected gradient descent method to create adversarial RSIs.
In article <cit.>, the authors analyze adversarial attacks against DL-based unmanned aerial vehicles (UAVs) and propose two novel adversarial attack methods against regression models utilized in UAVs.
<cit.> presents a fully black-box universal attack (FBUA) framework for creating a single universal adversarial perturbation against SAR target recognition that can be used against a wide range of DNN architectures and a large percentage of target images.
Two variants of universal adversarial examples, called targeted universal adversarial examples and source-targeted universal adversarial examples, are proposed in work <cit.>.
The proposed methods aim to extend universal adversarial perturbations to perform the targeted attack.
Xu <cit.> present a comprehensive analysis of universal adversarial examples in RS data, without any prior knowledge of the target model.
Furthermore, the authors introduce Mixup-Attack, a novel black-box adversarial attack method, and its simpler variant Mixcut-Attack, for RS data.
The authors of <cit.> present a comprehensive investigation of backdoor attacks on RS data.
Both scene classification and semantic segmentation tasks are considered, and systematic analysis is provided.
A novel approach called speckle-variant attack (SVA) is devised by Peng <cit.>.
The SVA consists of two major modules: an iterative gradient-based perturbation generator and a target region extractor.
<cit.> proposes a novel method to explore the basic characteristics of universal adversarial perturbations (UAPs) of RSIs.
The method involves combining an encoder-decoder network with an attention mechanism to generate UAPs of RSIs.
Qin <cit.> present a novel universal adversarial attack method for CNN-SAR image classification.
The proposed approach aims to differentiate the target distribution by utilizing a feature dictionary model, without any prior knowledge of the classifier.
Finally, we summarize adversarial attacks against image classification in RS (<cit.>) in Table <ref>.
ยง.ยง.ยง Object Detection
Similarly, the adversarial attack methods are divided into digital attacks and physical attacks according to the attacked domain.
① Digital attack.
The authors of <cit.> first investigate the use of patch-based adversarial attacks in the context of unmanned aerial surveillance.
Specifically, they explore the application of these attacks on large military assets by laying a patch on top of them, which camouflages them from automatic detectors analyzing the imagery.
<cit.> introduces a novel adversarial attack method called Patch-Noobj, which is designed to address the problem of large-scale variation in aircraft in RS imagery.
Patch-Noobj is a universal adversarial method that can be used to attack aircraft of different sizes by adaptively scaling the width and height of the patch according to the size of the target aircraft.
Du <cit.> investigate the susceptibility of DL-based cloud detection systems to adversarial attacks.
Specifically, they employ an optimization process to create an adversarial pattern that, when overlaid onto a cloudless scene, causes the DNNs to falsely detect clouds in the image.
In paper <cit.>, the authors devise a novel approach for generating adversarial pan-sharpened images.
To achieve this, a generative network is employed to generate the pan-sharpened images, followed by the application of shape and label loss to carry out the attack task.
In the paper <cit.>, the researchers investigate the effectiveness and limitations of adversarial camouflage in the context of overhead imagery.
Fu <cit.> propose Ad2Attack, an Adaptive Adversarial Attack approach against UAV object tracking.
Adversarial examples are generated online during the resampling of the search patch image, causing trackers to lose the target in the subsequent frames.
Tang <cit.> propose a novel adversarial patch attack algorithm.
In particular, unlike traditional approaches that rely on the final outputs of models, the proposed algorithm uses the intermediate outputs to optimize adversarial patches.
The study <cit.> introduces a novel defense mechanism based on adversarial patches that aim to disable the onboard object detection network of the LSST (Low-Slow-Small Target) recognition system by launching an adversarial attack.
<cit.> introduces a novel framework for generating adversarial pan-sharpened images.
The proposed method employs a two-stream network to generate the pan-sharpened images and applies shape loss and label loss to carry out the attack task.
To ensure the quality of the pan-sharpened images, a perceptual loss is utilized to balance spectral preservation and attacking performance.
Sun <cit.> concentrates on patch-based attacks (PAs) against optical RSIs and proposes a Threatening PA, dubbed TPA, which does not sacrifice visual quality.
(2) Physical attack
In work <cit.>, the authors conduct a comprehensive analysis of the universal adversarial patch attack for multi-scale objects in the RS field.
Specifically, this study presents a novel adversarial attack method for object detection in RS data by optimizing the adversarial patch to attack as many objects as possible by formulating a joint optimization problem.
Furthermore, it introduces a scale factor to generate a universal adversarial patch that can adapt to multi-scale objects, ensuring its validity in real-world scenarios.
Du <cit.> have developed new experiments and metrics to assess the effectiveness of physical adversarial attacks on object detectors in aerial scenes, in order to investigate the impact of physical dynamics.
In research <cit.>, the authors propose an Adaptive Patch-based Physical Attack (AP-PA), which enables physically practicable attacks using malicious patches for both the white-box and black-box settings in real physical scenarios.
In <cit.>, Lian made the first effort to execute physical attacks in a contextual manner against aerial detection in the physical world.
Following this work, Lian proposes the Contextual Background Attack (CBA) <cit.>, which achieves high attack effectiveness and transferability in real-world scenarios without the need to obscure the target objects.
Technically, they extract the saliency of the target of interest as a mask for the adversarial patches and optimize the pixels outside the mask area to closely cover the critical contextual background area for detection.
Additionally, the authors devised a novel training strategy, in which the patches are forced to be outside the targets during training.
As a consequence, the elaborate perturbations can successfully hide the protected objects both on and outside the adversarial patches from being recognized.
The objective of <cit.> is to create a natural-looking patch that has a small perturbation area.
This patch can be used in optical RSIs to avoid detection by object detectors and remain imperceptible to human eyes.
Paper <cit.> presents an approach to adversarially attack satellite RS detection using a patch-based method.
The proposed method aims to achieve comparable attack effectiveness in the physical domain as that in the digital domain, without compromising the visual quality of the patch.
To achieve this, the approach utilizes pairwise-distance loss to control the salience of the adversarial patch.
Finally, we summarize adversarial attacks against object detection in RS (<cit.>) in Table <ref>.
ยง BENCHMARK
In this study, we introduce a comprehensive benchmark that assesses the robustness of image classification and object detection tasks in optical RSIs, as shown in Fig. <ref>.
We investigate the robustness of DNN-based models against natural and adversarial perturbations, which are fundamental elements of model robustness.
To explore the comprehensive range of robustness, we examine a diverse set of natural noises and adversarial noises.
Below, we give a detailed introduction to the benchmark on natural robustness and adversarial robustness in Sec. <ref> and Sec. <ref>, respectively.
ยง.ยง Natural Robustness
Various sources in the real world, such as weather fluctuations, sensor deterioration, and object deformations, generate natural noise that can be detrimental to DL models.
These noises are inevitable, presenting a challenge in pursuing accurate and reliable artificial intelligence.
A rigorous and systematic benchmark is needed to thoroughly assess the inherent resilience of RSI classification and detection models in the face of varied and diverse forms of noise.
This benchmark should encompass a wide range of noise types and intensities, including those arising from natural environmental factors, sensor degradation, and varying degrees of image distortion.
Such an all-encompassing evaluation holds the key to enhancing the practical viability of DNN-based models and to the development of more resilient and adaptable DNN architectures.
Although comprehensive benchmarks <cit.> on natural noises have been established for the CV field, it is still lacking in the area of RS.
Consequently, we built the first benchmark and datasets on natural noises for RS tasks.
Specifically, we benchmark seven natural noises, including Gaussian noise (G), Poisson noise (P), salt-pepper noise (SP), random noise (RD), rain (R), snow (S), and fog (F), as shown in Fig. <ref>.
Each noise is divided into five different intensities as shown in Fig. <ref>.
In the following subsections, we mainly introduce the datasets, models, and metrics in our benchmark on natural robustness for image classification and object detection.
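For concreteness, the corruption procedure can be sketched as below (a minimal Python illustration covering three of the seven noise types; the per-level noise parameters are our own assumptions rather than the exact settings used to build the released datasets).

import numpy as np

def gaussian_noise(img, level):
    # img: float array in [0, 1]; severity levels 1-5 map to increasing std
    std = [0.02, 0.04, 0.08, 0.12, 0.18][level - 1]
    return np.clip(img + np.random.normal(0.0, std, img.shape), 0.0, 1.0)

def salt_pepper_noise(img, level):
    amount = [0.01, 0.02, 0.04, 0.08, 0.12][level - 1]
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < amount / 2] = 0.0       # pepper
    out[mask > 1 - amount / 2] = 1.0   # salt
    return out

def poisson_noise(img, level):
    scale = [60.0, 40.0, 25.0, 15.0, 10.0][level - 1]  # fewer photons, stronger noise
    return np.clip(np.random.poisson(img * scale) / scale, 0.0, 1.0)

Applying every noise type at every severity level to each clean image yields the corrupted counterparts of the original datasets.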
ยง.ยง.ยง Image Classification
Benchmark on natural robustness for image classifiers:
(1) Datasets
AID <cit.> is a large-scale aerial image dataset for classification tasks, comprising sample images acquired from Google Earth imagery.
The AID dataset comprises 10000 aerial images from 30 different scene categories, including airport, bare land, baseball field, beach, bridge, center, church, commercial, dense residential, desert, farmland, forest, industrial, meadow, medium residential, mountain, park, parking, playground, pond, port, railway station, resort, river, school, sparse residential, square, stadium, storage tanks, and viaduct.
The AID is adopted to train aerial image classifiers.
To compute the overall accuracy, the ratios of training and testing sets are fixed at 50% and 50%, respectively.
AID-NN is introduced as a large-scale benchmark dataset for evaluating the natural robustness of image classification in aerial images; it is derived from AID by adding each of the seven natural noises at five severity levels.
All other properties of AID-NN are identical to the original AID.
(2) Classifiers
In order to comprehensively evaluate and investigate the robustness trends across various DNN architectures for image classification, our benchmark endeavors to encompass a wide range of architectures as shown in Table <ref>.
Regarding the CNNs, we select renowned and widely recognized classical network architectures, such as the ResNet series (including various versions of ResNet <cit.>, ResNeXt <cit.>, and WRN <cit.>) and DenseNet <cit.>.
The lightweight ones include MobileNetV2 <cit.>, MobileNetV3 <cit.>, and ShuffleNetV2 <cit.>.
As for the prevalent vision Transformer, Swin Transformer <cit.> and ViT <cit.> are adopted in this benchmark.
(3) Metric
Acc:
The mathematical formula to define the image classification evaluation index "Acc" (Accuracy) is as follows:
Acc = \frac{TP + TN}{TP + TN + FP + FN},
where True Positive (TP) and True Negative (TN) are the numbers of correctly classified positive and negative samples, respectively;
False Positive (FP) and False Negative (FN) are the numbers of incorrectly classified positive and negative samples, respectively.
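In the multi-class setting used throughout the experiments, this reduces to the fraction of correctly classified samples (Top-1 accuracy); a minimal sketch of the computation, for illustration only, is:

import numpy as np

def accuracy(pred_labels, true_labels):
    # Top-1 accuracy; in the binary case this equals (TP + TN) / (TP + TN + FP + FN)
    pred_labels = np.asarray(pred_labels)
    true_labels = np.asarray(true_labels)
    return float((pred_labels == true_labels).mean())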
ยง.ยง.ยง Object Detection
Benchmark on natural robustness for object detectors:
(1) Datasets
DOTA <cit.> is a large-scale benchmark dataset for object detection in aerial images, which contains 15 common categories, 2,806 images (image width range from 800 to 4,000), and 188,282 instances.
The proportions of the training set, validation set, and testing set in DOTA are 1/2, 1/6, and 1/3, respectively.
DOTA is adopted to train aerial detectors after cropping[Image cropping tool: <https://github.com/CAPTAIN-WHU/DOTA_devkit>] the images to 1024×1024.
DOTA-NN is introduced as a large-scale benchmark dataset for evaluating the natural robustness of object detection in aerial images; it is derived from DOTA (after cropping) by adding each of the seven natural noises at five severity levels.
All other properties of DOTA-NN are identical to the original DOTA.
(2) Detectors
Our benchmark includes an array of prominent object detectors as shown in Table <ref>, including YOLOv2 <cit.>, YOLOv3 <cit.>, YOLOv5 <cit.>, SSD <cit.>, Faster R-CNN <cit.>, Swin Transformer <cit.>, Cascade R-CNN <cit.>, RetinaNet <cit.>, Mask R-CNN <cit.>, FoveaBox <cit.>, FreeAnchor <cit.>, FSAF <cit.>, RepPoints <cit.>, TOOD <cit.>, ATSS <cit.>, and VarifocalNet (VFNet) <cit.>.
Technically, our benchmark encompasses both one-stage (YOLO <cit.>, SSD <cit.>, etc.) and two-stage detectors (Faster R-CNN <cit.>, Cascade R-CNN <cit.>, etc.), as well as CNN-based (YOLO <cit.>, Faster R-CNN <cit.>, etc.) and Transformer-based (Swin Transformer <cit.>) detectors.
Furthermore, our benchmark also evaluates the performance of anchor-based detectors (Cascade R-CNN <cit.>, RetinaNet <cit.>, ) and anchor-free detectors (FreeAnchor <cit.>).
(3) Metric
mAP:
We use mean average precision (mAP) as the evaluation metric of object detection.
The mathematical definition formula of mAP is written as follows:
mAP = \frac{1}{n}\sum_{i=1}^{n} AP_i,
where n is the number of object categories being detected, AP_i is the average precision (AP) of the i-th category, which is calculated as:
AP_i = \int_{0}^{1} p_{\mathrm{interp}}(r)\, dr,
where p_interp(r) is the interpolated precision at a certain recall level r, and is defined as:
p_{\mathrm{interp}}(r) = \max_{\tilde{r} \geq r} \tilde{p}(\tilde{r}).
Here, \tilde{p}(\tilde{r}) is the precision at a certain recall level \tilde{r}, and the interpolation is done by taking the maximum precision value over all recall levels greater than or equal to r.
The AP is calculated by averaging the precision values at all the recall levels at which there is a correct detection.
In practice, the mAP is typically calculated for a range of intersections over union (IoU) thresholds and then averaged over those thresholds.
For example, mAP@[.50:.05:.95] means that the mAP is calculated by taking the mean of the AP scores at IoU thresholds of 0.5, 0.55, 0.6, ..., 0.95.
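The computation can be summarized with the following sketch (an all-point interpolated AP in Python; this is our illustration, not the benchmark's exact evaluation code):

import numpy as np

def average_precision(recall, precision):
    # All-point interpolation: integrate p_interp(r) = max_{r' >= r} p(r') over recall
    r = np.concatenate(([0.0], np.asarray(recall), [1.0]))
    p = np.concatenate(([0.0], np.asarray(precision), [0.0]))
    for i in range(len(p) - 2, -1, -1):   # running maximum from the right
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_ap(ap_per_class):
    # mAP averages AP over categories; mAP@[.50:.05:.95] further averages the result
    # over IoU thresholds 0.50, 0.55, ..., 0.95
    return float(np.mean(list(ap_per_class)))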
ยง.ยง Adversarial Robustness
In this section, we mainly introduce the adversarial attacks, datasets, models, and metrics in our benchmark on adversarial robustness for image classification and object detection.
ยง.ยง.ยง Image Classification
Benchmark on adversarial robustness for image classifiers:
(1) Attacks
In this benchmark, we evaluate adversarial robustness with 5 digital attacks, including Fast Gradient Sign Method (FGSM) <cit.>, AutoAttack (AA) <cit.>, Projected Gradient Descent (PGD) <cit.>, C&W <cit.>, Momentum Iterative FGSM (MIFGSM) <cit.>.
A detailed description of these attack methods is provided in Sec. <ref>.
Furthermore, we conduct the aforementioned attacks in both white-box and black-box conditions.
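As an illustration of the simplest of these attacks, a minimal PyTorch-style FGSM sketch is given below (the perturbation budget and model handle are placeholders, not the settings used in the benchmark):

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=4 / 255):
    # One-step FGSM: move the input along the sign of the loss gradient
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return torch.clamp(adv, 0.0, 1.0).detach()

In the white-box condition the adversarial examples are crafted and evaluated on the same model, whereas in the black-box condition they are crafted on a proxy model and evaluated on a different victim model; iterative attacks such as PGD and MIFGSM repeat this gradient step several times.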
(2) Datasets
AID <cit.> dataset is used for crafting adversarial examples to test the adversarial robustness of aerial image classifiers.
AID-AN is introduced as a large-scale benchmark dataset for evaluating the adversarial robustness of image classifiers in RS; it is derived from AID by adding four different types of adversarial noise.
All other properties of AID-AN are identical to the original AID.
(3) Models and (4) Metric are the same as their counterparts for natural robustness described in Sec. <ref>.
ยง.ยง.ยง Object Detection
Benchmark on adversarial robustness for object detectors:
(1) Attacks
We evaluate adversarial robustness with 4 patch-based attacks, including CBA <cit.>, APPA (on) <cit.>, APPA (outside) <cit.>, and the method introduced by Thys in <cit.>.
Detailed information on these representatives and SOTA attacks against object detection is provided in Sec. <ref>.
In addition, we test the aforementioned SOTA methods under both white-box and black-box conditions, and conduct experiments in both the digital and physical domains.
(2) Datasets
DOTA <cit.> dataset is used for training the victim (white-box) or proxy (black-box) models, the aerial detectors to be attacked, same as its role in Sec. <ref>.
RSOD[<https://github.com/RSIA-LIESMARS-WHU/RSOD-Dataset->] is adopted to train the adversarial patches, which contains aircraft (4993 aircraft in 446 images), oil tank (1586 oil tanks in 165 images), playground (191 playgrounds in 189 images) and overpass (180 overpasses in 176 images).
In addition, we craft adversarial examples by adding adversarial patches generated by the aforementioned attack methods to perform digital attacks. The different patch settings for digital attacks are shown in Fig. <ref>.
For physical attacks, the elaborated adversarial patches are printed to disturb the targets of interest in the physical real-world scenarios. The different patch settings for physical attacks are shown in Fig. <ref>.
(3) Models are the same as their counterparts for natural robustness described in Sec. <ref>.
(4) Metrics
For digital attacks, we employ the detection results obtained from the clean images as the reference for calculating the AP.
Specifically, the AP of the clean dataset is set as 100% to ensure that targets missed by the original detector are not regarded as successful attacks.
For physical attacks, we conducted experiments scaled at a 1:400 proportion to verify the attack performance in the physical world.
Technically, we trained 20 mainstream object detectors as victim or proxy models and recorded the average confidence of 18 aircraft, with the detection threshold set to 0.2.
Targets with detection confidence lower than 0.2 are regarded as unrecognized because the confidence threshold of the object detection task is usually set to around 0.45.
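The two evaluation protocols can be summarized by the following sketch (our paraphrase of the procedure described above; the helper names are hypothetical):

import numpy as np

def relative_ap(ap_attacked, ap_clean):
    # Digital attacks: the clean detections serve as the reference, i.e. the clean AP is
    # normalized to 100%, so targets missed by the original detector do not count as hits
    return 100.0 * ap_attacked / max(ap_clean, 1e-12)

def physical_attack_summary(confidences, threshold=0.2):
    # Physical attacks: average confidence over the aircraft targets; targets whose
    # confidence falls below the threshold are treated as unrecognized
    confidences = np.asarray(confidences, dtype=float)
    return float(confidences.mean()), int((confidences >= threshold).sum())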
ยง EXPERIMENTS
In this part, we present the experimental results and deep analysis of benchmarking natural robustness and adversarial robustness in Sec. <ref> and Sec. <ref>, respectively.
ยง.ยง Natural Robustness
ยง.ยง.ยง Image Classification
In this section, we evaluate the natural robustness of the 23 RSI classifiers introduced in Sec. <ref> with AID RSI dataset <cit.> and its derived version with various natural noises.
We show the classification results in Fig. <ref>.
Please note that all the evaluation results presented in this part represent the Top-1 accuracy.
Based on the experimental results, we have the following observations:
* Noise type. The impact of various types of natural noise on classifiers exhibits varying degrees of influence.
Specifically, random noise exerts the most significant impact on classification accuracy, resulting in the greatest reduction in model performance.
In comparison, the classifiers are more robust to other noises.
* Noise level. As expected, for both CNNs and Transformers, an increase in the intensity of noise across all types results in a corresponding escalation of its impact on the model, thereby leading to a more pronounced reduction in classification accuracy.
* Model type. Evidently, when subjected to different types of noise, Transformers exhibit greater resilience compared to CNNs.
For CNNs, the robustness of lightweight networks (MobileNet and ShuffleNet) is slightly lower than other models.
* Model size. When holding the model structure constant, it is apparent that larger models possess stronger robustness, such as ResNet, MobileNetv2, MobileNetv3, etc.
However, it is important to note that there may be exceptions to this general trend, such as DenseNet and Swin, which could be attributed to overfitting.
ยง.ยง.ยง Object Detection
In this section, we evaluate the natural robustness of various mainstream RSI object detectors introduced in Sec. <ref> with large-scale aerial detection dataset DOTA and its derived version
corrupted with various natural noises.
We show the experimental results in Fig. <ref> and Fig. <ref>.
Please note that we adopt mAP ([email protected] and mAP@[.50:.05:0.95]) as the evaluation metric.
Based on the experimental results, we have the following observations:
* Noise type. Similarly, the influence of different types of natural noise on aerial detectors varies to a certain degree.
Specifically, random noise and salt-pepper noise exert the most significant impact on aerial detectors, followed by Gaussian noise and Poisson noise.
In comparison, rain, snow, and fog have a relatively lesser impact on the performance of the detectors.
* Noise level. Consistent with expectations, all of the aerial detectors exhibit a consistent pattern: as the intensity of noise increases across all types, its impact on the detectors also intensifies, resulting in a more significant decline in detection performance.
Compared with the natural weather noises, changes in the level of the remaining noises have a greater impact on detection accuracy.
* Model type. Evidently, YOLOv5 and YOLOv3 are significantly more robust than the other detectors and achieve better detection performance, followed by Swin Transformer, which is slightly more resilient than the remaining aerial detectors.
In addition, it is hard to tell the difference between the robustness of different types of aerial detectors, such as CNN-based and Transformer-based, anchor-based and anchor-free, and one-stage and two-stage.
* Model size. Generally speaking, when the model structure is held constant, as with YOLOv5, it becomes evident that larger model sizes exhibit greater robustness, the same as for image classifiers.
However, in several cases, YOLOv5l (the second largest detector) outperforms YOLOv5x (the largest detector); overfitting may be a contributing factor to this phenomenon.
ยง.ยง Adversarial Robustness
ยง.ยง.ยง Image Classification
In this section, we evaluate the adversarial robustness of the 23 RSI classifiers introduced in Sec. <ref> with AID RSI dataset <cit.> and its derived version with various adversarial noises.
We show the classification attack results of FGSM, AutoAttack, PGD, C&W, and MIFGSM in Fig. <ref>,<ref>, <ref>, <ref>, and <ref>, respectively.
Moreover, we also conduct experiments on FGSM with different perturbation sizes, small, middle, and large, as shown in Fig. <ref>.
Please note that in the attack results figure, the diagonal position indicates the cases where the victim model and the proxy model are consistent, reflecting the results of white-box attacks.
In contrast, the remaining positions in the figure correspond to the results of black-box attacks, where the victim model and the proxy model do not align.
Based on the experimental results, we have the following observations:
* Noise type. The impact of various types of adversarial noise on classifiers exhibits varying degrees of influence.
Specifically, MIFGSM shows the best attack performance in both white-box and black-box conditions, followed by AutoAttack.
In contrast, FGSM and C&W attacks are found to have the least detrimental impact on classifiers, particularly in black box scenarios, rendering them nearly ineffective.
* Noise level. Consistent with expectations, for both CNNs and Transformers, the efficacy of adversarial noise increases as its intensity escalates, regardless of whether the attacks are conducted in white-box or black-box settings.
Specifically, under white-box settings, FGSM can successfully execute attacks, while the classification accuracy of most classifiers remains higher than 50%.
However, in black-box conditions, FGSM is found to be largely ineffective.
As the perturbation amplitude increases, the accuracy of most classifiers drops below 30%, indicating a substantial reduction in classification accuracy.
Even under black-box attacks, the classification accuracy is then compromised to a moderate extent.
* Model type. Transformers, such as Swin Transformer and ViT, exhibit a higher level of resilience compared to CNNs when facing various adversarial attacks, particularly in black-box scenarios.
This implies that perturbations trained on CNNs do not transfer well to Transformers, and vice versa.
For all classifiers, white-box attacks consistently outperform black-box attacks, and the generated perturbations demonstrate superior attack transferability across different versions of the training model. Specifically, perturbations trained on ResNet-50 exhibit good transferability to ResNet-18, ResNet-34, ResNet-101, and ResNet-152.
Furthermore, lightweight networks are easier to fool in white-box settings, while the corresponding perturbations exhibit lower attack transferability compared to other models.
* Model size. It is observed that model size does not affect attack efficacy in white-box conditions.
Under black-box settings, when keeping the classifiers' network structure constant, it is intuitive that the bigger the neural networks, the stronger the adversarial robustness.
However, it is important to note that the most robust model is usually not the biggest version of the classifiers but the second largest one; this phenomenon could be attributed to overfitting.
ยง.ยง.ยง Object Detection
In this section, we evaluate the adversarial robustness of the 20 RSI object detectors introduced in Sec. <ref> with the DOTA RSI dataset <cit.> and its derived version with various adversarial noises.
We illustrate the evolution of the adversarial patch over time in Fig. <ref>.
The generated adversarial patches are shown in Fig. <ref>.
We show the detection attack results of four physical attack methods in Fig. <ref>, <ref>, <ref>, and <ref>, respectively.
In addition, the digital attack performance is also exhibited in Fig. <ref>.
Evaluation metrics are introduced in detail in <ref>.
Please note that the experimental results presented in this section are partially derived from our previous works <cit.> and <cit.>.
Based on the experimental results, we have the following observations:
* Digital attack.
(1) For attack methods, the attack effects of the four methods exhibit minimal variation in general; however, for YOLOv2, the methods that place the patch outside the target, APPA (outside) and CBA, are less effective than the methods that place the patch on the target, i.e., the method proposed by Thys <cit.> and APPA (on).
Notably, the background patch is positioned outside the targeted objects in the digital test, and a portion of the patch area is sacrificed to mask targeted objects in the physical world.
(2) For detection methods, YOLOv2 is found to be the most vulnerable to attack, even in black-box scenarios.
On the other hand, different versions of YOLOv5 demonstrate robustness across diverse attack settings.
However, detectors such as Faster R-CNN and SSD are comparatively easier to attack and compromise.
In general, the Swin Transformer stands out as the most resilient detector, exhibiting a higher level of resistance against various attacks.
* Physical attack.
(1) For attack methods, CBA exhibits a notable physical attack effect, causing a significant number of detectors to fail to detect any objects in real-world scenarios, which is seldom observed for APPA and <cit.>.
In addition, CBA also shows the best attack transferability, even against robust detectors such as YOLOv3, YOLOv5, and Swin Transformer.
(2) For detection methods, YOLOv5 continues to demonstrate remarkable resilience against attacks compared to other aerial detectors.
However, CBA has proven highly effective in impairing the detection performance of YOLOv5 and shows strong generalization across different versions of YOLOv5.
Similar to the digital attack scenario, YOLOv2 remains the most vulnerable detector in the physical world as well.
Interestingly, it exhibits a certain degree of immunity to adversarial patches placed outside the targets of interest.
* White-box.
(1) The contextual background patches demonstrate a remarkable ability to completely impair the detection performance of several aerial detectors.
Specifically, detectors such as SSD, YOLOv2, YOLOv5n, Cascade R-CNN, RetinaNet, FreeAnchor, FSAF, RepPoints, TOOD, and FoveaBox are unable to recognize any of the protected targets when confronted with these patches.
(2) The remaining detectors are capable of correctly recognizing some of the protected objects; however, they exhibit a significantly lower average confidence, below 0.438.
(3) In contrast, the patches generated by APPA and <cit.> have a smaller impact on misguiding the detectors, resulting in only a slight deviation in the confidence of correct detections.
* Black-box.
(1) Even in the black-box setting, CBA demonstrates effective transferability of its attack efficacy across different aerial detectors, significantly outperforming APPA and the adversarial patches generated by <cit.>.
(2) The CBA trained on YOLOv5n effectively safeguards all the targeted objects, preventing their recognition by YOLOv2, Cascade R-CNN, RetinaNet, FSAF, RepPoints, FoveaBox, and VFNet.
Other attacks, however, do not achieve the same level of success.
(3) Under the CBA attack, the average confidences of all detectors fall below 0.208, demonstrating its remarkable superiority over other physical attack methods.
ยง DISCUSSIONS
The investigation into the robustness of DNNs has witnessed rapid advancements in recent years, particularly in the field of CV and its related applications such as RS.
Despite the significant progress made, there remain several challenging issues that demand further examination and discussion.
In this section, we delve into these challenges and offer insights into potential research directions of CV and RS as follows:
* Explain the generation of adversarial perturbations with neural network training.
The process of training adversarial perturbations shares great similarities with training neural networks.
The key distinction lies in the update mechanism, where pixels within perturbations are adjusted during adversarial perturbation training, while network parameters are updated during network training.
Consequently, the generation of adversarial perturbations is influenced by various factors, including training samples, victim network models, and optimization strategies.
* Enlighten adversarial attacks with victim model.
The victim model plays a crucial role in determining the characteristics of the generated adversarial perturbations, particularly when the training samples and optimization process are fixed.
As a result, when targeting weak detectors like YOLOv2, an attack method may only learn limited information, which might be sufficient for white-box attacks (successfully attacking YOLOv2) but inadequate for targeting more robust models.
This analysis also sheds light on why adversarial patches trained on different versions of the same model exhibit similar pattern styles while differing from others.
* Enlighten adversarial attacks with strategies for strengthening DNNs' performance.
Considering the similarities between perturbation generation and model training, it is worthwhile to explore the effective application of techniques that enhance the performance of DNNs in adversarial attacks.
For instance, methods such as "momentum" introduced in <cit.> and "dropout" discussed in <cit.> have shown the potential in boosting attack efficacy.
Investigating how these techniques, such as training strategies, test augmentations, and so on, can be appropriately utilized in the context of adversarial attacks could provide valuable insights for strengthening attack effectiveness, to further improve the security and resilience of DNN models.
* Bridge the gap between digital and physical attacks.
The majority of existing research primarily concentrates on theoretical analyses of adversarial attacks and their transferability in the digital domain, rendering them ineffective when confronted with real-world physical applications.
However, physical attacks raise substantial security concerns due to their potential implications in practical scenarios.
Therefore, it becomes imperative to bridge the gap between digital and physical attacks by developing techniques capable of effectively translating digital attack strategies into real-world settings.
* Bridge the gap between attacks against different tasks.
The essence of DNNs-based models for visual perception is extracting features, progressing from shallow to deep concepts and from simple to abstract representations.
As a consequence, how to interfere with the feature extraction process in various visual tasks to achieve a universal attack effect is an important and promising direction for research.
By understanding the underlying mechanisms of feature extraction in DNNs, researchers can develop strategies to manipulate and disrupt this process to generate effective adversarial attacks across different visual tasks.
This line of research has the potential to uncover vulnerabilities and weaknesses in DNN models, leading to the development of robust defense mechanisms and improved security in various applications of CV.
* The background features matter more than you think.
The background features of a target are widely acknowledged to play a crucial role in its correct recognition.
However, recent studies <cit.> have demonstrated that intelligent recognition systems based on DNNs can be easily deceived solely by manipulating the background features of the target, even without distorting the target itself at all.
This raises the question of why a well-elaborated intelligent algorithm is so vulnerable to such manipulation.
It suggests that the influence of background features may be more significant than initially anticipated.
Consequently, there is a pressing need to delve deeper into the pivotal role that background features play in CV tasks and to understand their underlying mechanisms.
Such research can provide valuable insights to guide the design of more robust visual perception algorithms and models.
* Background attack in the physical world.
The prevailing physical attacks directed at object detectors primarily focus on the development of perturbations in patch form.
These elaborated adversarial patches are printed and affixed to the surfaces of targeted objects through painting or pasting techniques, thereby compromising the recognition capabilities of intelligent systems operating in real-world environments.
However, the application of patch-based perturbations in the physical realm is accompanied by significant costs and time requirements.
As a viable alternative, background attacks emerge as a promising approach, wherein only the background regions surrounding the targeted objects are manipulated, without any direct alteration of the protected objects themselves.
This approach proves particularly advantageous for scenarios involving small targets, such as object detection in RS applications, where the effectiveness and practicality of adversarial patches are limited.
Furthermore, the practical value of adversarial camouflage in background attacks is of utmost importance to ensure adversarial perturbations' inconspicuousness.
ยง CONCLUSIONS
In this study, we present a comprehensive investigation into the robustness of image classification and object detection in the context of RS.
Our work encompasses an extensive review of existing literature in both CV and RS domains, providing a comprehensive understanding of the research landscape in this area.
Furthermore, we perform a series of extensive experiments to benchmark the robustness of image classifiers and object detectors specifically designed for RS imagery.
We also release the corresponding datasets with various types of noise to facilitate future research and evaluation in this field.
To the best of our knowledge, this study represents the first comprehensive review and benchmarking of the robustness of different tasks in optical RS.
Additionally, we conduct a deep analysis of the experimental results and outline potential future research directions to further enhance the understanding and development of model robustness.
Overall, our work offers a systematic perspective on the robustness of RS models, enabling readers to gain a comprehensive overview of this field and guiding the calibration of different approaches to accelerate the advancement of model robustness.
We also plan to continually update this work by incorporating more details and the latest advancements in the field, to enrich the benchmarking of model robustness in RS.
Improved DeepFake Detection Using Whisper Features
Piotr Kawa, Marcin Plata, Michał Czuba, Piotr Szymański, Piotr Syga
July 31, 2023
===================
With a recent influx of voice generation methods, the threat introduced by audio DeepFake (DF) is ever-increasing. Several different detection methods have been presented as a countermeasure.
Many methods are based on so-called front-ends, which, by transforming the raw audio, emphasize features crucial for assessing the genuineness of the audio sample.
Our contribution contains investigating the influence of the state-of-the-art Whisper automatic speech recognition model as a DF detection front-end.
We compare various combinations of Whisper and well-established front-ends by training 3 detection models (LCNN, SpecRNet, and MesoNet) on a widely used ASVspoof 2021 DF dataset and later evaluating them on the DF In-The-Wild dataset.
We show that using Whisper-based features improves the detection for each model and outperforms recent results on the In-The-Wild dataset by reducing Equal Error Rate by 21%.
Index Terms: audio DeepFake, DeepFake detection, feature extraction, Whisper
ยง INTRODUCTION
Audio DeepFakes (DF) is a collection of deep learning techniques that create artificial speech. These methods may involve creating entirely new sentences using Text-To-Speech or Voice-Cloning (aiming at mimicking speech patterns of a specific person or sounding natural to a human listener) or transferring the qualities of the victim's voice to the attacker's speech, referred to as Voice-Conversionย <cit.>. With the increasing sophistication of deep learning techniques, it has become relatively easy to create audio DeepFakes that are difficult to distinguish from bona fide recordings. Such malicious activities can cause significant harm, including compromising the security of systems protected by speaker recognition and contributing to spreading fake news or defaming an individual's reputation. As a result, developing effective DF detection techniques has become increasingly critical for ensuring the integrity and trustworthiness of audio-based communication systems. This need has been reflected in the recent growth of methods based on deep neural networks that assess the validity of utterancesย <cit.>.
Detecting DeepFake audio is a problem analogous to speech spoofing <cit.>. Despite the superficial similarity, they differ in the targets they deceive: spoofing aims to fool speaker verification systems, whereas DF targets humans. One can easily point out attacks typical for one of the areas (e.g., replay attack for spoofing) that are not present in the other; therefore, these areas are considered separate.
Audio feature extraction is crucial in many applications, like speech recognition or speaker identification. Many existing approaches to DeepFake detection focus on some extracted features instead of a raw waveform. That makes the extraction method vital to DF detection and generates motivation for in-detail investigation. Feature extraction aims to identify an audio signal's key characteristics and emphasize them. Mel-frequency cepstral coefficients (MFCC)ย <cit.> and linear-frequency cepstral coefficients (LFCC)ย <cit.> are some of the most widely used methods. MFCCs are based on the human auditory system's non-linear frequency response. In contrast, LFCCs are designed to address some of the limitations of MFCCs, such as their insensitivity to low-frequency information and lack of robustness to noise.
Whisper <cit.> is a state-of-the-art automatic speech recognition (ASR) system. It was trained on 680,000 hours of content. Due to the data's diversity and magnitude, it has been shown to be robust against a broad spectrum of background interferences, accents, and languages. Its name refers to a family of models that differ, i.a., in the width and size of their layers.
Whisper is based on off-the-shelf encoder-decoder Transformer architectureย <cit.>.
Its encoder is based on two convolutional layers, each processed by a GeLU activation functionย <cit.>. The information is later modified by adding position embeddingsย <cit.>. The encoder ends with a series of the pre-activation residual attention blocksย <cit.>, followed by the normalization layer.
In this work, we harness the feature extraction capabilities of the pre-trained Whisper's encoder not to capture speech properties later used for ASR but to investigate its performance in DF detection.
We use it along with three detection models to verify if neural network-provided features might help in DF detection.
We selected Whisper for the evaluation due to its effectiveness in speech recognition, which comes from the large and diversified speech corpora that it was trained on. As such, we infer that Whisper's features would ignore most of the naturally occurring artefacts and help identify artificially modified speech samples. In particular, it would help with the problem of generalization, which refers to the poor efficacy of the models on the data outside of the training set's distribution, currently one of the most challenging problems in DF detection.
Our experiments cover the smallest available Whisper version, tiny.en. We aim at minimal overhead to ensure that the provided solution can be widely used in production environments. In addition, the model was trained strictly on English data, which is the main language among DF detection datasets.
Please note that bigger Whisper versions were shown to yield even more satisfying results in various tasks (e.g. large performs up to 3x betterย <cit.> in speech recognition or translation than the
tiny.en model). This allows us to expect that the results can be enhanced even further.
Note that Whisper was trained on human speech samples, i.e., a bona fide and thus highly-biased set in the sense of DF detection. However, a similar approach was proposed in <cit.>. The authors used a front-end based on wav2vec 2.0 <cit.> that was originally designed as an unsupervised pre-trained model for representations and used in the task of speech recognition. wav2vec was also trained on data considered bona fide. Fine-tuning this front-end led to a substantial increase in the results and generalization <cit.>.
We choose Whisper for our approach as it significantly improves over wav2vec, not only in the results reported but also in the data scale used for training and self-supervision (over 16x more). While this may play a lesser role for speech recognition as the ratio of importance between the audio model and language model can differ between approaches, we treat Whisper as an audio encoder only and thus expect to see an impact of a much larger dataset used in training.
The codebase related to our research can be found on GitHub: github.com/piotrkawa/deepfake-whisper-features.
ยง DETECTION MODELS AND DATASETS
We consider four models: three processing spectrogram-like features, namely LCNN <cit.>, MesoNet <cit.> (MesoInception-4 variant), and SpecRNet <cit.>, as well as RawNet3 <cit.>, which analyzes raw audio.
The models consist of 467,425 (LCNN), 28,486 (MesoNet), 277,963 (SpecRNet), and 15,496,197 (RawNet3) parameters, respectively.
The SpecRNet used in our comparison differs from its original implementation: to enable processing of the higher-dimensional front-ends, we add adaptive 2D average pooling after the last SeLU layer <cit.>.
A similar modification is made to MesoNet, where we add adaptive 1D average pooling right before the penultimate fully connected layer. In the case of LCNN, we increase the size of the input features and hidden features of the two bi-LSTM layers, and of the input features of the last Linear layer, from 160 to 768.
For the spectrogram-based models, we consider 3 front-ends: LFCC, MFCC, and the output of the Whisper ASR encoder.
We additionally evaluate the concatenated front-ends of cepstral coefficients with Whisper features. The intuition behind it follows <cit.>: concatenation of different front-ends may yield better results.
We use LFCC and MFCC based on the window and hop lengths of 400 and 160; they are composed of 128 coefficients. We concatenate each front-end with its delta and double-delta features, which results in data of shape (128 * 3, 3000).
In our experiments, we use the tiny.en variant of the Whisper model. Its encoder contains 7,632,384 parameters and outputs data of shape (376, 1500). To match it with the size of the other front-ends (required for the concatenated front-end setting), we replicate one of the dimensions, obtaining a tensor of size (376, 3000).
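A sketch of how these front-ends could be assembled is given below (our illustration based on common usage of the torchaudio and openai-whisper packages; the exact calls, and running the encoder on CPU, are assumptions rather than the released training code):

import torch
import torchaudio
import whisper

SR, N_COEFF = 16_000, 128
spec_kwargs = {"n_fft": 400, "hop_length": 160}

mfcc = torchaudio.transforms.MFCC(sample_rate=SR, n_mfcc=N_COEFF,
                                  melkwargs={**spec_kwargs, "n_mels": N_COEFF})
lfcc = torchaudio.transforms.LFCC(sample_rate=SR, n_lfcc=N_COEFF, speckwargs=spec_kwargs)
whisper_model = whisper.load_model("tiny.en", device="cpu")

def cepstral_frontend(waveform, transform):
    feats = transform(waveform)                        # (n_coeff, frames)
    d1 = torchaudio.functional.compute_deltas(feats)
    d2 = torchaudio.functional.compute_deltas(d1)
    return torch.cat([feats, d1, d2], dim=0)           # (n_coeff * 3, frames)

def whisper_frontend(waveform):
    mel = whisper.log_mel_spectrogram(whisper.pad_or_trim(waveform))  # 30 s of mel frames
    with torch.no_grad():
        return whisper_model.encoder(mel.unsqueeze(0)).squeeze(0)     # frozen encoder output

A concatenated front-end is then obtained by stacking the (suitably replicated) Whisper features with the cepstral ones before feeding them to the detection model.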
The dataset used in the paper consists of 125,000 samples randomly selected from ASVspoof 2021 (DF)ย <cit.> and all 31,779 samples of DeepFakes In-The-Wildย <cit.>. The decision is motivated by the general scarcity of DF datasets, of which ASVspoof is among the largest and most popular. In contrast, the latter dataset consists of samples reflecting real-world scenarios (being gathered from the Internet). To emulate the scenario in which architectures are developed using training on the most popular datasets, they should be effective in the actual environment while determining the authenticity of new samples, possibly distorted by noise.
Even though several models achieve high efficacy on popular datasets like ASVspoof or WaveFakeย <cit.>, the investigation inย <cit.> showed that those methods do not generalize well to unknown, real-world samples. The EER of LCNN evaluated on the In-The-Wild dataset increased by up to 1000%, and even more for RawNet3. Naturally, one of the solutions might be similar to the one presented inย <cit.>, mixing the datasets used so that multiple creation methods may be recognized. Such an approach may be infeasible in practical scenarios, where new DF creation methods should also be detected. To mimic the practical scenario in our investigation, we decided to train the models on a well-established DF dataset, as an end-user would, and later test it on the dataset that reflects a possible real-world sample to verify.
ยง EXPERIMENTAL SETUP
Each sample underwent a standardized preprocessing procedure. It covered resampling to 16 kHz mono-channel, removing silences longer than 0.2 s, and padding (by repetition) or trimming samples to 30 s of content. For two reasons, we decided on an input length of 30 s instead of the typical length of about 4 s (<cit.>). Firstly, works like <cit.> show that analyzing longer utterances yields better results. Secondly, Whisper takes as input 30 s of content: samples are trimmed or padded with zeros. Instead of padding it with zeros, we decided to fill the whole input tensor with speech.
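A minimal version of this preprocessing might look as follows (our sketch; the simple energy gate stands in for proper silence removal and its threshold is an assumption):

import torch
import torchaudio

TARGET_SR = 16_000
MAX_LEN = 30 * TARGET_SR

def preprocess(path, silence_thresh=1e-3, min_silence=0.2):
    wav, sr = torchaudio.load(path)
    wav = wav.mean(dim=0)                                          # mono
    if sr != TARGET_SR:
        wav = torchaudio.functional.resample(wav, sr, TARGET_SR)   # 16 kHz
    win = int(min_silence * TARGET_SR)                             # 0.2 s windows
    chunks = [wav[i:i + win] for i in range(0, len(wav), win)]
    voiced = [c for c in chunks if c.abs().mean() > silence_thresh]
    wav = torch.cat(voiced) if voiced else wav
    if len(wav) < MAX_LEN:                                         # pad by repetition
        wav = wav.repeat(MAX_LEN // len(wav) + 1)
    return wav[:MAX_LEN]                                           # trim to 30 s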
We trained models on a random subset of 100,000 training and 25,000 validation samples of the ASVspoof 2021 DF dataset. We used a subset of this dataset for two reasons: we wanted to make our solution possible to train on a single GPU in about 24 hours; moreover, we did not anticipate a significant gain from the samples' quantity for an architecture like Whisper tiny. We addressed the disproportion between bona fide and fake classes with oversampling.
We used a learning rate of 10^-4 and a weight decay of 10^-4 for all spectrogram-based models. RawNet3 used a learning rate of 10^-3 and a weight decay of 5ยท 10^-5. We trained models with a binary cross-entropy function for 10 epochs with a batch size of 8. Training of RawNet3 included SGDR schedulingย <cit.> with a restart after each epoch.
The checkpoint of the highest validation accuracy was selected for later tests on the full In-The-Wild dataset. We present our results using Equal Error Rate (EER) metric as a fraction. EER is commonly used in DF and spoofing problems. To ensure reproducibility, we ran each process with a fixed randomness seed. Each experiment was run on a single NVIDIA TITAN RTX GPU (24GB VRAM).
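For reference, the EER reported below can be computed with a simple threshold sweep (an illustrative sketch, assuming that higher scores indicate fake samples):

import numpy as np

def equal_error_rate(scores, labels):
    # labels: 1 = fake, 0 = bona fide; higher score = more likely fake
    scores, labels = np.asarray(scores), np.asarray(labels)
    eer, best_gap = 1.0, np.inf
    for t in np.unique(scores):
        pred_fake = scores >= t
        far = np.mean(pred_fake[labels == 0])    # bona fide flagged as fake
        frr = np.mean(~pred_fake[labels == 1])   # fakes missed
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return float(eer)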
ยง BENCHMARKS
ยง.ยง Baseline comparison
Our baseline comparison covered the LCNN, MesoNet, SpecRNet, and RawNet3 models. We tested the LFCC, MFCC, and Whisper-encoder front-ends. The encoder was not optimized (the weights were frozen), i.e., we used it purely as a feature extractor, relying solely on its pre-trained features.
Note that the results presented in Tab. <ref>, similarly to those reported in <cit.>, deviate from the low errors typically reported in the DeepFake detection literature. This phenomenon is caused by training the models on an 'artificial' dataset created in a controlled manner (ASVspoof), whereas the evaluation is done on real-world samples from the In-The-Wild dataset <cit.>. The distribution of the artifacts in both sets differs significantly. As shown, the models do not have sufficient generalization capabilities, and when verified on recordings of a substantially different nature, their detection capabilities deteriorate significantly. Additionally, note that both in the case of <cit.>, which was trained on the LA subset of ASVspoof 2019 <cit.>, and in this paper (trained on the DF subset of ASVspoof 2021), some of the models perform worse than random guessing (EER=0.5).
These results do not undermine the models in the traditional setup. In fact, LCNN with LFCC front-end and SpecRNet with MFCC, i.e., two architectures that achieved the worst results on In-The-Wild dataset, scored a satisfying EERs of 0.0149 and 0.0218 during validation on ASVspoof 2021.
Notably, in the case of all detection models, LFCC and MFCC front-ends tend to provide features that were well-suited in the case of ASVspoof, and when trained and verified on the data from the same source have high efficacyย <cit.>, yet do not occur regularly in the DeepFakes from the In-The-Wild dataset. We discuss the nature of the extracted features in more depth in Sect.ย <ref>.
Interestingly, the smaller architectures (SpecRNet and MesoNet) seem to generalize better and provide higher efficacy than LCNN. In fact, MesoNet (MFCC variant) achieved the lowest EER. These results are similar to the ones reported in <cit.>, where the model achieved the best results among the spectrogram-based architectures. One of the reasons may be the lower number of parameters, which results in a lesser degree of 'adjusting' towards the artifacts specific to the ASVspoof datasets and thus higher generalization capabilities.
Using Whisper-based features helps with generalizing. In the case of SpecRNet, we achieve a 29.71% improvement in comparison with LFCC and 47.17% in comparison with MFCC. In the case of LCNN, we improve both by 54% and 47.25%. Following the intuition that additional information may improve the detection, we investigate the synergy of the feature extractors inย Sect.ย <ref>. Moreover, to improve the results even further and to address the worse results in the case of MesoNet, we decided to unfreeze the model (Sect.ย <ref>).
*Constant Q-cepstral coefficients
One of the popular spectrogram-based front-end used in speech and audio signal processing is Constant Q-cepstral coefficients (CQCC)ย <cit.>. Works likeย <cit.> tested these features for spoofing detectors trained on ASVspoof 2021 (LA). To provide an extensive investigation of different front-ends, we used the CQCC with the LCNN model. However, when training the architecture with the same parameters as other feature extractors, the results on ASVspoof 2021 DF were unsatisfactory, achieving only around 60% accuracy on train and test datasets. Consequently, we did not proceed with training other models using CQCC or evaluating them on the In-The-Wild dataset.
ยง.ยง Concatenated front-ends
Works likeย <cit.> showed that using the concatenation of multiple front-ends could increase the detectors' effectiveness. The discussed pipeline did not differ from the one in Sect.ย <ref>. We considered spectrogram-based models and used them with a concatenation of the classical front-ends and Whisper's encoder.
The comparison between the results of concatenated features (Tab. <ref>) and a single feature extractor (cf. Tab. <ref>) shows that LCNN and SpecRNet models based on cepstral front-ends improve when trained with Whisper features. Detection improves by up to 40.32% for SpecRNet and up to 19.15% for LCNN.
This suggests some positive synergy between the features and that additional knowledge is gained. However, the synergy does not guarantee that the results of joint-features detection outperform the Whisper-based. This may be caused by 'covering' some of the important Whisper features by the spectrogram-based front-ends. We investigate this issue in Sect.ย <ref>.
Conversely, one may notice a negative synergy between the Whisper-extracted features and the others (not a substantial one in the case of MFCC and significant in the case of LFCC). One may assume that the antagonistic effect is due to the architecture details and 'compression' of the provided information, which results in performing the classification based on the 'noised' information rather than an enhanced set of features. In Sect.ย <ref>, we show how the extracted features may be presented and discuss the issue further.
ยง.ยง Whisper fine-tuning
The following benchmark concerned models using Whisper's encoder. This time, however, we did not treat the encoder strictly as a front-end algorithm but rather fine-tuned it for the problem of DF detection. The intuition was the following: while the features produced by Whisper tend to provide better performance than the typically used front-ends (cf. Tab. <ref>), this model was trained for a different purpose (ASR), and on data that is biased in the sense of DF detection. We assume that an encoder fine-tuned to a specific task, in our case DeepFake detection, might yield even better results.
For this purpose, we fine-tuned models using the Whisper front-end and evaluated the results with a fine-tuned version of the feature extractor.
After the initial training presented in Sect. <ref>, we trained the models for an additional 5 epochs. This time, however, we unfroze the Whisper layers and performed fine-tuning with a learning rate of 10^-6.
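The two-stage procedure can be sketched as follows (illustrative PyTorch code; the stand-in modules and the choice of Adam are our assumptions, while the learning rates follow the text):

import itertools
import torch
import torch.nn as nn

whisper_encoder = nn.Linear(384, 384)   # stand-in for the Whisper tiny.en encoder
classifier = nn.Linear(384, 1)          # stand-in for the detection model

# Stage 1: the encoder is frozen and used purely as a front-end (initial 10 epochs)
for p in whisper_encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4, weight_decay=1e-4)

# Stage 2: unfreeze the encoder and fine-tune everything for 5 more epochs with a small LR
for p in whisper_encoder.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(
    itertools.chain(whisper_encoder.parameters(), classifier.parameters()), lr=1e-6)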
One may note that for all architectures (with the notable exception of SpecRNet with Whisper and LFCC features, where we got results worse by less than 9%), the fine-tuning provided an improvement. Notably, unfrozen Whisper features allowed us to improve even the previous best result, MesoNet with MFCC features, by 14.69%. The best model we obtained, MesoNet with fine-tuned Whisper+MFCC, scored an EER of 0.2672. This surpasses the state-of-the-art results reported in <cit.>, where the authors obtained 0.3394 EER evaluating RawNet2 <cit.> on DeepFake In-The-Wild after training the model on 4 s samples from ASVspoof 2019 <cit.>. Our results indicate that unfreezing the model and using Whisper-extracted features may improve the results of detecting DeepFakes from a significantly different distribution than the set the model was trained on, which would address the vital issue of generalization.
ยง FEATURES COMPARISON
In order to check if the different front-ends indeed generate different features, we analyze which parts of the input data most significantly affect the detection results. Additionally, we compare the two architectures that achieved the highest results: SpecRNet, containing a recurrent layer (GRU), and MesoNet, which primarily consists of convolutional and max-pooling layers. We use a technique known from adversarial attacks <cit.> that relies on calculating the gradient on the input data.
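A minimal version of this gradient-based saliency computation is given below (our sketch; the model and target class are placeholders):

import torch

def input_saliency(model, features, target_class):
    # Gradient of the class score w.r.t. the input front-end; large magnitudes mark the
    # regions of the input that most strongly influence the decision
    features = features.clone().detach().requires_grad_(True)
    score = model(features.unsqueeze(0))[0, target_class]
    score.backward()
    return features.grad.abs()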
We observed that the choice of the model's architecture has great importance in processing the front-ends. As MesoNet consists of 4 max-pooling layers, the final linear layers of the model receive max-pooled information from spatially distributed blocks of size 32×32 (see Fig. <ref>). In turn, information in SpecRNet is processed using the GRU, and the decision is mainly impacted by the end part of the signal (Fig. <ref>). That said, the models utilizing Whisper features often rely on characteristics extracted from one or more narrow signal slices (see the two peaks around 220k and 360k in Fig. <ref>). We suppose that Whisper works well with recurrent NNs because it extracts prominent attributes that do not tend to be hidden when passing through the recurrent sequence.
As we have not assessed any mechanism for spatially-independent processing of the combination of front-ends, our findings suggest a negative impact of Whisper features on other front-ends. Specifically, models that solely rely on MFCC or LFCC for decision-making tend to prioritize the lower band of the front-end and disregard delta and double-delta features (Fig.ย <ref> andย <ref>). Conversely, the 2D Whisper features show the most significant impact of the full band, with a noticeable focus on the middle (Fig.ย <ref>). Fine-tuning Whisper features obtain further performance improvement; surprisingly, in this case, the importance of delta and double-delta features increases (Fig.ย <ref>). We suppose that using the Whisper front-end effectively captures the speech features, and including deltas could enhance the results by providing additional coefficients to describe the spectrum dynamic.
ยง SUMMARY
In this paper, we show that using Whisperย <cit.> as a feature extractor in DeepFake detection may improve the efficacy of the detection architectures, particularly in the case of evaluation on samples with significantly different distribution than the training set. We trained the models on ASVspoof 2021 DF and evaluated them on the DeepFakes In-The-Wild dataset.
Using fine-tuned Whisper as a sole feature extractor, we achieved an EER of 0.33 ± 0.01 for all the investigated architectures, which is better than the recent results reported in <cit.>. Moreover, using Whisper and MFCC joint features for MesoNet, we obtained an EER as low as 0.2672, showing that even the tiny.en version of Whisper significantly helps with the generalization of DF detection, scoring state-of-the-art results on the In-The-Wild dataset.
In future work, we would like to investigate more Whisper models, following the intuition that larger models used as feature extractors may improve the generalization. Additionally, we are interested in exploring the combinations of front-ends concerning architectures of various models.
ยง ACKNOWLEDGEMENTS
This work has been partially funded by Department of Artificial Intelligence, Wrocław University of Science and Technology.
[email protected]
[email protected]
Department of Physics and Technology, University of Bergen, 5020 Bergen, Norway
The dynamics of a single quantum state embedded in one or several (quasi-)continua is one of the most studied phenomena in quantum mechanics. In this work we investigate its discrete analogue and consider short and long time dynamics based on numerical and analytical solutions of the Schrödinger equation. In addition to derivation of explicit conditions for initial exponential decay, it is shown that a recent model of this class [Phys. Rev. A 95, 053821, (2017)], describing a qubit coupled to a phonon reservoir with energy dependent coupling parameters is identical to a qubit interacting with a finite number of parallel regularly spaced band of states via constant couplings. As a consequence, the characteristic near periodic initial state revivals can be viewed as a transition of probability between different continua via the reviving initial state. Furthermore, the observation of polynomial decay of the reviving peaks is present in any system with constant and sufficiently strong coupling.
Decay and revival dynamics of a quantum state embedded in
regularly spaced band of states
Konrad Tywoniuk
July 31, 2023
==========================================================================================
ยง INTRODUCTION
The decay dynamics of an unstable or excited quantum state has been one of the most studied topics since
the invention of quantum mechanics. Following the pioneering works of Dirac and Landau almost 100 years
ago <cit.>, Wigner and Weisskopf <cit.> derived the characteristic Lorentzian line-shape of the final state probabilities assuming exponential decay of the initial state. The constant transition rate derived in the earliest works was later known as Fermi's "Golden Rule" with reference to his ground-breaking description of nuclear β-decay <cit.>. In the period up till today the number of articles addressing the topic is probably in the order of thousands or more, when counting decay phenomena in subatomic, atomic, molecular and solid state physics.
In the time-dependent regime a constant transition rate, as provided by the "Golden Rule", immediately leads to an exponential decay law using basic statistics of independent events together with a constant rate <cit.>. However, there exist numerous cases where non-exponential dynamics is observed, e.g., in <cit.>, and indeed the systems for which the Hamiltonian yields an exact exponential decay law directly from the time-dependent Schrödinger equation are quite few.
The prime example of exact exponential decay from initial time t=0 up to a fixed time t+Δt appears in a model system with constant coupling from the initial embedded state to a regularly spaced band of states, sometimes representing a discretized continuum. In its simplest form there is no coupling between band states <cit.>. Thus, the interaction matrix contains non-zero elements on a single row and column, which motivates referring to this model in the following as the row-column (RC) model <cit.>. The RC model dates back to the 1930s and was analyzed by Fano <cit.> in 1935. Examples of real systems that can be reasonably well described by the RC model are excited atoms or molecules decaying through one-photon transitions to a lower state, or electron spins interacting with a bath of nuclear spins <cit.>. Very recently, it was shown that initial exponential decay is a universal phenomenon occurring in a wide range of related many body systems <cit.>. It was also shown <cit.> that harmonic energy dependent coupling parameters result in initial exponential decay of an artificial two-level atom interacting with surface acoustic phonon states.
An RC model with a regularly spaced energy band, with energy separation Δ, reproduces the dynamics of a true continuum as long as the product of the absolute square of the coupling parameter |β|^2 and the density of states Δ^{-1} is kept constant in the limit Δ → 0 (Δ itself represents the energy separation between the states in the band, which we assume for now to be uniformly distributed). The discrete and continuum systems then reproduce the same physics up to a maximum time τ = 2π/Δ. From then on, the dynamics of the discrete system separates from its continuous counterpart. A series of revivals of the initial-state probability occurs in the discrete version at times near integer multiples of τ. The revival phenomenon of the RC model was initially discussed in <cit.> and is also well known in other discrete systems, such as the Jaynes-Cummings model in quantum optics <cit.>.
The origin of the revivals of the RC model can be understood from the fact that the dynamics of the initial state may be transformed into a time delay equation <cit.>. Each time the system passes through an integer multiple of τ, a new term is added to the amplitude, which can cause a partial revival. Alternatively, the sequence of imperfect revivals can be understood from the quasi-periodicity which characterizes this particular model when an eigenvector basis is applied. In <cit.> it was shown that revivals also occur with harmonic coupling to a true continuum, in sharp contrast to the infinite value of τ for constant coupling. In this case the revivals occur on the time-scale T of the harmonic modulation of the coupling. The phenomenon was later confirmed in experiments <cit.>.
In this paper we explore the short and the long time dynamics in the RC model for both types of couplings, constant and harmonic, as well as coupling to two distinct bands. The analysis and computations will be based on the eigenvector basis of the full Hamiltonian, which gives a completely analytical expression for the dynamics as long as the energies of the eigenstates are numerically obtained <cit.>. A coherent theoretical analysis, based on the Laplace transform method <cit.>, valid for both coupling types and independent of the inverse density of states, is outlined in parallel. We will show that there exists a close connection between constant couplings to parallel bands and harmonic coupling to a single continuum. This observation leads to the identification of a long-time power law decay of revival peaks in the constant-coupling system which is identical to what was discovered in the harmonic case. By simply including an additional dense background reservoir to account for, e.g., noise and temperature fluctuations, we obtain excellent agreement with the experiment <cit.>. The analysis is carried out in the two following sections, one for each coupling type. The Laplace transform method is provided in the Appendices. Atomic units are applied throughout.
ยง THE RC MODEL WITH CONSTANT COUPLING
Most theoretical methods for solving the Schrödinger equation in a true continuum are based on various analytical methods to obtain a closed form expression of the amplitude describing the embedded decaying initial state. Two classic approaches are the ones of Stey and Gibberd <cit.>, utilizing the Laplace transform, and that of Milonni et al. <cit.>, based on the Poisson summation formula. In both cases a delayed time differential equation for the initial amplitude appears. Our approach is based on a discrete representation of the continuum, diagonalizing the Hamiltonian and expressing the initial state in the eigenvector basis. The true continuum dynamics can be obtained by letting the energy separation go to zero in the final expression, Δ → 0, while keeping the ratio |β|^2/Δ = const, where β is the coupling between the singled-out state a(t) and the band. This approach has some advantages since the eigenvector basis is analytical and also numerically available on almost any computer for 10^4 states or more. Computations of the time development become extremely efficient since only eigenvectors which have a significant overlap with the initial state need to be included in the basis.
ยง.ยง A single state embedded in a single quasi-continuum
Consider an unstable state |a, 0⟩ with energy ε_a which decays to the ground state with energy ε_b via interactions with a (quasi-)continuum. The field free transition frequency is defined as ω_ab = ε_a - ε_b. Accompanying the initial state is a vacuum state of the continuum, indicated by the zero in the initial state ket-vector. Correspondingly, the transition to the ground state implies that some of the continuum states are populated by an energy quantum ε_n = nΔ. For a slow transition only ε_n = ω_ab gets populated after infinitely long times. For finite transition times many eigenvectors may overlap with the initial state. The state vector describing the complete single photon (phonon) transition, |Ψ(t)⟩, is then described by the superposition,
|Ψ(t)⟩ = a(t) |a, 0⟩ + ∑_n b_n(t) |g, nΔ⟩ ,
with a(t), b_n(t) being time dependent probability amplitudes. By assuming that the state |a⟩ couples with strength β_i only to each of the discretized states and that the latter do not couple among themselves, the time dependent Schrödinger equation is transformed into a set of N coupled first order differential equations for the expansion coefficients,
i d/dt ( [ a; b ] ) = ( [ ω_ab C^† ; C Ω ] ) ( [ a; b ] ) .
Here the row vector b contains the N-1 expansion coefficients b_n(t). The sparse coupling Hamiltonian is Hermitian with nonzero elements defined by the row vector C^† = {β_1^*, β_2^* ... β_{N-1}^*}, and the energy levels of the discretized continuum are given by the diagonal matrix Ω = diag{ε_1, ε_2, ...}. It is straightforward to solve Eq. (<ref>) numerically even for on the order of thousands to hundreds of thousands of equations. Alternatively, and in general more computationally efficiently, we may diagonalize the coupling matrix once and obtain the time dependent amplitudes explicitly. The survival amplitude takes the form,
a(t) = ⟨a|Ψ(t)⟩ = ∑_{n=-∞}^{∞} |⟨a|v_n⟩|^2 e^{-iE_n t} ,
where |v_n⟩ is the n'th eigenvector with energy E_n and a(0)=1. The simple structure of the coupling matrix leads to analytical eigenvectors and eigen-energies which can be obtained from a simple implicit equation for each eigenvalue,
E_m - ω_ab = ∑_{n=-∞}^{∞} |β_n|^2/(E_m - ε_n) .
As the right-hand side has poles at E_m = ε_n, we obtain exactly one eigenvalue in each interval
(ε_n, ε_{n+1}). Furthermore, the set of energies E_m - ω_ab is symmetrically
distributed around ω_ab. In the case of an unbounded continuum the effect of the
detuning is simply to shift the distribution of populated final amplitudes b_n around centre eigenvalues from E_m ∼ ω_ab to E_m ∼ 0.
Without loss of generality we therefore ignore the detuning in the following.
The positive eigen-energies can then be expressed as E_m = mΔ + δ_m for integer m ≥ 0, and we have E_{-m} = -E_m <cit.>. With the eigenvalues at hand, we obtain the eigenvectors
v_m = (1/x_m) ( [ 1; β_1/(E_m-ε_1); β_2/(E_m-ε_2); ... ] ) ,
with
x_m^2 = 1 + ∑_{j=-∞}^{∞} |β_j|^2/(E_m - ε_j)^2 .
In <cit.> it is shown that when all coupling elements are identical, β_j = β for all j,
the summation above can be carried out, yielding
∑_{j=-∞}^{∞} |β|^2/(E_m - ε_j)^2 = [πβ/(Δ sin(πE_m/Δ))]^2 .
It follows that the implicit eigenvalue equation in Eq. (<ref>) can be written as
E_m = πβ^2/[Δ tan(πE_m/Δ)] .
The normalization factor x_m can then be brought to the form,
x_m^2 = (1/β^2)[ β^2 + (πβ^2/Δ)^2 + E_m^2 ] ,
which leads to a survival amplitude expressed as,
a(t) = ∑_{n=1}^{∞} [2β^2/(β^2 + (γ/2)^2 + E_n^2)] cos(E_n t) ,
where we introduced the rate γ ≡ 2πβ^2/Δ.
Exponential decay is obtained in the limit Δ → 0 while requiring that the rate γ is kept constant. The first term in the denominator
then vanishes and we can take the continuum limit of the sum over band energies. The resulting integral over energy gives exponential decay for all times,
a(t) = (γ/π) ∫_0^∞ dE cos(E t)/[(γ/2)^2 + E^2] = e^{-tγ/2} .
Hence, the probability of the state obeys Fermi's "Golden Rule", namely P_a(t) = |a(t)|^2 = e^{-tγ}.
For true discrete systems the replacement of the sum with an integral is less straightforward. The coupling β may well be much larger than γ in Eq. (<ref>), which prohibits the transition to Eq. (<ref>). Thus, one may be led to the conclusion that the decay dynamics is entirely non-exponential. However, there is always a finite time period t ∈ (0, τ), with τ ≡ 2π/Δ, where the decay follows Eq. (<ref>) exactly. A large Δ, or a small rate γ, then implies that the decay is only partial since |a(τ)|^2 > 0.
The proof of the preceding statements is simple: A system with fixed and finite separation between the energy levels can, in the period t ∈ (0, τ), be replaced with a factor k denser quasi-continuum. The decay dynamics of both systems are then exactly equal. In the dense band the coupling is given by β_k = β/√(k). Since k can be taken arbitrarily large, the factor β_k^2 in the denominator eventually vanishes and the validity of the transition from Eq. (<ref>) to (<ref>) is restored up to t = τ. For t > τ the dynamics of the original (with spacing Δ) and dense (with spacing Δ_k) systems diverge.
The general amplitude for the embedded state of a true discrete system can now be written in terms
of t = mτ + t_m,
a(t) = ∑_n [β^2/(β^2 + (πβ^2/Δ)^2 + E_n^2)] e^{-i m δ_n τ} e^{-i E_n t_m} ,
where 0 < t_m < τ.
This expression is identical to the solution based on delayed time differential equations below.
In Fig. <ref> we display some characteristics of the solution. In the upper panel we observe the characteristic initial dynamics for two different sets of coupling parameters and densities of states, both plotted in scaled time units t/τ. The revivals initially occur close to integer times, the height of the revival probability peak seems to be independent of the coupling strength, while the "lifetime" of the revival region goes from wide to spike-like as the coupling strength is gradually increased. No matter how large the coupling becomes, the initial decay is always exponential and given by Eq. (<ref>), as illustrated in the middle left panel of Fig. <ref>. Likewise, collecting the peaks in the strongest coupling case (red curves) and plotting them in a log-log plot, we observe a near polynomial decay for a large number of peaks (from the second revival and outwards in time). Finally, to be discussed below, we plot the dynamics after a large number of revival times (500) in the lowermost panel. Here we see an almost chaotic dynamics building up towards the maximum revival peak. The position of the peak on the time axis has now been clearly shifted beyond the integer multiples (500, 501, ...) of τ. At even larger multiples of τ, e.g. 10^4, the signature of the revivals gets more and more blurred and will, due to the true non-periodicity of the system, never return towards its initial prominence.
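To make the eigenvector-basis construction above concrete, the following minimal sketch (in Python with NumPy; the band size, coupling strength and time grid are illustrative choices, not values taken from the paper) builds the RC Hamiltonian with constant coupling, diagonalizes it once, and evaluates the survival probability |a(t)|^2 from the spectral sum, reproducing the initial exponential decay and the revivals near integer multiples of τ = 2π/Δ.

import numpy as np

# Illustrative parameters (not taken from the paper)
N_band = 1201                          # number of band states, symmetric around zero energy
delta = 0.05                           # level spacing Delta
beta = 0.05                            # constant coupling beta
gamma = 2 * np.pi * beta**2 / delta    # rate gamma = 2*pi*beta^2/Delta
tau = 2 * np.pi / delta                # revival time tau = 2*pi/Delta

# RC Hamiltonian: first basis vector is |a,0>, the rest are band states |g, n*Delta>
n = np.arange(-(N_band // 2), N_band // 2 + 1)
H = np.zeros((N_band + 1, N_band + 1))
H[0, 1:] = beta                        # couplings <a|H|g,n>
H[1:, 0] = beta                        # Hermitian conjugate column
idx = np.arange(1, N_band + 1)
H[idx, idx] = n * delta                # band energies epsilon_n = n*Delta (zero detuning)

# Diagonalize once and form a(t) = sum_m |<a|v_m>|^2 exp(-i E_m t)
E, V = np.linalg.eigh(H)
w = np.abs(V[0, :])**2
t = np.linspace(0.0, 3.0 * tau, 3000)
a_t = (w[None, :] * np.exp(-1j * np.outer(t, E))).sum(axis=1)
P_a = np.abs(a_t)**2

# Early times follow exp(-gamma*t); revivals appear near t = tau, 2*tau, ...
i_chk = np.searchsorted(t, tau / 8)
print("P_a(tau/8) =", P_a[i_chk], " exp(-gamma*tau/8) =", np.exp(-gamma * tau / 8))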
In App. <ref>, we also derive the time evolution directly from the coupled set of evolution equations, solving them using the Laplace transform. In App. <ref>, we obtain the following solution for the time evolution of the state a(t),
a(t) = a_0(t) + ∑_{k=1}^{∞} a_k(t) ,
where a_0(t) = e^{-tγ/2}, and
a_k(t) = -(γ t_k/k) e^{-t_k γ/2} L_{k-1}^{(1)}(γ t_k) Θ(t_k) ,
where we defined t_k ≡ t - kτ and L_n^{(α)}(x) is the generalized Laguerre function, see also <cit.>. In particular, the first terms read
a_1(t) = -γ(t-τ) e^{-(t-τ)γ/2} Θ(t-τ) ,
a_2(t) = [ -γ(t-2τ) + (γ^2/2)(t-2τ)^2 ]
× e^{-(t-2τ)γ/2} Θ(t-2τ) ,
a_3(t) = [ -γ(t-3τ) + γ^2(t-3τ)^2 - (γ^3/6)(t-3τ)^3 ]
× e^{-(t-3τ)γ/2} Θ(t-3τ) .
At first glance, it seems sufficient to demand that
τγ/2 ≫ 1 ,
in order to ensure that each term decays sufficiently fast, so that in the interval nτ < t < (n+1)τ only the n-th term contributes. We will assume this condition is fulfilled in the following discussion.
The first contribution a_1(t), which breaks with the exponential decay behavior, sets in at times τ < t < 2τ and reaches a peak value at t = τ + 2/γ. Here, the probability peaks at 4/e^2 ≈ 0.54. In the second interval, 2τ < t < 3τ, a_2(t) starts contributing, leading to two peaks at times t = 2τ + (3 ± √(5))/γ. At these positions, the probability peaks at {4(2-√(5))^2 e^{-3+√(5)}, 4(2+√(5))^2 e^{-3-√(5)}} ≈ {0.10, 0.38}, respectively. Hence, the smaller peak precedes the bigger one. Similarly, in the third interval, 3τ < t < 4τ, there are three peaks, and so on.
There are several features worth pointing out. First, while the positions of the peaks clearly depend on τ and γ, the heights of the peaks are completely independent of the parameters of the model. Second, at late times the multiple peaks that occur force the largest, and also latest, peak to be displaced, so that ultimately the condition τγ/2 ≫ 1 is not sufficient to ensure that only one contribution from the sum contributes in a certain time interval. This "leakage" effect inevitably leads to a chaotic behavior at late times, as can be inferred from the bottom panel of Fig. <ref>.
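The closed-form Laguerre series above is straightforward to evaluate numerically. A small sketch (in Python with SciPy; the values of γ and τ are illustrative and chosen such that τγ/2 ≫ 1) sums the terms a_k(t) and can be compared directly against the spectral sum of the previous sketch.

import numpy as np
from scipy.special import eval_genlaguerre

def survival_amplitude(t, gamma, tau, k_max=20):
    # a(t) = exp(-gamma*t/2) + sum_k a_k(t), with
    # a_k(t) = -(gamma*t_k/k) exp(-gamma*t_k/2) L_{k-1}^{(1)}(gamma*t_k) Theta(t_k)
    t = np.asarray(t, dtype=float)
    a = np.exp(-0.5 * gamma * t)
    for k in range(1, k_max + 1):
        tk = t - k * tau
        theta = (tk > 0).astype(float)      # Heaviside step Theta(t - k*tau)
        tk_pos = np.clip(tk, 0.0, None)     # avoid overflow where theta is zero
        a = a + (-gamma * tk_pos / k) * np.exp(-0.5 * gamma * tk_pos) \
                * eval_genlaguerre(k - 1, 1, gamma * tk_pos) * theta
    return a

gamma, tau = 0.3, 120.0                     # illustrative, gamma*tau/2 = 18
t = np.linspace(0.0, 5 * tau, 5000)
P = survival_amplitude(t, gamma, tau) ** 2
peak = P[(t > tau) & (t < 2 * tau)].max()
print("first revival peak:", peak, " analytic 4/e^2 =", 4 / np.e**2)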
ยง.ยง Two and more parallel bands of states
In nature, several de-excitation processes occur in the presence of competing and separable continua,
such as Auger versus radiative decay processes in atomic physics. The problem was treated by Fano in 1961 <cit.> for real continua. Here we consider the dynamics for parallel energy bands, with focus on the total system behavior when the involved continua have different isolated revival times. Within the RC model, the two-continuum case in its simplest form has two different constant coupling parameters, β_1 and β_2, and/or inverse densities of states Δ_1 and Δ_2. Neither direct coupling between the bands nor coupling between states belonging to the same band (beyond that mediated via the interaction with the initial state) is taken into account. The dynamics is now described by the expansion,
|Ψ(t)⟩ = a(t) |a⟩ + ∑_n b_n(t) |g, nΔ_1⟩ + ∑_m c_m(t) |g, mΔ_2⟩ ,
Without loss of generality, we again neglect the effect of detuning. Projecting onto each basis vector we obtain a coupling matrix which has the same structure as in the case of a single continuum,
i d/dt ( [ a; b; c ] ) = ( [ ω_ab C_1^† C_2^† ; C_1 Ω_1 0 ; C_2 0^† Ω_2 ] ) ( [ a; b; c ] ) ,
where again the amplitudes and couplings to the bands have been expressed as vectors b, c. The matrices Ω_1,2 contain the discrete band energy levels on the diagonal, Ω_1,2^{n,m} = δ_{n,m} n·Δ_1,2. Finally, the matrix 0 contains only zeros since the two continua do not interact directly.
The implicit equation for the eigenvalues now becomes,
E_m = |β_1|^2 ∑_n 1/(E_m - nΔ_1) + |β_2|^2 ∑_k 1/(E_m - kΔ_2) .
To obtain the decay dynamics, consider the case when Δ_2 > Δ_1 and replace this continuum with one which has the same energy separation as the first, i.e. Δ_1.
This implies a reduced coupling strength in the replacement band, β̃_2 = β_2 √(Δ_1/Δ_2). The new implicit equation for the eigenvalues then takes the form,
E_m = (|β_1|^2 + |β̃_2|^2) ∑_{n=-∞}^{∞} 1/(E_m - nΔ_1) .
As remarked by Fano <cit.>, this leaves us with exactly the same identities, Eqs. (<ref>)-(<ref>). Therefore, the obtained exponential decay of the initial channel becomes,
P_a(t) = e^{-γ_12 t} ,
at early times, where the rate constant is
γ_12/2 = π(|β_1|^2 + |β̃_2|^2)/Δ_1
= π( |β_1|^2/Δ_1 + |β_2|^2/Δ_2 ) .
Each band becomes initially populated according to a weighted fraction
P_i(t) = (γ_i/γ_12)(1 - e^{-γ_12 t}) ,
where P_i(t) (i=1,2) is the total probability summed over the, in principle infinite, number of states
within band i and γ_i = 2π|β_i|^2/Δ_i; for further details see App. <ref>.
Exponential decay is now limited to the period t ∈ (0, min[τ_1, τ_2]), where τ_i = 2π/Δ_i. At integer revival periods of τ_1, τ_2 we expect the system to undergo a back coupling of probability to the initial state as long as the band in question is not empty. In App. <ref> we have solved the analytical case of two continua. In the strong coupling region an infinite set of additional revival times occurs, namely at integer intervals of τ_1 + τ_2, as well as arbitrary combinations of the two, i.e. nτ_1 + mτ_2, where n, m > 1 and n ≠ m.
Another striking feature is that the heights of the revival peaks now depend on the ratio of the rates γ_1/γ_2. This is in contrast to the case of constant coupling, and to the harmonic case discussed below, where the height of the peak is completely determined by the "structure" of the system and the peaks do not depend on the rate. For further details see App. <ref>. In other words, the heights of the peaks can be tuned by the rates γ_i, and the revival times are given by τ_i [This discussion assumes that the revival peaks are completely suppressed when the next revival time sets in. In the opposite case one should expect further interference effects between contributions from different revival-time intervals.]
In Fig. <ref> we display an example. The upper panel shows the exponential decay into two different continua at short times. The lower panel shows the redistribution dynamics of probability on long timescales, first from the "blue" continuum to the "black", since the blue band has the shorter revival time. The transfer of probability takes place via the initial state, manifesting itself as a transient revival of that state accompanying each partial revival. In the figure the first (blue) revival takes place around t = 61 au, the second (black) around t = 102 au. At t = 2 × 61 au we see a weak signature in the initial state of the second blue revival time, and then around t = 161 au we observe a strong new revival at t = τ_1 + τ_2, as expected. At increasing integer linear combinations of the two isolated revival times the two continua interchange probability each time, and each interchange implies, on this time-scale and for these coupling parameters, a spike-like re-population of the initial state.
Both methods can be directly expanded to describe more than two parallel bands, which in general will display
even more complex dynamics as long as the revival times of each band differ.
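The numerical eigenvector approach of the previous subsection extends directly to this case. The following sketch (Python/NumPy; all parameter values are illustrative, not taken from the paper) builds the two-band coupling matrix and shows initial-state revivals near τ_1, τ_2 and τ_1 + τ_2.

import numpy as np

d1, d2 = 0.10, 0.06                    # level spacings Delta_1, Delta_2
b1, b2 = 0.07, 0.07                    # constant couplings beta_1, beta_2
e1 = np.arange(-300, 301) * d1         # band-1 energies
e2 = np.arange(-500, 501) * d2         # band-2 energies

dim = 1 + e1.size + e2.size
H = np.zeros((dim, dim))
H[0, 1:1 + e1.size] = b1               # couplings to band 1
H[0, 1 + e1.size:] = b2                # couplings to band 2
H[1:, 0] = H[0, 1:]                    # Hermitian conjugate column
H[np.arange(1, dim), np.arange(1, dim)] = np.concatenate([e1, e2])

E, V = np.linalg.eigh(H)
w = np.abs(V[0])**2
tau1, tau2 = 2 * np.pi / d1, 2 * np.pi / d2
t = np.linspace(0.0, 1.2 * (tau1 + tau2), 3000)
P_a = np.abs((w * np.exp(-1j * np.outer(t, E))).sum(axis=1))**2

# Revivals of the initial state appear near tau_1, tau_2 and tau_1 + tau_2
for T in (tau1, tau2, tau1 + tau2):
    sel = (t > 0.93 * T) & (t < 1.07 * T)
    print("peak near t = %.1f : %.3f" % (T, P_a[sel].max()))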
ยง HARMONIC COUPLING
Another class of atom-field couplings consists of coupling parameters which are periodic. The periodicity can originate from partial reflection of the radiation emitted from an excited atom, where the partially reflected part at some later time interferes with later emitted radiation which is unreflected <cit.>. Another example is a qubit connected to a substrate at two points separated by a distance 2L. The coupling from the two-level system to the surface plasmon states was recently analyzed in detail <cit.>. Writing β_n = β cos(nΔT/2), in the continuum limit one derived the elegant solution, which in our notation reads
a(t) = ∑_{n=0}^{∞} [-(γ/4)(t-nT)]^n/n! e^{-(t-nT)γ/4} Θ(t-nT) ,
where we again have chosen a(0)=1 and neglected the detuning energy. This structure is considerably simpler than the one for constant coupling, see Eqs. (<ref>) and (<ref>). In particular, note the absence of multiple peaks at later revival times t > nT [This becomes clear from the simpler polynomial structure accompanying the exponentials in the sum.]. In order to address this "simplicity", we set out to analyze specific cases where the band density, concretely 2π/Δ, is a multiple of the modulation time-scale T.
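This continuum-limit series is easy to evaluate directly. A minimal sketch (Python/NumPy; γ and T are illustrative values with γT/4 ≫ 1) sums the terms and recovers, e.g., the first revival peak of height 1/e^2 discussed below.

import numpy as np
from math import factorial

def a_harmonic(t, gamma, T, n_max=40):
    # a(t) = sum_n [-(gamma/4)(t - nT)]^n / n! * exp(-(t - nT)*gamma/4) * Theta(t - nT)
    t = np.asarray(t, dtype=float)
    a = np.zeros_like(t)
    for n in range(n_max + 1):
        tn = t - n * T
        mask = tn > 0                      # Heaviside step Theta(t - nT)
        x = 0.25 * gamma * tn[mask]
        a[mask] += (-x) ** n / factorial(n) * np.exp(-x)   # fine in float64 for moderate n_max
    return a

gamma, T = 0.8, 40.0                       # illustrative parameters
t = np.linspace(0.0, 4 * T, 4000)
P = a_harmonic(t, gamma, T) ** 2
peak = P[(t > T) & (t < 2 * T)].max()
print("peak in (T, 2T):", peak, " analytic 1/e^2 =", np.exp(-2.0))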
We now address the dynamics using the discrete basis eigenvector approach. Consider Eq. (<ref>) and replace the constant coupling parameter β with an energy dependent coupling β_n = β cos(nΔT/2). In the analysis in <cit.>, the period T is related to the distance L between two contact points between the qubit and the substrate, T = L/v_g, where v_g is the speed of sound in the acoustic medium. By defining symmetric and antisymmetric discrete phonon states from the basis applied in <cit.>, only the symmetric ones couple to the qubit and we arrive at our Eq. (<ref>). The eigenstates are then given by the sum of Eq. (<ref>). Let us rewrite the sum in terms of
S = ∑_{n=-∞}^{∞} |β cos(nΔT/2)|^2/(E - Δn) .
A general solution can be found using special functions. Here, we consider a few illustrative cases, using increasingly smaller energy separation between the discrete states:
Δ = 2π/T: This is equivalent to a constant coupling β_n = β to all the energy levels. The answer is
(Δ/β^2) S = π/tan[πz] ,
where z = E/Δ. Further discussion can be found in Sec. <ref>.
Δ = π/T: The constant coupling β is modulated by the possible values cos(ω_n T/2) = {1, 0, -1, ...} for n = {0, 1, 2, ...}, et cetera. The sum therefore becomes
(Δ/β^2) S = ∑_{n=-∞}^{∞} 1/(z-2n) = π/(2 tan[(π/2)z]) .
This is equivalent to a constant coupling β_n = β to every other energy level.
This choice of Δ gives the correct exponential decay up to t = T with a decay rate constant half the magnitude of the decay constant of Eq. (<ref>), namely γ' = πβ^2/Δ = γ/2. Further discussion can be found in App. <ref>.
At t = T the back-coupling to the initial state departs from the correct continuum behavior. From Eq. (<ref>), we find a first peak at t_1 = T + 4/γ reaching P(t_1) = 4/e^2. This is a factor 4 too large compared to the continuum solution from Eq. (<ref>). Obviously, the multiple peaks associated with revivals at t > 2T, discussed in detail in Sec. <ref>, are also not present in the continuum limit.
Δ = 2π/(3T): The constant coupling β is now modulated by the possible values cos(ω_n T/2) = {1, 1/2, -1/2, ...} for n = {0, 1, 2, ...}. The sum therefore becomes
(Δ/β^2) S = ∑_{n=-∞}^{∞} 1/(z-3n)
+ (1/4) ∑_{n=-∞}^{∞} [ 1/(z-(3n-1)) + 1/(z-(3n+1)) ] ,
= π/(3 tan[(π/3)z]) - (π/3) sin[(2π/3)z]/(1 + 2cos[(2π/3)z]) .
This representation describes the dynamics up to the revival time t = 2T. The couplings and energy levels of the discrete states distribute themselves in what can be viewed as two parallel continua: the first one has coupling strength and energy separation |β, 3Δ|, the second |β/2, Δ|, in addition making sure that the overlapping levels are not occupied twice.
We have worked out this case in detail up to times t ≤ 3T in App. <ref>. For the time interval 0 < t < 2T, we find that
a(t) = e^{-tγ/4} - (γ/4)(t-T) e^{-(t-T)γ/4} Θ(t-T) .
At the first revival at t = T, the peak position remains at T + 4/γ, but now the height is reduced to P = 1/e^2, which is in agreement with the continuum result in Eq. (<ref>). Strikingly, as for the case of constant coupling to a single (quasi-)continuous band in Sec. <ref>, the height of the first peak is completely independent of the coupling or level spacing. This also holds for the subsequent peaks. Only their positions occur at intervals nT + c/γ, where c is a number describing the positions of the multiple peaks. However, at t ≥ 3T this model starts exhibiting multiple peaks and thus no longer agrees with the continuum solution (<ref>), so we will not discuss it further here.
Δ = π/(2T): The constant coupling β is now modulated by the possible values cos(ω_n T/2) = {1, 1/√(2), 0, ...} for n = {0, 1, 2, ...}, et cetera. The sum therefore becomes
(Δ/β^2) S = ∑_{n=-∞}^{∞} 1/(z-4n)
+ (1/2) ∑_{n=-∞}^{∞} [ 1/(z-(4n-1)) + 1/(z-(4n+1)) ] ,
= π/(4 tan[(π/4)z]) - (π/4) tan[(π/2)z] .
Again this is effectively two parallel continua, avoiding any overlaps, now with parameters |β, 4Δ| and |β/2, Δ|. We have worked this case out in detail in App. <ref>.
This particular discretization gives identical populations in the two continua up to t = T, and the redistribution gives the correct dynamics of the initial state up to t = 3T. Since up to t = 2T the result is exactly the same as in the previous cases, let us only write out the additional piece that appears at t > 2T, namely
a^(2)(t) = (γ^2/32)(t-2T)^2 e^{-(t-2T)γ/4} Θ(t-2T) .
Now, there is only one peak in the interval 2T < t < 3T, located at t_2 = 2T + 8/γ, where the probability peaks at P(t_2) = 4/e^4 ≈ 0.073. The agreement with the continuum result now extends to t < 3T. Again, the height of the second peak does not depend on the parameters of the model.
In summary, the dynamics of the qubit with harmonic coupling to a substrate can alternatively be viewed as the dynamics of a series of parallel continua with different constant couplings and densities of states. At least one parallel continuum has a revival at each integer multiple of T, which causes back-coupling to the initial state followed by a redistribution of the population probability of each quasi-continuum. The dynamics described above is illustrated in Fig. <ref>.
By taking a sufficiently small Δ we can effectively simulate the dynamics for arbitrarily large times. In Fig. <ref> this is done for a case which has τ ≈ 19T. A large number of initial state revivals is seen up to the point t = τ, where all parallel continua have a back-coupling to the initial state. As a consequence this peak is much larger than the partial revivals at integer values of t/T. In the lower left panels the initial decay at very short times is again observed, in full agreement with the analytical decay rate constant. Finally, as also shown in the case of constant coupling, the revival peaks display a polynomial decay up to t = τ.
The use of discrete parallel continua makes it simple to model experiments as well, for example in the case of
Ref. <cit.>, where a number of other couplings appear due to temperature fluctuations or other conditions related to the experiment. By simply augmenting the discrete model (M=4) with a dense reservoir which, on the timescale of the experiment, does not undergo any revivals, we can simulate a leak of probability due to heat. The result, shown in Fig. <ref>, is in excellent agreement with experimental data from <cit.>.
ยง CONCLUDING REMARKS
In this work we have considered the dynamics of a quantum model which goes back almost to the invention of quantum mechanics. Inspired by the recent progress and application of the model relevant for qubit dynamics of modern quantum computers, we analyzed the general problem of a single state interacting with one or several discrete band(s). The analysis was performed in an eigenvector basis and with a complementary analytical method applicable to real as well as discrete "continua".
A number of properties of known and new phenomena were identified, in particular:
(i) Discrete bands of states imply well defined revival peaks. For a single band the heights of the
peaks are independent of the coupling strength, and the peak strengths show a near polynomial decay on a time interval covering several tens of revival times.
(ii) The qubit dynamics with harmonic couplings between the initial state and a true (phonon) continuum is identical to a qubit coupled by constant factors to a number of parallel bands. Revivals in the latter model are understood as a transient population flow through the initial state and between different bands.
(iii) Comparison with experiments based on the numerical approach was straightforward and in excellent overall agreement.
The present approach opens the way for direct modeling of time dependent scattering phenomena due to point contacts, of time induced multi-phonon (photon) processes, and of a number of other phenomena relevant for the long research path towards the realization of stable and effectively working quantum computers.
One of us (JPH) would like to thank Ladislav Kocbach and Lars Bojer Madsen for initial inspiring discussions
on the topic of this work.
ยง LAPLACE TRANSFORM METHOD
The quantum evolution described above can also be cast as a system of evolution equations for the state a(t) coupled to a quasi-continuum b_n(t),
ȧ(t) = -i ∑_{n=-∞}^{∞} β_n b_n(t) ,
ḃ_n(t) = -i ω_n b_n(t) - i β_n^* a(t) ,
where ω_n = nΔ, and Δ is the level spacing of the quasi-continuum. Integrating the second equation yields
b_n(t) = -i β_n ∫_0^t dt' e^{-iω_n(t-t')} a(t') ,
and inserting the solution into the first, we obtain the standard result
ȧ(t) = -∫_0^t dt' a(t') ∑_{n=-∞}^{∞} |β_n|^2 e^{-iω_n(t-t')} .
This has the form of a delay differential equation <cit.>.
This can be solved in Laplace space, where the Laplace transform of a(t), defined as
ã(s) = ∫_0^∞ dt e^{-st} a(t) ,
is given by
ã(s) = a(0)/(s + Π(s)) ,
where Π(s) = i ∑_{n=-∞}^{∞} |β_n|^2/(is - ω_n) = i S(is). Note that Π(-s) = -Π(s).
The inverse transform, defined as
a(t) = ∫_C ds/(2πi) e^{st} ã(s) ,
where the contour C runs along the imaginary axis to the right of any poles of ã(s), then becomes sensitive to the poles of ã(s). These are found by solving
s + Π(s) = 0 .
By identifying is ↔ E, the pole structure is equivalent to the eigenvalue equations.
ยง.ยง Constant coupling
As mentioned above, a constant coupling is equivalent to setting the level spacing Δ = 2π/T in the harmonic coupling. Using the sum derived in (<ref>), we find
Π(s) = i(γ/2) cot(iℓs) = (γ/2) coth(ℓs) ,
where γ = 2πβ^2/Δ and ℓ = π/Δ. In order to introduce the techniques used to invert the Laplace transform <cit.>, we will review this case in detail.
It is possible to rewrite,
ã(s) = (1 - e^{-2ℓs})/(s + γ/2) [ 1 - (s - γ/2)/(s + γ/2) e^{-2ℓs} ]^{-1} .
Since |(s - γ/2)/(s + γ/2) e^{-2ℓs}| < 1 for sufficiently large ℓ, or small Δ,
we can expand the last term as an infinite series, (1-x)^{-1} = ∑_{n=0}^{∞} x^n. Then, after further manipulations, we write the whole expression as
ã(s) = 1/(s + γ/2) - γ ∑_{k=1}^{∞} (s - γ/2)^{k-1}/(s + γ/2)^{k+1} e^{-2kℓs} .
This demonstrates that there is only one location of the poles, i.e. at s_⋆ = -γ/2. We can therefore safely let the contour C run along the origin of the complex s plane, i.e. s = iξ with ξ ∈ [-∞, ∞], with a counterclockwise circle at infinity. Defining
a(t) = a_0(t) + ∑_{k=1}^{∞} a_k(t) ,
where
a_0(t) = ∫_C ds/(2πi) e^{st}/(s + γ/2) = e^{-tγ/2} ,
and
a_k(t) = ∫_C ds/(2πi) ã_k(s) e^{st} ,
with ã_k(s) = -γ (s - γ/2)^{k-1}/(s + γ/2)^{k+1} e^{-2kℓs}.
The coefficients a_k(t) can easily be found using the residue theorem,
a_k(t) = (1/k!) lim_{s→s_⋆} d^k/ds^k [ (s + γ/2)^{k+1} ã_k(s) e^{st} ] ,
as long as the function decays fast enough on the circle at infinity, in this case as long as t > 2kℓ. We use the binomial formula to write
d^k/ds^k [ (s - γ/2)^{k-1} e^{(t-2kℓ)s} ]
= ∑_{n=0}^{k-1} \binom{k}{n} (k-1)!/(k-1-n)! (s - γ/2)^{k-1-n} (t - 2kℓ)^{k-n} e^{(t-2kℓ)s} .
Finally, after further manipulations, we can recast the answer as
a_k(t) = -γ(t - kτ)/k e^{-(t-kτ)γ/2} L_{k-1}^{(1)}(γ(t - kτ)) Θ(t - kτ) ,
where we defined τ = 2π/Δ and L_n^{(α)}(x) is the generalized Laguerre function. Note that in this case we have exactly τ = T.
The occupation of the continuum states is found from Eq. (<ref>). For instance, up to the first revival time, i.e. 0 < t < τ, where P_a(t) = e^{-tγ}, the solution is
b_n(t) = iβ/(γ/2 - inΔ) ( e^{-tγ/2} - e^{-inΔt} ) .
The probability for the whole continuum then becomes
P_b(t) = ∑_{n=-∞}^{∞} |b_n(t)|^2 .
To prove that the total probability is conserved, we use the identities
∑_{n=-∞}^{∞} 1/(z^2 + n^2) = (π/z) coth(zπ) ,
∑_{n=-∞}^{∞} cos(xn)/(z^2 + n^2) = (π/z) cosh(z(π - x))/sinh(zπ) ,
for 0 < x < 2π <cit.>, to derive that
P_b(t) = 1 - e^{-tγ} .
This demonstrates that P_a(t) + P_b(t) = 1, as expected.
ยง.ยง Harmonic coupling with Δ = π/T
Using (<ref>) we obtain that
Π(s) = (γ/4) coth(ℓs/2) ,
so we realize that the pole structure, and therefore also the inverse Laplace transform, is completely equivalent to the previous case with the replacement γ → γ/2 and the explicit mapping τ → T. The first correction therefore reads,
a_1(t) = -(γ/2)(t-T) e^{-(t-T)γ/4} Θ(t-T) ,
in this case. Further terms, appearing at t > 2T, can be found directly from the previous formulas.
ยง.ยง Harmonic coupling with Δ = 2π/(3T)
In this case, with the help of (<ref>), we find
Π(s) = (γ/6) coth(ℓs/3) + (γ/6) sinh((2/3)ℓs)/(1 + 2cosh((2/3)ℓs)) .
Following the same line of arguments as before, and introducing the short-hand y ≡ e^{-(2/3)ℓs}, we find that
ã(s) = (1 - y^3)/(s + γ/4) [ 1 - ((s - γ/4)y^3 - (y + y^2)γ/4)/(s + γ/4) ]^{-1} .
After further manipulations, we arrive at
ã(s) = 1/(s + γ/4) - (γ/2) ∑_{k=1}^{∞} [ (s - γ/4)y^3 - (y + y^2)γ/4 ]^{k-1}/(s + γ/4)^{k+1}
× ( y^3 + y^2/2 + y/2 ) .
We identify the emerging multiple poles at s = s_⋆ = -γ/4; however, the structure of the numerator is much more complicated. It is possible to proceed further by expanding the numerator using the binomial formula, but we will not do so here.
Instead, let us first have a look at the first term in the expansion in (<ref>), which reads
ã_1(s)
= -(γ/2) [ e^{-3Ts} + e^{-2Ts}/2 + e^{-Ts}/2 ]/(s + γ/4)^2 .
The three terms in the numerator will contribute to the evolution only at t > 3T, t > 2T and t > T, respectively. Similarly, ã_2(s) will contain terms that contribute at subsequent time intervals, starting at t > 2T and up to t > 6T. We will currently only be interested in the terms that describe the evolution in the interval 0 < t < 3T. The contribution that starts to contribute at times nT < t will be denoted a^(n)(t). Naturally, a^(0)(t) = e^{-tγ/4} from the single pole contribution.
Focussing now on the next interval, T < t < 2T, we note that only ã_1(s) contributes. We find
a^(1)(t) = -(γ/4) ∫_C ds/(2πi) e^{(t-T)s}/(s + γ/4)^2
= -(γ/4)(t-T) e^{-(t-T)γ/4} Θ(t-T) .
This piece reaches a maximal value at t = T + 4/γ, where the contribution to the probability reaches a maximal value of e^{-2} ≈ 0.14. This is a factor 4 smaller than in the previous cases!
The additional contribution in the next interval, i.e. t > 2T, receives contributions from both ã_1(s) and ã_2(s). We find
a^(2)(t) = -(γ/4) ∫_C ds/(2πi) e^{(t-2T)s}/(s + γ/4)^2 + (γ^2/16) ∫_C ds/(2πi) e^{(t-2T)s}/(s + γ/4)^3
= [ -(γ/4)(t-2T) + (γ^2/32)(t-2T)^2 ]
× e^{-(t-2T)γ/4} Θ(t-2T) .
There are two maxima, at t = 2T + (4/γ)(2 ∓ √(2)),
corresponding to peak values of the probability of {(3-2√(2)) e^{-4+2√(2)}, (3+2√(2)) e^{-4-2√(2)}} ≈ {0.053, 0.006}.
Compared to the continuum limit, Δ → 0, we find agreement of the height of the peak in the interval T < t < 2T. The multiple peaks occurring at later times are also not present in the Δ → 0 solution.
ยง.ยง Harmonic coupling with Δ = π/(2T)
In this case, from Eq. (<ref>), we have
Π(s) = (γ/8) coth(ℓs/4) + (γ/8) tanh(ℓs/2) .
After standard manipulations, we arrive at
ã(s) = 1/(s + γ/4) - (γ/2) ∑_{k=1}^{∞} [ (s - γ/4)y^3 + (y - y^2)s ]^{k-1}/(s + γ/4)^{k+1}
× ( y^3 - y^2/2 + y/2 ) ,
where again y ≡ e^{-Ts}.
Proceeding as before, we find the following behavior for the early time evolution,
a^(1)(t) = -(γ/4)(t-T) e^{-(t-T)γ/4} Θ(t-T) ,
a^(2)(t) = (γ^2/32)(t-2T)^2 e^{-(t-2T)γ/4} Θ(t-2T) .
Now, there is only one peak in the interval 2T < t < 3T, located at t = 2T + 8/γ, where the probability peaks at 4/e^4 ≈ 0.073. The agreement with the continuum result now extends to t < 3T.
We end with a few comments. It becomes evident that the late time behavior of the system is highly sensitive to the spacing of the energy levels. The peaks occurring at later time intervals are also increasingly shifted upward. For Δ > 0, at some late time the subtle cancellations no longer hold, and violent, chaotic behavior can occur due to the overlap of several contributions.
ยง COUPLING TO TWO QUASI-CONTINUA
The evolution equations for a coupling to two quasi-continua b_n(t) and c_m(t) read,
ȧ(t) = -i ∑_{n=-∞}^{∞} β^(1)_n b_n(t) - i ∑_{m=-∞}^{∞} β^(2)_m c_m(t) ,
ḃ_n(t) = -i ω^(1)_n b_n(t) - i β^(1),*_n a(t) ,
ċ_m(t) = -i ω^(2)_m c_m(t) - i β^(2),*_m a(t) ,
where the two sets of energy levels are given by ω^(i)_n = nΔ_i (i=1,2). The solution in Laplace space is given by
ã(s) = a(0)/(s + Π_1(s) + Π_2(s)) ,
where Π_j(s) = i ∑_{n=-∞}^{∞} |β^(j)_n|^2/(is - ω^(j)_n).
We focus on the case of constant couplings, i.e. β^(i)_n = β_i. After, by now, standard manipulations we finally arrive at
ã(s) = 1/(s + γ_12/2)
- ∑_{k=1}^{∞} [ (s - γ_1/2) e^{-τ_1 s} + (s - γ_2/2) e^{-τ_2 s} - s e^{-τ_12 s} ]^{k-1}/(s + γ_12/2)^{k+1}
× [ (γ_1 + γ_2/2) e^{-τ_1 s} + (γ_2 + γ_1/2) e^{-τ_2 s} - (γ_12/2) e^{-τ_12 s} ] ,
where γ_i = 2π|β_i|^2/Δ_i and τ_i = 2π/Δ_i and, finally, γ_12 = γ_1 + γ_2 and τ_12 = τ_1 + τ_2.
The amplitude again picks up new contributions at multiples of the revival times, and also allows arbitrary mixing between multiples of τ_1 and τ_2, e.g. nτ_1 + mτ_2, and so forth. The first few terms in the expansion are
a_0(t) = e^{-tγ_12/2} ,
a_1(t) = -(γ_1 + γ_2/2)(t-τ_1) e^{-(t-τ_1)γ_12/2} Θ(t-τ_1)
- (γ_2 + γ_1/2)(t-τ_2) e^{-(t-τ_2)γ_12/2} Θ(t-τ_2)
+ (γ_12/2)(t-τ_12) e^{-(t-τ_12)γ_12/2} Θ(t-τ_12) .
At second order, we get contributions at revival times t > 2τ_1, 2τ_2, τ_12 and also t > 2τ_1+τ_2, τ_1+2τ_2, 2τ_12. The former contributions are fully accounted for by the first two terms, while the latter will receive further contributions from the third term of the expansion. Let us therefore focus on the former for now. The contributions read
a_2(t) = -(γ_1 + γ_2/2)[ (t-2τ_1) - (1/2)(γ_1 + γ_2/2)(t-2τ_1)^2 ]
× e^{-(t-2τ_1)γ_12/2} Θ(t-2τ_1)
- (γ_2 + γ_1/2)[ (t-2τ_2) - (1/2)(γ_2 + γ_1/2)(t-2τ_2)^2 ]
× e^{-(t-2τ_2)γ_12/2} Θ(t-2τ_2)
- [ (3γ_12/2)(t-τ_12) - (γ_1 + γ_2/2)(γ_2 + γ_1/2)(t-τ_12)^2 ]
× e^{-(t-τ_12)γ_12/2} Θ(t-τ_12) .
There are several distinct features that are worth pointing out.
The first peaks, related to the revival times τ_i, appear at times t_i = τ_i + 2/γ_12, where the probability peaks at |(2γ_i + γ_j)/γ_12|^2 e^{-2} (where γ_j refers to the other coupling, i.e. i ≠ j), respectively. Strikingly, the height of these peaks depends on the couplings.
The revival time at τ_1 + τ_2 is novel, and involves two peaks. The position and height of the peaks can be found in a straightforward manner, but the expressions are complicated and not illuminating. Instead, focussing on the case γ_1 = γ_2 = γ, we find that the revival times are located at t = τ_1 + τ_2 + (13 ± √(97))/(9γ), where the probability peaks at |(9 ± √(97))/2 e^{-(13 ± √(97))/9}|^2. Strikingly, this peak is again not dependent on the coupling. However, this statement does not hold for the general case γ_1 ≠ γ_2. We also repeat that these estimates only hold if the couplings are sufficiently large so that the previous revival peaks are completely washed out before the subsequent revival time.
As usual for constant coupling, double peaks also appear at multiples of the revival times τ_1 and τ_2, and also at their mixed multiples.
Finally, we can also find the occupation of the two bands, using Eq. (<ref>). In our case,
b_n(t) = iβ_1/(γ_12/2 - inΔ_1) ( e^{-tγ_12/2} - e^{-inΔ_1 t} ) ,
and similarly for c_m(t) by substituting β_1 → β_2 and Δ_1 → Δ_2. We define the probabilities,
P_b(t) = ∑_{n=-∞}^{∞} |b_n(t)|^2 ,
P_c(t) = ∑_{m=-∞}^{∞} |c_m(t)|^2 ,
and after some algebra, using Eqs. (<ref>) and (<ref>), we obtain Eq. (<ref>), which proves that P_a(t) + P_b(t) + P_c(t) = 1.
|
http://arxiv.org/abs/2306.10841v1
|
20230619104030
|
Blockchain-Enabled Federated Learning: A Reference Architecture Incorporating a DID Access System
|
[
"Eunsu Goh",
"Daeyeol Kim",
"Do-Yup Kim",
"Kwangkee Lee"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"cs.CR",
"cs.SY",
"eess.SY",
"68T01 (Primary) 68M14, 94A60 (Secondary)",
"I.2.6; I.2.11"
] |
Blockchain-Enabled Federated Learning: A Reference Architecture Incorporating a DID Access System
Eunsu Goh, Daeyeol Kim, Do-Yup Kim, Kwangkee Lee
July 31, 2023
==================================================================================================
Recently, Blockchain-Enabled Federated Learning (BCFL), an innovative approach that combines the advantages of Federated Learning and Blockchain technology, has been receiving great attention. Federated Learning (FL) allows multiple participants to jointly train machine learning models in a decentralized manner while maintaining data privacy and security. This paper proposes a reference architecture for blockchain-enabled federated learning, which enables multiple entities to collaboratively train machine learning models while preserving data privacy and security. A critical component of this architecture is the implementation of a decentralized identifier (DID)-based access system. DID introduces a decentralized, self-sovereign identity (ID) management system that allows participants to manage their IDs independently of central authorities. Within this proposed architecture, participants can authenticate and gain access to the federated learning platform via their DIDs, which are securely stored on the blockchain. The access system administers access control and permissions through the execution of smart contracts, further enhancing the security and decentralization of the system. This approach, integrating blockchain-enabled federated learning with a DID access system, offers a robust solution for collaborative machine learning in a distributed and secure manner. As a result, participants can contribute to global model training while maintaining data privacy and identity control, without the need to share local data. The source code will be made publicly available soon.
ยง INTRODUCTION
The field of machine learning is often hampered by the challenge of data availability. Data providers, typically reluctant to share their data without incentives, may hinder progress and even cause the termination of projects. As the adoption of the internet of things (IoT) broadens and data collection from edge devices intensifies, the discourse has shifted towards harnessing this data while protecting the personal information of data providers. In this context, federated learning has emerged as a promising solution, offering improved artificial intelligence (AI) models while maintaining data privacy, even though valuable data from client devices is utilized. However, federated learning is still not without its challenges, including the lack of punitive measures for clients who deliberately disrupt the learning ecosystem, potential issues associated with centralized learning such as the single-point-of-failure problem, and the inability of learners to claim and verify their ownership of locally generated models. Additionally, it is particularly critical to address system heterogeneity in federated learning <cit.>, taking into account the diverse characteristics of the multitude of devices involved.
The integration of blockchain and federated learning is a rapidly evolving area of research aimed at overcoming the aforementioned limitations, yet real-world applications remain scarce. Notably, most studies have not thoroughly examined the constraints and issues associated with real-world blockchain deployment. In this paper, therefore, we develop a reference architecture for the research and real-world implementation of Blockchain-Enabled Federated Learning (BCFL). To this end, we aim to address these limitations by merging blockchain technology with federated learning techniques, introducing an incentive system underpinned by smart contracts, and employing decentralized identifiers (DIDs) for authentication. Our main contributions include:
* The design, implementation, and verification of a BCFL reference architecture in the Ethereum development environment.
* A comprehensive analysis and definition of smart contract functions, stakeholder roles (e.g., job creators, evaluators, and trainers), and the use of interplanetary file system (IPFS) for sharing learning models among federated learning participants.
* An extension structure for applying cross-device and cross-silo federated learning scenarios.
* A proposed method for ID access and management for federated learning participants through integration with a DID access system.
* A review of operating costs through a comparison of deployment costs in the Ethereum test network and the local simulation network of BCFL.
The rest of the paper is organized as follows. Section 2 provides an examination of the key terms that are prominently used throughout this paper. In Section 3, we present a detailed explanation of our proposed approach, including the overall workflow and the roles of each component. Section 4 showcases the simulation results conducted in a real-world environment. Finally, Section 5 concludes the paper.
ยง TERMINOLOGICAL BACKGROUND
ยง.ยง Federated Learning
Federated learning is a machine learning strategy in which multiple entities can collaboratively train a shared model without exchanging their raw data with each other. It was first introduced by Google in 2017 as a solution to the problem of training machine learning models on distributed data. The key idea of federated learning is to harness distributed computing power by enabling individual devices to contribute to the training of the shared model. This approach can help maintain data privacy and security, as well as develop models that are more tailored to the specific needs of each individual device.
Since its inception, federated learning has made substantial strides, with researchers proposing various techniques to tackle its challenges. Early approaches relied on simple averaging algorithms to merge updates from multiple devices, but recent advancements have demonstrated that performance can be enhanced with the aid of client and data selection algorithms <cit.> <cit.>. These developments have significantly improved the practicality and efficiency of federated learning, with numerous research results demonstrating its application in diverse fields such as healthcare, finance, and transportation. Despite being in its early stages, federated learning holds immense potential, as it offers a novel way to enhance machine learning model performance while preserving data privacy and security.
Federated learning is expected to emerge as an important technology in the coming years.
ยง.ยง Blockchain
Blockchain technology, at its core, is a distributed ledger technology that enables secure, transparent, and distributed transactions. It has gained significant attention across various industries due to its innovative features and capabilities <cit.>. Blockchain creates a permanent and unalterable record of transactions, making it an ideal solution for industries requiring trust, security, and transparency.
Blockchain is essentially a digital ledger of transactions that is maintained by a network of computers called nodes. Each node in the blockchain network maintains a copy of the ledger, and all changes are verified by consensus among the nodes. Once a transaction is recorded on the blockchain, it cannot be changed or deleted, thus ensuring data integrity and immutability.
One of the most important advantages of blockchain technology is its decentralized nature. This eliminates the need for intermediaries or central authorities, allowing all transactions to be transparent and accessible to all network members. Thanks to this advantage, blockchain technology is being applied to various use cases requiring data integrity and traceability, such as financial transactions, supply chain management, voting systems, and identity management.
In conclusion, blockchain technology is an innovative and disruptive technology that provides secure and distributed solutions to various industries. It is an area of active research and application due to its unique features, which make it an ideal solution for industries that require trust, security, and transparency, and it has the potential to significantly transform these industries <cit.>, <cit.>.
ยง.ยง Blockchain-Enabled Federated Learning
BCFL is a novel approach to machine learning that has gained popularity in recent years, particularly in IoT and healthcare applications <cit.>.
It combines the distributed training of machine learning models in federated learning, where data is not centrally stored but collected and processed locally on individual devices, with the secure and transparent transaction recording of blockchain technology. This combination allows for secure and efficient data and model sharing between multiple parties without a central authority, potentially overcoming challenges associated with traditional machine learning methods, such as data privacy and security issues.
Meanwhile, the need for more secure and efficient data sharing methods in a variety of industries, including healthcare, finance, and e-commerce, has accelerated the evolution of BCFL technologies. As data volumes continue to grow and data privacy concerns intensify, this approach is becoming increasingly important.
Future developments in BCFL are likely to include the introduction of new algorithms and protocols to further enhance security and efficiency. As more organizations recognize the potential benefits of this approach, BCFL is poised to gain mainstream acceptance in the realms of machine learning and data sharing. However, despite the growing body of research <cit.> and the increasing number of open-source projects <cit.>, <cit.> dedicated to the design of BCFL, these resources remain relatively limited.
ยง.ยง Decentralized Identity (DID)
DID is a concept that has been gaining attention in recent years as a method of managing and protecting digital identity in a decentralized manner. DID is a unique identifier that can be used to create a verifiable, reliable, and tamper-proof digital identity, which is free from control by a central authority.
A key feature of DID is the distribution of ownership and control of digital identity across multiple parties, as opposed to being controlled by a single organization or entity. This is facilitated through the use of blockchain technology, which provides a distributed and transparent mechanism for managing identity and transactions.
One of the most significant aspects of DID is its privacy features. Traditional identity management systems often rely on the collection and storage of personal information by central authorities, which can lead to data breaches and privacy violations. In contrast, DID empowers individuals to control their personal data and share it only with trusted parties when necessary, thereby enhancing privacy.
In DID systems, personal information is stored in a distributed and encrypted format, allowing individuals to choose whom they share their information with. This provides a higher level of privacy and security than traditional identity management systems.
As the digital world continues to evolve, the demand for safe and decentralized identity management solutions is becoming increasingly important. DID is emerging as a promising solution that enables individuals to exert better control over their personal information, thereby bolstering their privacy and security.
ยง.ยง Interplanetary File System (IPFS)
IPFS is a distributed peer-to-peer (P2P) file system that addresses the limitations of the traditional centralized internet, providing a technical solution for the secure and rapid transmission of distributed data. IPFS employs a hash-based file system (HFS) for file storage and is linked to blockchain technology to ensure file uniqueness and enhance the security of distributed data storage.
IPFS can be utilized for a variety of purposes, with the most common uses being file sharing and distributed web hosting. Through IPFS, files can be shared using the distributed technology of blockchain, eliminating the need to locate the original file. Additionally, IPFS can be used to host websites in a distributed manner, akin to a content delivery network (CDN). These technical solutions are anticipated to play a major role in enhancing the security and safety of the internet.
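A minimal example of the content-addressed file sharing described above (a sketch assuming the ipfshttpclient Python package and a locally running IPFS daemon; the file name and multiaddress are illustrative, not part of the paper's implementation):

import ipfshttpclient

# Connect to a local IPFS daemon (illustrative multiaddress)
client = ipfshttpclient.connect("/dns/localhost/tcp/5001/http")

# Adding a file returns its content identifier (CID); peers can later fetch the
# file by this hash alone, without knowing where it is physically stored
res = client.add("global_model.pt")
cid = res["Hash"]
print("model stored under CID:", cid)

# Retrieve the file again purely by its content address
client.get(cid)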
ยง METHODS
In this section, we first describe the overall flow of our proposed method, which is composed of two phases encompassing six stages, and provide detailed information about each stage. Following this, we delve into an in-depth discussion of the smart contract. We then explain the cross-device and cross-silo scenarios, describe the functions of the stakeholders, and outline the design of the other modules. Finally, we discuss the DID and verifiable credential (VC) certification system.
ยง.ยง Overall Workflow
Our proposed BCFL workflow is organized into two phases. The initial phase comprises two stages: job creation and trainer recruitment. Subsequently, the second phase involves the iterative execution of four stages: training, evaluation, client selection and aggregation, and token distribution. These four stages are repeated for a predetermined number of global rounds. It is noteworthy that each stage can be further divided into participants and functional units. The entire workflow is summarized in Figure <ref>. In the following, we delve into each stage in greater detail.
ยง.ยง.ยง Stage 1: Job creation
A client initiates a BCFL task, prompting the job creator to generate a quote based on the client's request. This quote encompasses the type of deep learning model to be used, the configuration of learning hyperparameters, the desired number of trainers, the number of global rounds, and the genesis model. Subsequently, the job creator deploys the genesis model to the BCFL server and registers the task. It is important to note that while the client who requests the job can be a separate entity, it may also assume the role of the job creator.
ยง.ยง.ยง Stage 2: Trainer recruitment
Trainers should be able to review the training task via a web app and estimate the potential benefits (tokens) they can earn. Trainers can verify their identity using a VC JSON web token (JWT), or they can issue a new DID and VC for identity verification. During this process, the BCFL server verifies the participants' information and stores the authentication data, enabling the trainer to receive bonus points.
ยง.ยง.ยง Stage 3: Training
Once trainer recruitment is completed and all trainers are ready to commence training, the smart contract transitions the training status to the training stage and announces the start of training. The trainers download the genesis model and initiate training with their own data. They then register the local model update in IPFS and the smart contract.
ยง.ยง.ยง Stage 4: Evaluation
Once the training stage of a global round is completed, the evaluator checks the status of the smart contract to determine whether the evaluation can begin. During the evaluation stage, the evaluation commences with pre-prepared evaluation data. Before starting, the evaluator obtains the DID-authenticated client information from the BCFL server and awards bonus points to DID-authenticated trainers. There are several algorithms for awarding contribution points, and each BCFL task selects and utilizes an appropriate method. Once the evaluation is complete, the evaluator records the trainers' model-specific scores in the contract.
ยง.ยง.ยง Stage 5: Client selection and aggregation
The aggregator accesses the smart contract and retrieves the trainer list, the model content identifier (CID) list, and the contribution point list for each model. Then, using algorithms such as FedAvg, the aggregator aggregates the local models from the trainers. The global model is recorded in IPFS and the smart contract. In addition, the aggregator selects clients based on the scores at the end of the aggregation stage within a round, and records the list of trainers eligible to participate in the next round in the contract to notify them of their eligibility.
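For concreteness, the aggregation and client-selection step of Stage 5 can be sketched as follows (a minimal Python example; the weighting by contribution points and the selection threshold are illustrative assumptions, not the exact algorithm of the paper):

import numpy as np

def fedavg(local_models, weights):
    # Weighted FedAvg over a list of state dicts {param_name: np.ndarray}
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    agg = {}
    for name in local_models[0]:
        agg[name] = sum(wi * m[name] for wi, m in zip(w, local_models))
    return agg

def select_clients(trainer_ids, scores, threshold=0.5):
    # Keep trainers whose contribution score reaches the (illustrative) threshold
    return [tid for tid, s in zip(trainer_ids, scores) if s >= threshold]

# Toy usage: two local updates with contribution points 0.7 and 0.3
m1 = {"layer.weight": np.ones((2, 2)), "layer.bias": np.zeros(2)}
m2 = {"layer.weight": np.full((2, 2), 3.0), "layer.bias": np.ones(2)}
global_model = fedavg([m1, m2], weights=[0.7, 0.3])
print(global_model["layer.weight"])                # 0.7*1 + 0.3*3 = 1.6
print(select_clients(["0xA", "0xB"], [0.7, 0.3]))  # only "0xA" passes the threshold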
§.§.§ Stage 6: Token distribution
The job creator distributes tokens to trainers using a pre-deployed token contract based on the score list. This value will later serve as a stake in the final global model.
Fig. <ref> presents the full BCFL architecture. It delineates the roles and functions of the stakeholders involved in BCFL execution, namely the job creator, evaluator, aggregator, and trainer, and outlines the tasks performed by each stakeholder as well as the information recorded and used in IPFS and the smart contract.
§.§ Smart contract
The smart contract controls the overall workflow and manages critical information throughout the entire process. Table <ref> lists the functions that should be included in the contract used in the BCFL cross-device scenario [In this study, we consider both cross-device and cross-silo scenarios. Detailed explanations of these scenarios will be provided later in Section 3.3.]. Below is a detailed description of each function.
Round Control
In BCFL, there is no separate server to manage the learning process, so the smart contract must register and manage the information about each round and its duration. The round control module contains these functions, allowing stakeholders to continuously check the current round and the remaining time during the learning process. A round is completed when the evaluation and aggregation of the round are finished, and the process then moves to the next round.
Client Selection
In client selection, the trainers for each round must be managed based on the evaluation results. After the aggregator performs client selection in a specific round, the system must register the wallet addresses of the trainers who may continue to access the system, and the function must be configured so that the trainers can query it. Trainers can then check whether they are valid trainers at the start of the round and, based on the returned status value, decide whether they can participate further in the BCFL process.
Evaluation
The evaluator assesses the performance of the local models submitted by the trainers and records the evaluation results in the smart contract. The smart contract must be able to store the local model CID and the corresponding score, as well as the trainer information for the model. Access to the evaluation score should be restricted to the evaluator using access restriction measures such as a Solidity modifier. Once the evaluation of all local updates in a specific round is completed, the evaluation completion status value must be changed so that the trainers can move to the next training stage. The evaluator can be changed, but only in unavoidable cases.
Global model saving
After client selection is completed, the aggregator must be able to upload the CID of the aggregated global model through the contract.
Training
During the training process, trainers download the global model, train it with their own data, and then upload the trained model to IPFS. Therefore, the smart contract should provide functions to store and retrieve the global model CID and the updated local model CID for the corresponding round.
Training initiation
The job creator must define which model to use as the genesis model at the start of training, and record the CID of the initial version of the model uploaded through IPFS in the contract so that everyone can use it.
Token distribution
Trainers want to earn revenue for their contributions. The token distribution module oversees this function and connects to ERC-20 and other token interface contracts to issue tokens that can be traded on the actual network. These tokens have value, for example by operating as a stake in the final global model.
FL progress management
In addition to the functions listed above, the following functions are essential for managing the progress and status of the FL training:
* After all the trainers are recruited, the smart contract is notified that the FL training is ready to start.
* Check whether a particular trainer has successfully uploaded the local model CID for the current round, to ensure that there is no abnormality in the training process.
* To evaluate the contribution of the local models for the corresponding round, a list of model CIDs that have completed training in the round is required.
A minimal sketch of how a participant might query these functions is shown below.
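To make these progress-management and round-control functions concrete, the following sketch shows how a participant might query them through a Web3.py contract instance. The function names (`isTrainingReady`, `hasUploadedLocalModel`, `getLocalModelCids`) are illustrative placeholders for whatever the deployed BCFL contract actually exposes, not a fixed API.

```python
from web3 import Web3

def check_round_progress(contract, round_id: int, trainer_address: str) -> dict:
    """Query the FL progress-management functions of the BCFL contract (illustrative names)."""
    return {
        # Has the job creator signalled that all trainers are recruited and training may start?
        "training_ready": contract.functions.isTrainingReady().call(),
        # Did this trainer upload a local model CID for the current round?
        "uploaded": contract.functions.hasUploadedLocalModel(round_id, trainer_address).call(),
        # CIDs of all local models that completed training in this round (input to the evaluator).
        "round_cids": contract.functions.getLocalModelCids(round_id).call(),
    }

# Usage sketch:
# w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))          # e.g., a local Ganache node
# bcfl = w3.eth.contract(address=contract_address, abi=contract_abi)
# status = check_round_progress(bcfl, round_id=3, trainer_address=my_wallet)
```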
§.§ Cross-device and cross-silo scenario
Fig. <ref> depicts the overall workflow of the cross-device scenario in BCFL. The roles and functionalities of each stakeholder are represented in different colors, and badges are attached to indicate interactions with IPFS and smart contracts.
The job creator initiates the BCFL process by reviewing the received tasks and creating the training configuration. This includes the total number of global rounds, training hyperparameters, the number of trainers, evaluation methods, client selection methods, and round duration. Once participant recruitment is completed by the job creator, the genesis model is uploaded to IPFS. The job creator then registers the CID of the uploaded genesis model in the smart contract, providing access to all participants.
Fig. <ref> shows the workflow of the cross-silo scenario, which is an adaptation and enhancement of the model presented in <cit.>. This scenario assumes the participation of three entities, with the final global model being an aggregation of their respective auxiliary models. Within each participant's auxiliary model, the participants take turns performing the evaluator role while the others act as trainers. Since there is no separate evaluator for the main model, the combined evaluation results of each auxiliary model contribute as weights.
The aggregator is tasked with aggregating the auxiliary models and performing the final aggregation for the main model. In Fig. <ref>, a third-party entity not considered a participant is selected as an example. To select an eligible client for aggregation, the BCFL server (which is responsible for whitelist and client request management) needs to maintain a whitelist of authenticated users, such as those with DIDs, selecting users who consistently contribute to the training. Conversely, if an aggregator is selected among the participants, strategic variations can be introduced by randomly selecting an aggregator from the clients or by assigning the role to the client with a high contribution during the training process.
§.§ Stakeholders' functions and other module designs
In this study, we have classified the types of stakeholders into four categories for the cross-device scenario and three categories for the cross-silo scenario.
§.§.§ Job creator
The job creator can be the entity commissioning the task, or it can also be a client who receives requests from external organizations and prepares estimates. The Job Creator is responsible for the creation part of the BCFL task. This includes creating the Training configuration, using the BCFL cross-device scenario smart contract and network standard interfaces such as ERC-20 token contracts, deploying DID contracts, and setting up the Genesis model.
* Training configuration : The training configuration includes details such as the number of global rounds, the number of trainers, and the evaluation and client selection algorithms. This information is disseminated to the stakeholders through servers or smart contracts.
* Smart contracts : The Job Creator deploys the necessary smart contracts for the training. This includes the BCFL contract, token contract using network standard interfaces, and DID contract. The Job Creator accesses the contract after each global round is completed to verify the status and distribute tokens to the trainers.
* Setting the Genesis model : The Job Creator designs the deep learning network according to the objectives and uploads the model to IPFS. This information is recorded in the contract to provide access to all stakeholders.
§.§.§ Evaluator
The Evaluator is assumed to be a node equipped with an appropriate test dataset for evaluating local models. The Evaluator should be capable of receiving rewards for providing evaluation data and contributing to the training process. The Evaluator class consists of modules for local model evaluation, global model evaluation, scoring algorithms, and the smart contract bridge, with the following roles:
* Local model evaluation : Once all the Trainers have completed their local training, the evaluation phase begins. At this stage, the Evaluator selects Trainers who are authenticated through their DIDs and awards additional scores to the selected Trainers for reward purposes. Additionally, to perform the evaluation, the Evaluator needs to access the smart contract to retrieve the CIDs of the models that the Trainers have completed.
* Global model evaluation : To monitor the performance of the final model, an evaluation of the global model is necessary. This evaluation can serve as a comparative measure of how much the local models trained by the Trainers have contributed to the global model.
* Scoring algorithms : Various algorithms can be used for evaluation, with the Shapley value and leave-one-out being the ones most commonly mentioned in the field of federated learning. A trainer's contribution can be measured by the marginal value of her local model to the global model, using a contribution algorithm such as the Shapley value or leave-one-out, according to the guidelines specified in the training configuration created by the Job Creator. The choice of contribution algorithm should consider the objectives and characteristics of the training, and the module should be easy to extend with algorithms tailored to the specific training task; a minimal sketch of a leave-one-out scorer is given after this list.
* Smart contract bridge : Submission of the completed local update CIDs to the blockchain needs to be facilitated through the smart contract bridge. This bridge serves as a connection between the evaluation process and the blockchain, ensuring that all evaluation results are accurately recorded and accessible in later stages of the BCFL process.
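The following is a minimal, illustrative sketch of a leave-one-out contribution scorer combined with the DID authentication bonus described in this section. The callables `aggregate` and `evaluate`, and the 10% bonus factor, are assumptions for illustration rather than a fixed interface of the framework.

```python
from typing import Callable, Dict, List

def leave_one_out_scores(
    local_models: Dict[str, object],                  # trainer wallet address -> local model
    aggregate: Callable[[List[object]], object],      # e.g., FedAvg over a list of models
    evaluate: Callable[[object], float],              # accuracy on the evaluator's test data
    did_whitelist: set,                               # wallet addresses authenticated via DID
    did_bonus: float = 0.10,                          # assumed 10% authentication reward
) -> Dict[str, float]:
    """Score each trainer by the accuracy drop when its model is left out."""
    full_acc = evaluate(aggregate(list(local_models.values())))
    scores = {}
    for addr, model in local_models.items():
        rest = [m for a, m in local_models.items() if a != addr]
        acc_without = evaluate(aggregate(rest)) if rest else 0.0
        score = max(full_acc - acc_without, 0.0)      # marginal contribution of this trainer
        if addr in did_whitelist:                     # award the DID authentication bonus
            score += did_bonus * score
        scores[addr] = score
    return scores
```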
§.§.§ Trainer
Trainers conduct training according to the given Training configuration and record the results in IPFS and smart contracts. Trainers have the option to decide whether or not to utilize their personal information to obtain DID and VC.
Local training
During the local training phase, trainers need to download the global model before starting their training. The CID of the global model is specified in the smart contract, and trainers use it to download the model from IPFS. Training is conducted based on the training configuration provided by the Job Creator. After completing training, each trainer uploads the updated local model to IPFS, records it on the blockchain through the contract, and waits until the evaluation by the evaluator is finished. After the evaluation stage, trainers can check whether they have been removed by the client selection algorithm at the start of the new round. Upon verifying their status, they proceed to the next round. A minimal sketch of this per-round loop is given after the list below.
* Smart contract bridge : Trainers should be able to upload the CIDs of their locally updated models to the blockchain through the smart contract.
* DID Class : Trainers have the option to obtain DID and VC voluntarily. The BCFL server should have authentication logic, and the DID class is responsible for handling requests to the server. The BCFL server maintains a whitelist for authenticated trainers and manages it for the corresponding BCFL task.
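Below is a minimal sketch of the trainer's per-round loop, assuming hypothetical helper objects `bridge` (smart contract bridge) and `ipfs` (IPFS client) with the upload/download and CID get/set behavior described in this section; the method names are illustrative, not a fixed API.

```python
def run_trainer_round(bridge, ipfs, wallet_address, local_data, train_fn, round_id):
    """One global round from the trainer's perspective (illustrative sketch)."""
    # 1. Check eligibility: was this trainer kept by the previous round's client selection?
    if wallet_address not in bridge.get_selected_trainers(round_id):
        return None
    # 2. Download the current global model via the CID stored in the contract.
    global_cid = bridge.get_global_model_cid(round_id)
    global_model = ipfs.download(global_cid)
    # 3. Train locally according to the training configuration (e.g., 2 local epochs).
    local_model = train_fn(global_model, local_data)
    # 4. Register the local update in IPFS and record its CID on-chain.
    local_cid = ipfs.upload(local_model)
    bridge.register_local_model(round_id, wallet_address, local_cid)
    # 5. Wait for the evaluator to finish before the next round starts.
    bridge.wait_for_evaluation(round_id)
    return local_cid
```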
§.§.§ Aggregator
The aggregator performs global model aggregation and receives additional rewards accordingly. The selection criteria for the aggregator can be based on the whitelist provided by the BCFL server, selecting clients other than the training participants, or it can provide flexibility in selecting trainers based on specific requests.
* Smart contract bridge : The aggregator uploads the aggregated global model update to IPFS and registers the CID in the smart contract to allow participants to access the global model.
* Building the global model : The primary role of the aggregator is to aggregate the local models. They access the smart contract and IPFS to download the list of local models and perform model aggregation according to the specified aggregation algorithm.
* Aggregation algorithms : Several aggregation algorithms, including FedAvg, are included in this module. The module should be designed so that an effective aggregation algorithm for the given task can be selected dynamically.
* Client selection algorithms : After performing aggregation, the aggregator needs to select the trainers that will participate in the next round. Various client selection algorithms have recently been proposed, so an algorithm that is efficient for the given task in terms of cost and performance can be chosen. In this study, the options of not performing client selection (all) or excluding trainers based on their performance ranking (scoring order) are provided. We plan to add further client selection algorithms so that they can be tested in BCFL as well; a minimal sketch of FedAvg aggregation combined with score-order selection follows this list.
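As an illustration, the following sketch combines FedAvg aggregation with the score-order client selection option described above. Representing model parameters as NumPy arrays keyed by layer name, and the `keep_ratio` parameter, are assumptions made only for this sketch.

```python
import numpy as np
from typing import Dict, List

def fed_avg(local_models: List[Dict[str, np.ndarray]]) -> Dict[str, np.ndarray]:
    """Unweighted FedAvg: average each parameter tensor across local models."""
    keys = local_models[0].keys()
    return {k: np.mean([m[k] for m in local_models], axis=0) for k in keys}

def select_by_score_order(scores: Dict[str, float], keep_ratio: float = 0.8) -> List[str]:
    """Keep the top fraction of trainers by contribution score for the next round."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:n_keep]

# Usage sketch: aggregate this round's local models, then pick next round's trainers.
# global_model = fed_avg([ipfs.download(cid) for cid in bridge.get_local_model_cids(round_id)])
# next_round_trainers = select_by_score_order(bridge.get_scores(round_id))
```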
§.§.§ Participant in cross-silo scenario
Figure <ref> illustrates the participant module design in the Cross-silo scenario. In the Cross-silo scenario, there is no selection of Evaluators among the participants. Instead, multiple auxiliary models are created, and participants take turns performing the role of an evaluator. Therefore, participants in the Cross-silo scenario can perform both the roles of an evaluator and a trainer.
* Role check : In a specific auxiliary model, the creation of the class should differ depending on whether the participant is a trainer or an evaluator.
* Add auxiliary model : Participants in the Evaluator role deploy their local model as the genesis model. Using this genesis model, other participants start training based on the Cross-device scenario.
§.§.§ BCFL utility functions
Figure <ref> depicts the utility functions required for training. The IPFS Class includes functions necessary for uploading and downloading completed local and global models. Since the IPFS connection is done through the HTTP API, it requires a separate Python class to establish the connection, which is used by all participants.
The Smart contract bridge provides contract-related functionalities to all participants of BCFL. This class is implemented using blockchain interaction libraries such as Web3.py. It includes functionalities such as contract deployment, contract instantiation, wallet-related functions (e.g., wallet information retrieval, token deployment, and balance checking), federated learning-related functions, and token-related functions (e.g., token transfer).
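A minimal sketch of the two utility classes is given below. It assumes a local IPFS HTTP API endpoint and a Web3 provider URL; the contract ABI, address, and contract function names (`registerLocalModel`, `getGlobalModelCid`) are placeholders for whatever the deployed BCFL contract actually exposes.

```python
import json
import pickle
import requests
from web3 import Web3

class IPFSClient:
    """Upload/download Python objects through the IPFS HTTP API."""
    def __init__(self, api_url="http://127.0.0.1:5001/api/v0"):
        self.api_url = api_url

    def upload(self, obj) -> str:
        files = {"file": pickle.dumps(obj)}
        resp = requests.post(f"{self.api_url}/add", files=files)
        return resp.json()["Hash"]              # the content identifier (CID)

    def download(self, cid: str):
        resp = requests.post(f"{self.api_url}/cat", params={"arg": cid})
        return pickle.loads(resp.content)

class ContractBridge:
    """Thin wrapper around the deployed BCFL smart contract."""
    def __init__(self, provider_url, contract_address, abi_path, account):
        self.w3 = Web3(Web3.HTTPProvider(provider_url))
        with open(abi_path) as f:
            abi = json.load(f)
        self.contract = self.w3.eth.contract(address=contract_address, abi=abi)
        self.account = account

    def register_local_model(self, round_id: int, cid: str):
        tx = self.contract.functions.registerLocalModel(round_id, cid).transact(
            {"from": self.account})
        return self.w3.eth.wait_for_transaction_receipt(tx)

    def get_global_model_cid(self, round_id: int) -> str:
        return self.contract.functions.getGlobalModelCid(round_id).call()
```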
§.§ DID and VC certification system
The authentication system using DID and VC is designed to be executed at the service level. Figure <ref> provides an example of the system's utilization. A holder is a user who uses the service by issuing a DID. After obtaining the DID, the holder requests VC issuance from the server and submits it when participating in the FL task. The BCFL server decodes the VC, verifies that it is signed by the BCFL server, and delivers a verification completion message to the FL task registration organization.
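The following sketch illustrates the server-side check described above using the PyJWT library; the issuer DID, key handling, accepted algorithms, and claim names are assumptions made for illustration rather than part of a specific DID method.

```python
import jwt  # PyJWT

BCFL_ISSUER_DID = "did:example:bcfl-server"   # assumed issuer identifier

def verify_vc_jwt(vc_jwt: str, issuer_public_key: str) -> dict:
    """Decode a VC JWT and verify that it was signed by the BCFL server."""
    claims = jwt.decode(
        vc_jwt,
        issuer_public_key,
        algorithms=["ES256K", "RS256"],        # accepted signature algorithms (assumed)
        options={"require": ["iss", "sub", "exp"]},
    )
    if claims["iss"] != BCFL_ISSUER_DID:
        raise ValueError("VC was not issued by the BCFL server")
    return claims   # e.g., the holder DID in claims["sub"] is added to the whitelist

# Usage sketch:
# claims = verify_vc_jwt(submitted_jwt, server_public_key_pem)
# whitelist.add(claims["sub"])
```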
Figure <ref> illustrates the additional score obtained through the DID authentication logic. Users can be divided into authenticated and unauthenticated groups. For the authenticated group, an additional authentication reward score is awarded on top of the evaluation score. This allows the aggregator, when performing client selection based on the evaluation score, to achieve better performance because the selected clients are more likely to come from the authenticated group. By continuously monitoring and removing malicious trainers from the unauthenticated group, the performance of the global model is expected to improve.
§ EXPERIMENTS
§.§ Gas fee Evaluation
In this study, the reference architecture was validated through deployment and execution in the Ethereum environment. The Sepolia test network, an Ethereum test network, was used to verify the deployment costs of the smart contracts used in BCFL. Table <ref> provides an explanation of the deployment costs on the Sepolia test network. The total deployment cost for the four contracts used in BCFL, including the Cross-device scenario, Cross-silo scenario, ERC-20 token contract, and DID Registry contract, is approximately 0.0274 ETH, which is equivalent to around 51.52 USD or approximately 68,026.49 KRW when converted to cash.
Table <ref> shows the costs of deploying the same contracts on a local Ethereum network using Ganache. It can be observed that the deployment costs on the Ganache local network are about 10 times higher compared to the deployment costs on the Sepolia test network. The Sepolia test network is configured to be similar to the Ethereum mainnet, which adopts the PoS (Proof of Stake) consensus algorithm. On the other hand, the network configured using Ganache simulates the Ethereum 1.0 network, which uses the PoW (Proof of Work) consensus algorithm. This difference in gas costs arises due to the simulation of different Ethereum network environments. Therefore, for a thorough verification from a cost perspective, additional deployment and validation in an environment as close as possible to the main network is necessary.
§.§ Verification of operation on non-IID dataset
To verify the operation of BCFL, we first examined the results of performing a basic deep learning task on a non-independent and identically distributed (non-IID) dataset. The experiment was conducted based on the cross-device scenario. The FEMNIST dataset was used for training and was supplied through LEAF <cit.>. FEMNIST is a handwritten character dataset with 62 classes, and the data of each trainer has a non-IID distribution. The same test dataset was used for all evaluations, and a CNN classification model was used for the network. The hyperparameters for training were set to 15 global rounds and 2 local epochs.
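For concreteness, this experiment's setup can be expressed as a training-configuration object of the kind the job creator registers at task creation; the field names below are illustrative, and only the values (FEMNIST, CNN, 15 global rounds, 2 local epochs, four trainers) come from this experiment.

```python
training_config = {
    "dataset": "FEMNIST",          # 62-class handwritten characters, non-IID via LEAF
    "model": "CNN",                # simple convolutional classifier
    "global_rounds": 15,
    "local_epochs": 2,
    "num_trainers": 4,             # trainers in this first experiment
    "aggregation": "FedAvg",
    "scoring": "leave_one_out",    # contribution algorithm (assumed field)
    "client_selection": "all",     # or "scoring_order"
}
```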
Figure <ref> shows the graph of global loss and local loss when four trainers participated in training with the FEMNIST dataset. For the global loss (a), it started at 4.15 in the first round but converged to approximately 0.61 in the final round. The local loss (b) according to cumulative epochs showed a convergence trend as the 15 global rounds progressed, with the local model loss of the four trainers converging as well.
§.§ Global model with DID authentication system
We performed an experiment to validate the combined structure of BCFL and DID. There were a total of 25 trainers used for training, each having their unique wallet address on the Ganache network. All trainers possessed their own training dataset in their local environments, with the FEMNIST dataset being used. Among the 25 trainers, 12 had normal datasets, while the remaining 13 had label-flipped datasets. These 13 trainers were considered malicious trainers, and they did not undergo the authentication process using DID. Figure <ref> shows the test loss graph from the evaluations conducted on these trainers. (a) represents the test loss from the unauthenticated trainers, showing bad performance. This suggests that when aggregated into the global model, they may have a detrimental impact. In contrast, (b) represents the test loss from trainers who underwent DID authentication and possessed normal datasets. It shows a decreasing trend as the training progresses.
To consider the potential impact of unauthenticated malicious trainers participating continuously in training, we conducted evaluations of the global model. In this case, when normal trainers undergo DID authentication, they receive an additional score equal to 10% of the score obtained from local evaluation. As seen in Figure <ref>, when we awarded additional scores to DID-authenticated clients (green), and compared it to the scenario without awarding additional scores (purple), we observed a significant reduction in global loss from approximately 2.9 to 1.3.
§ CONCLUSION
BCFL is a decentralized solution for federated learning using blockchain. In this research, we proposed a reference architecture and aimed to enhance BCFL by integrating it with a DID-based identity authentication system. This integration aims to reduce the participation of malicious trainers by proving the participants' identities and providing authentication rewards to DID-authenticated clients to increase the likelihood of their local models being aggregated into the global model.
We performed deployment and operation verification on the Ethereum simulation networks constructed using Ganache. The results showed that both the global model and local models exhibited a converging trend in terms of loss. Additionally, we simulated scenarios where trainers with malicious intent participated in training without undergoing DID authentication. When we awarded an additional 10% score to authenticated trainers during client selection for the global model evaluation, we observed a significant improvement in performance, with the global loss decreasing from approximately 2.9 to 1.3. This demonstrated the advantages of combining the DID authentication system with BCFL.
Furthermore, we compared the deployment costs of smart contracts in the Ethereum Sepolia test network, which closely resembles the actual main network environment, and the Ethereum local network simulated using Ganache. The results revealed differences in deployment costs between the real production network environment and the development environment. Therefore, it is necessary to consider the cost aspect, practicality, and transaction processing speed in terms of user experience by conducting future operational verifications and performance evaluations in an actual Ethereum main network or equivalent test network.
For future work, we plan to explore options such as using Ethereum client programs to establish and operate a private network that does not incur gas fees. Another option is to use commercially available main networks with fast transaction processing speeds and low gas fees for deploying the BCFL architecture. Layer 2 Architecture could be a promising solution for this approach.
Additionally, while the current stage of the architecture involved experiments as a practical proof of functionality, future research will focus on performance evaluations, considering indicators such as processing speed, cost, and model accuracy.
§ ACKNOWLEDGEMENT
This work was supported in part by the Commercializations Promotion Agency for Research and Development Outcome (COMPA) Grant
funded by the Korean Government (MSIT) through the Future Research Service Development Support under Grant 2022-1-SB1-1.
§ REFERENCES
1
K. Lee et al., โAdaptive federated learning in a dynamic device environment,โ IITP, IT Knowledge Portal Weekly Technology Trend ch. 3, no. 2052, Jun. 2022.
2
F. Lai, X. Zhu, H. V. Madhyastha, and M. Chowdhury, โOort: Informed participant selection for scalable federated learning,โ arXiv preprint arXiv:2010.06081, pp. 1โ20, Dec. 2020.
3
J. Shin, Y. Li, Y. Liu, and S.-J. Lee, โFedBalancer: Data and pace control for efficient federated learning on heterogeneous clients,โ in Proc. of the 20th Annual International Conference on Mobile Systems, Applications and Services, Jun. 2022, pp. 436โ449.
4
S.-O. Lee and Y.-H. Park, โApplication of blockchain technology in industry: A survey,โ Journal of the institute of Electronics and Information Engineers, vol. 56, no. 12, pp. 83โ91, Dec. 2019.
5
D.-H. Kim and B.-S. Kim, โImplement of blockchain-based sharing platform for improving travelers' satisfaction with activities,โ Journal of the institute of Electronics and Information Engineers, vol. 56, no. 11, pp. 52โ58, Nov. 2019.
6
J. Lee, N. Lee, H. C. Oh, and B. An, โBlock chain based election system by artificial intelligence authentication,โ Journal of the institute of Electronics and Information Engineers, vol. 56, no. 4, pp. 37โ43, Apr. 2019.
7
Z. Wang and Q. Hu, โBlockchain-based federated learning: A comprehensive survey,โ arXiv preprint arXiv:2110.02182, pp. 1โ18, Oct. 2021.
8
S. K. Lo, Y. Liu, Q. Lu, C. Wang, X. Xu, H.-Y. Paik, and L. Zhu, โTowards trustworthy AI: Blockchain-based architecture design for accountability and fairness of federated learning systems,โ IEEE Internet of Things Journal, vol. 10, no. 4, pp. 3276โ3284, Feb. 2023.
9
H. Cai, D. Rueckert, and J. Passerat-Palmbach, โ2CP: Decentralized protocols to transparently evaluate contributivity in blockchain federated learning environments,โ arXiv preprint arXiv:2011.07516, pp. 1โ6, Nov. 2020.
10
H. A. C. Dias, โImpact Analysis of Different Consensus, Participant Selection and Scoring Algorithms in Blockchain-based Federated Learning Systems Using a Modular Framework,โ M.S. thesis, Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands, Sep. 2022.
11
S. Caldas, S. M. K. Duddu, P. Wu, T. Li, J. Koneฤnรฝ, H. B. McMahan, V. Smith, and A. Talwalkar, โLeaf: A benchmark for federated settings,โ arXiv preprint arXiv:1812.01097, pp. 1โ9, Dec. 2019.
entry_id: http://arxiv.org/abs/2306.01544v1
published: 20230602134753
title: Social Interactions with Endogenous Group Formation
authors: ["Shuyang Sheng", "Xiaoting Sun"]
primary_category: econ.EM
categories: ["econ.EM"]
Social Interactions with Endogenous Group Formation[We are grateful to Denis Chetverikov, Aureo de Paula, Jinyong Hahn, Brian Krauth, Michael Leung, Arthur Lewbel, Zhipeng Liao, Adriana Lleras-Muney, Rosa Matzkin, Konrad Menzel, Krishna Pendakur, and Geert Ridder for their helpful comments. We are also grateful to seminar and conference participants at Caltech, Iowa, Jinan University (IESR), Rice, UCL, UC Riverside, UPenn, USC, Yale, 2021 Asia Meeting of the Econometric Society, 2021 Australasia Meeting of the Econometric Society, 2021 China Meeting of the Econometric Society, 2022 California Econometrics Conference, 37th Canadian Econometric Study Group Meetings, 55th Annual Conference of the Canadian Economics Association, ASSA/ES North American Winter Meeting 2021, Econometric Society World Congress 2020, and IAAE 2021 Annual Conference. Sun is grateful for financial support from the Social Sciences and Humanities Research Council of Canada through its Insight Development Grants Program. All errors are our own.]
Shuyang Sheng[Department of Economics, University of California at Los Angeles, Los Angeles, CA 90095, USA. Email: [email protected]] and Xiaoting Sun[Department of Economics, Simon Fraser University, Vancouver, BC, V5A 1S6, Canada. Email: [email protected]]
July 31, 2023
This paper explores the identification and estimation of social
interaction models with endogenous group formation. We characterize
group formation using a two-sided many-to-one matching model, where
individuals select groups based on their preferences, while groups
rank individuals according to their qualifications, accepting the
most qualified until reaching capacities. The selection into groups
leads to a bias in standard estimates of peer effects, which is difficult
to correct for due to equilibrium effects. We employ the limiting
approximation of a market as the market size grows large to simplify
the selection bias. Assuming exchangeable unobservables, we can express
the selection bias of an individual as a group-invariant nonparametric
function of her preference and qualification indices. In addition
to the selection correction, we show that the excluded variables in
group formation can serve as instruments to tackle the reflection
problem. We propose semiparametric distribution-free estimators that
are √n consistent and asymptotically normal.
Keywords: social interactions, group formation,
many-to-one matching, exchangeability, selection bias, reflection
problem, semiparametric estimation.
§ INTRODUCTION
Social interaction models are useful tools for exploring the interdependence
of individual outcomes in various contexts, such as education, earnings,
health, and crime. One notable feature of social interactions is their
high tendency to occur among individuals who belong to the same social
or economic group. For instance, students typically interact with
fellow students within their school, and residents often engage with
other residents within their neighborhood.[There is massive literature on peer effects in schools, classrooms,
or dorms (e.g., ; ;
). See <cit.> for a survey.
Examples of neighborhood effects include <cit.>
and <cit.>.] Because individuals choose the colleges they apply to or the neighborhoods
they reside in, the peers they ultimately interact with are often
endogenously determined. Therefore, when we observe that individuals
with more advantageous peers have better outcomes, it is unclear whether
this is due to the influence of peers or their choices of peer groups
<cit.>. This paper develops new methods to identify and
estimate the causal peer effects among group members in the presence
of endogenous groups.
The literature on social interactions typically assumes that peer
relationships are exogenous or endogenous via unobserved group heterogeneity
<cit.>. A recent line
of research has relaxed this assumption by introducing unobserved
individual heterogeneity that influences both the formation of links
and individual outcomes, resulting in endogeneity in peer relationships
(; ; ;
). Such endogeneity can be resolved by controlling
for individual heterogeneity. In contrast to these studies, we explore
a setting where endogeneity in peer relationships stems from selective
entry into a group. For instance, students may be sorted into specific
schools or colleges through an admission process (;
).[Other examples of selective entry include medical graduates being
admitted to residency programs <cit.>
and seniors being placed in nursing homes <cit.>.] This kind of endogeneity cannot be rectified by the approach adopted
in the aforementioned studies because their formation model does not
apply to an individual's decision to enter a group.
The main contribution of this paper is to develop a model of group
formation and provide methods to correct for the selection into groups.
Specifically, we characterize group formation using a two-sided many-to-one
matching model, where individuals in a group are considered as being
โmatched withโ the group. In this model, individuals select groups
based on their preferences, while groups rank individuals according
to their qualifications and accept those with the highest qualifications
until capacities are reached. This process closely resembles how individuals
select schools or colleges <cit.>
and residency programs <cit.>.
The framework covers one-sided group formation as a special case,
in which individuals unilaterally determine the groups they join,
such as neighborhoods <cit.>.
To our knowledge, we are the first to employ a matching framework
to characterize group formation.
The selection into groups leads to a bias in standard estimates of
peer effects. For example, students of higher ability and more favorable
family backgrounds may be sorted into more selective schools, resulting
in an overestimate of peer influence. To examine this selection bias,
we follow the many-to-one matching literature and characterize stable
groups by group cutoffs, where the cutoff of a group is
the lowest qualification among group members <cit.>.
Because equilibrium cutoffs depend on the (observed and unobserved)
characteristics of all n individuals in a market, the selection
bias is a high-dimensional function that involves the observed characteristics
of the n individuals. To reduce dimensionality, we propose a novel
approach that utilizes the limiting approximation of a market as n
approaches infinity <cit.>. Under the limiting
approximation, we can express the selection bias of an individual
as a nonparametric function of her own preference and qualification
indices. Our result extends <cit.>
and <cit.> to more general two-sided group
formation with nonparametric unobservables. The expression also implies
that including group fixed effects, as seen in many empirical studies,
cannot effectively correct for the selection bias because the bias
depends on individual characteristics in group formation.
The selection function is generally group-specific because the cutoffs
and distribution of unobservables may differ across groups. This feature
may pose a further challenge in identifying peer effects because the
effects of group-level variables (such as average peer characteristics)
cannot be distinguished from a nonparametric selection function that
is also group-specific. To address this issue, we assume that the
joint distribution of unobservables in group formation is exchangeable
across groups. With knowledge of the cutoffs, we can represent the
selection bias using a group-invariant selection function. Moreover,
we propose a novel method to identify the cutoffs that leverages between-group
variation under exchangeability.
In addition to the selection correction, the identification of peer
effects may be susceptible to the reflection problem <cit.>.
The literature has proposed achieving identification through variation
in group sizes or intransitive triads (;
; ; ).
<cit.> discovered that the presence of selection
can help facilitate the identification of peer effects. Building upon
their insight, we propose to utilize the individual characteristics
in group formation that are excluded from the outcome equation as
instruments for peer outcomes. The rationale is that these excluded
variables can impact which group an individual joins and consequently
affect the outcomes of her peers. Because the instruments are individual-specific,
it is unlikely for the predicted values of peer outcomes to be collinear
with peer characteristics, thereby alleviating the reflection problem.
This identification strategy remains applicable regardless of whether
oneself is included in a group average, group sizes vary, or networks
are present within groups. To our knowledge, we are the first to
utilize excluded variables in group formation to tackle the reflection
problem.
We propose semiparametric distribution-free methods for estimating
the parameters <cit.>. We first develop
a three-step kernel estimator for the group formation parameters based
on our identification results. Building on this, we then propose a
semiparametric three-step GMM estimator for the social interaction
parameters, where we estimate the selection bias using a sieve estimator
and the interaction parameters using GMM. Moreover, we utilize the
symmetry of the selection function under exchangeability to reduce
the dimensionality of sieve.
The asymptotic properties of our proposed estimators cannot be established
using existing methods because the adjacency matrix is endogenous
and can be correlated with the observables. We therefore generalize
the methods for weighted U-statistics <cit.>
to account for the endogeneity in group memberships, where the weights
can be random due to networks within groups. Moreover, we establish
a set of sufficient conditions that restrict the dependence through
networks, enabling the attainment of the desired √n rate.
We show that the estimators in both group formation and social interactions
are √n consistent and asymptotically normal. In addition,
we provide simulation evidence that the estimators perform well.
The remainder of the paper is organized as follows. Section <ref>
develops the model. Section <ref> derives the
selection bias. Section <ref> investigates the
identification. Section <ref> presents the estimation
methods. Section <ref> conducts a simulation study.
Section <ref> concludes the paper. All the proofs
in the paper are provided in Appendix <ref>. Additional
results are presented in the Online Appendix.[All the numbered items designated with an โOโ
are shown in the Online Appendix.]
§ MODEL
§.§ Social Interactions
Consider a set of individuals 𝒩 = {1, 2, …, n} who interact following the standard linear-in-means social interaction model <cit.>
y_i = ∑_j=1^n w_ij y_j γ_1 + ∑_j=1^n w_ij x'_j γ_2 + x'_i γ_3 + ϵ_i.
In this specification, y_i ∈ ℝ represents the outcome of interest (earnings, employment, or education), x_i ∈ ℝ^d_x is a vector of observed characteristics, ϵ_i ∈ ℝ is an unobserved shock, and w_ij ∈ ℝ_+ denotes the weight of peer j on individual i. We assume that i's outcome y_i depends on ∑_j=1^n w_ij y_j and ∑_j=1^n w_ij x_j, the weighted averages of outcomes and observed characteristics of i's peers. Following the terminology in <cit.>, γ_1 captures the endogenous social effect, and γ_2 captures the exogenous/contextual social effect. The parameter of interest is γ = (γ_1, γ'_2, γ'_3)' ∈ ℝ^{2d_x+1}.
In this paper, we focus on a setting where the adjacency matrix w = (w_ij) ∈ ℝ_+^{n^2} presents a group structure. Suppose there is a set of groups 𝒢 = {1, …, G} that the individuals can join. We assume that the number of groups G is finite and each group g ∈ 𝒢 has a predetermined capacity n_g that is proportional to n. The groups are non-overlapping (such as colleges), so one joins only one group. Let g_i denote the group that i joins and g = (g_1, …, g_n)' the n×1 vector that stacks g_i. Assume that w_ij = 0 if g_i ≠ g_j, that is, an individual is influenced by her groupmates only. A typical example is given by group averages where w_ij = 1/n_{g_i} if g_i = g_j and w_ij = 0 if g_i ≠ g_j <cit.>. We can also allow w_ij to take a more general form so long as the interactions occur within a group. For example, group members may form additional friendships and only friends in the group can have an impact.
Let y denote the n×1 vector that stacks y_i, x the n×d_x matrix that stacks x'_i, and ϵ the n×1 vector that stacks ϵ_i. Write equation (<ref>) in a matrix form
y = w y γ_1 + w x γ_2 + x γ_3 + ϵ.
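Although not needed for the model definition, it is worth recalling a standard implication of this specification (a known result in the linear-in-means literature, stated here only for intuition): whenever I − γ_1 w is invertible (e.g., |γ_1| < 1 with row-normalized w), the outcome equation can be solved as
y = (I − γ_1 w)^{-1} (w x γ_2 + x γ_3 + ϵ),
so peer outcomes w y are functions of w x, w^2 x, w^3 x, and so on. In the special case where w consists of including-oneself group averages, w^k x = w x for all k ≥ 1, and these regressors become collinear; this is the reflection problem that the identification strategy discussed in the introduction is designed to overcome.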
The literature on linear-in-means models typically assumes that the
adjacency matrix w is independent of the unobservables
ϵ. We relax this assumption to accommodate
endogenous groups. Specifically, we allow w to be
endogenous, although endogeneity only occurs at the group level (as
detailed in Assumption <ref>). This assumption allows
us to focus on group formation to address the endogeneity of w.
In the next section, we develop a model of group formation to account
for endogenous groups.
§.§ Group Formation
Suppose that, prior to social interactions, the groups are established
through two-sided decisions. On one side, an individual chooses a
group according to her preferences over groups. On the other side,
a group ranks individuals based on their qualifications and admits
those with the highest qualifications until its capacity is reached.[For example, in college admissions, a student's qualifications reflect
the preferences of colleges, whereas in school assignments, a student's
qualifications may depend on prioritized factors such as the student's
district of residence and whether the student has a sibling attending
the school.] Two-sided group formation has various applications including school
choices, college admissions, residency program admissions, and nursing
home placements. The framework nests one-sided group formation as
a special case, where each group has an infinite capacity. Thus, an
individual unilaterally determines which group to enter.[<cit.> and <cit.>
provided examples of one-sided group formation where individuals decide
whether to select into a potential group or choose one out of multiple
groups.]
Two-sided group formation can be equivalently characterized as two-sided
many-to-one matching without transfers, where individuals in a group
are considered as being โmatched withโ the group. Therefore, we
specify group formation following the literature on two-sided many-to-one
matching without transfers (; ).
Utility
For individual i ∈ 𝒩 and group g ∈ 𝒢, let u_ig denote i's utility of joining group g and v_gi denote i's qualification for group g:
u_ig = z'_i δ_g^u + ξ_ig and v_gi = z'_i δ_g^v + η_gi,
where z_i ∈ ℝ^d_z represents a vector of individual- or pair-specific observed characteristics that affect the preferences and qualifications in group formation.[To illustrate that the specification in equation (<ref>) covers both individual- and pair-specific characteristics, suppose that there are two groups and we specify v_gi = s_i δ_g,s + d_ig δ_g,d + η_gi, where s_i is an individual-specific variable and d_ig is a pair-specific variable (which can be an individual-specific variable interacted with a group-specific variable). This example can be represented by equation (<ref>) with z_i = (s_i, d_i1, d_i2)', δ_1^v = (δ_1,s, δ_1,d, 0)', and δ_2^v = (δ_2,s, 0, δ_2,d)'.] Note that z_i may overlap with the observed characteristics x_i in social interactions. The group-specific coefficients δ_g^u, δ_g^v ∈ ℝ^d_z allow the effect of z_i to be heterogeneous across groups. ξ_ig ∈ ℝ and η_gi ∈ ℝ represent pair-specific unobserved utility and qualification shocks of individual i for group g.[The preference of an individual for a group may rely on the outcomes she anticipates upon joining that group. For example, students may prefer colleges that can help them secure better job opportunities. Nevertheless, the utility in equation (<ref>) can be regarded as a reduced-form specification that has already accounted for the expected outcomes.] Let ξ_i = (ξ_i1, …, ξ_iG)' and η_i = (η_1i, …, η_Gi)'.
We assume that the joint distribution of ϵ_i, ξ_i, and η_i is nonparametric, which has the advantage of allowing ϵ_i to have flexible dependence with ξ_i and η_i.[The nonparametric setup implies that z_i does not include a constant or group-specific variables because such group-level heterogeneity cannot be separated from the nonparametric ξ_i and η_i.]
It is important to note that our model does not allow for peer effects
in group formation, meaning that the utility and qualification in
equation (<ref>) do not consider prospective group members.
However, we can approximate the peer effects by incorporating past
group members' characteristics and outcomes into z_i as long
as these group-level measures do not vary over time. For example,
we can use the percentage of female students from the previous year
as a proxy for the gender peer effect, assuming that the school's
gender composition remains relatively constant over time.
[College admissions]
College admissions provide an example of two-sided
group formation, where students apply for colleges based on their
utility preferences (u_ig), and colleges admit students based
on their qualifications (v_gi). In this example, ฮพ_i represents
student i's unobserved preferences for colleges (family tradition),
and ฮท_i represents i's unobserved ability that affects
her qualifications (extracurricular activities). z_i represents
observed characteristics that affect i's preferences and qualifications,
which can be individual-specific (family income, parental education,
and SAT scores), or pair-specific (distance to a college, and minority
status of a student interacted with the past minority composition
in a college).
Equilibrium
Following the matching literature <cit.>, we assume
that the group formation outcome is stable.[In college admissions, stability can be achieved through various
means. One way is for students to apply to all acceptable colleges
and use a stable mechanism like the deferred-acceptance algorithm
to determine the matching <cit.>. However, if
students choose not to apply to all acceptable colleges due to application
costs or errors in the application process, stability can still be
achieved theoretically as long as students know their own evaluations
by the colleges before submitting their rank orders <cit.>.] <cit.> showed that a stable matching exists
and can be characterized by group cutoffs. Let p_g denote
the cutoff of group gโ๐ข. It is given by the lowest
qualification among the group members if the capacity constraint is
binding; otherwise, the cutoff is set to -โ. Namely, p_g=inf_i:g_i=gv_gi if โ_iโ๐ฉ1{g_i=g}=n_g,
and p_g=-โ otherwise.
Given the cutoffs p=(p_1,โฆ,p_G)', let ๐_i(p)={gโ๐ข: v_giโฉพ p_g}โ๐ข
denote individual i's choice set โ the subset of groups
that i qualifies for. Within ๐_i(p), i chooses
the group that yields the highest utility, g_i=max_gโ๐_i(p)u_ig.
This is a multinomial discrete choice problem with the choice set
๐_i(p) determined endogenously by the cutoffs p.
Individual i joins group g if (i) i qualifies for group g,
and (ii) for any other group h g, either i does not prefer
group h, or i does not qualify for group h.[For simplicity of exposition, we assume that individuals always prefer
to join a group. This can be relaxed by assuming that the utility
of not joining any group is u_i0=ฮพ_i0. Such relaxation does
not lead to substantive technical modifications of the results.] That is,
1{g_i=g}
= 1{v_giโฅ p_g}ยทโ_hโ g1{u_ih<u_ig or v_hi<p_h}
= 1{ฮท_giโฅ p_g-z'_iฮด_g^v}ยทโ_hโ g1{ฮพ_ih-ฮพ_ig<z'_i(ฮด_g^u-ฮด_h^u) or ฮท_hi<p_h-z'_iฮด_h^v}.
Equation (<ref>) indicates that the group that i joins
depends on i's observed and unobserved characteristics z_i,
ฮพ_i, ฮท_i as well as the cutoffs p. We write g_i=g(z_i,ฮพ_i,ฮท_i;p).
We remark that in one-sided group formation, the capacities are infinite
and the cutoffs p_g are set to -โ. The choice set ๐_i(p)
is simply ๐ข. The optimal decision in equation (<ref>)
reduces to 1{g_i=g}=โ_kโ g1{u_ik<u_ig}=โ_kโ g1{ฮพ_ik-ฮพ_ig<z'_i(ฮด_g^u-ฮด_k^u)},
and we return to a standard multinomial discrete choice problem. The
group that individual i joins is a function of z_i and ฮพ_i
only, that is, g_i=g(z_i,ฮพ_i).
In a stable matching, the cutoffs p clear the supply of and demand
for each group.[The equilibrium cutoffs p satisfy the market-clearing equations:
โ_iโ๐ฉ1{g(z_i,ฮพ_i,ฮท_i;p)=g}โค n_g
for all gโ๐ข, and โ_iโ๐ฉ1{g(z_i,ฮพ_i,ฮท_i;p)=g}=n_g
if p_g>-โ <cit.>.] Let z denote the nร d_z matrix
that stacks z'_i, ฮพ the nร G vector
that stacks ฮพ_i, and ฮท the nร G
vector that stacks ฮท_i. An equilibrium cutoff vector can be
represented as p(z,ฮพ,ฮท).[There may be multiple equilibrium cutoffs in a finite-n economy.
We denote by p(z,ฮพ,ฮท)
the equilibrium that is selected by nature.] Given the equilibrium cutoffs, the groups are formed following equation
(<ref>), and the equilibrium groups can be written as g(z,ฮพ,ฮท;p(z,ฮพ,ฮท)).
§ SELECTION BIAS
In this section, we investigate the bias that results from the selection
into groups. Throughout the paper, we maintain the following assumptions.
w is independent of ϵ conditional on g.
(i) x_i, z_i, ϵ_i, ξ_i, and η_i are i.i.d. for i = 1, …, n. (ii) The joint cdf of ϵ_i, ξ_i, and η_i is continuously differentiable. (iii) For all i, ϵ_i, ξ_i, and η_i are independent of x_i and z_i.
As previously stated, Assumption <ref> requires that
selection occurs only at the group level. This conditional independence
assumption implies that 𝔼[ϵ_i | x, z, g, w] = 𝔼[ϵ_i | x, z, g].
For w that represents group averages with binding
capacities, the assumption is trivially satisfied. For a more general
w, the assumption requires that w
is independent of ϵ given the group structure
g. For example, conditional on group memberships,
how groupmates make friends is independent of ϵ.[We can potentially relax
friendship formation within a group, following the setup in <cit.>.
However, this endogeneity would complicate the analysis without providing
additional insights. We therefore do not pursue it here.] The assumption is satisfied if students in a school are randomly
assigned to dorms or classes where they interact. Assumption <ref>(i)
imposes an i.i.d. assumption that is typical in social interactions.
Assumption <ref>(ii) imposes a smoothness assumption
on the joint cdf of the unobservables.[The assumption is to ensure that the conditional probability that
individual i joins group g is continuously differentiable.] Assumption <ref>(iii) is a standard assumption that
the observables are exogenous.
§.§ The Presence of Selection Bias
Under Assumption <ref>, equation (<ref>) presents a selection bias if
𝔼[ϵ_i | x, z, g(z, ξ, η; p(z, ξ, η))] ≠ 0.
Under Assumption <ref>(i)(iii), this bias arises from the dependence between the outcome shock ϵ_i and the unobservables in group formation ξ_i and η_i. The dependence causes ϵ_i to be correlated with g_i because g_i is a function of ξ_i and η_i. Moreover, ϵ_i can be correlated with the entire group structure g through the equilibrium cutoffs p(z, ξ, η), as the equilibrium cutoffs depend on ξ and η, which include ξ_i and η_i (the general equilibrium effect). Below we give an example to illustrate how ϵ_i depends on ξ_i and η_i and how this dependence leads to a selection bias.
[Example <ref> continued]
In college admissions, ฯต_i represents
unobserved ability (IQ and motivation) that affects y_i (labor
market outcome). If this ability also affects an student's performance
in high school, then ฯต_i is correlated with ฮท_i.
Moreover, ฯต_i is correlated with ฮพ_i if, for example,
students of high ability prefer colleges of high (unobserved) quality.
In the presence of this correlation, the admission process will sort
students of higher ability into more selective colleges. Similarly,
if certain family backgrounds improve a student's qualifications and
preferences for higher-ranked colleges, then students from more favorable
family backgrounds will be sorted into more selective colleges. Therefore,
we will observe a positive assortative matching between students and
colleges in the sense that students who attend more selective colleges
are of higher ability and more favorable family backgrounds. The sorting
yields a positive correlation between ฯต_i and peer characteristics/outcomes
in a college. Without correcting for the sorting, we will overestimate
the peer effects.[This example neglects the general equilibrium effect. In fact, the
cutoffs of certain colleges may be raised by the presence of a high-ability
student, potentially reinforcing sorting and increasing bias. In Section
<ref>, we show that the general equilibrium effect is
negligible in a large market.]
Simulation Evidence
To further illustrate the selection bias, we provide simulation evidence using the design in Section <ref>. We consider both exogenous and endogenous group formation, depending on whether ϵ_i is correlated with η_i or not. Using the simulated data, we display in Figure <ref> the correlation between the group average characteristic x̄_g_i = ∑_j w_ij x_j and the unobserved shock ϵ_i for both exogenous and endogenous groups.[We control for the individual characteristic x_i by regressing x̄_g_i (resp. ϵ_i) on x_i and taking the residual of x̄_g_i (resp. ϵ_i).] Under the specification that x_i and z_i are correlated in the same direction as ϵ_i and η_i are correlated, we find that x̄_g_i is uncorrelated with ϵ_i in exogenous groups but positively correlated with ϵ_i in endogenous groups. In the latter case, if we run an OLS regression of y_i on x̄_g_i (assuming γ_0 = 0), the estimate will be upward biased.
§.§ Limiting Approximation
Because equilibrium cutoffs depend on the (observed and unobserved)
characteristics of the n individuals in a market, the selection
bias in equation (<ref>) is a high-dimensional
function that involves the observed characteristics of all the n
individuals. To reduce dimensionality, we propose a novel approach
that utilizes the limiting approximation of a market as n goes
to infinity. We find that the correlation between ϵ_i
and g through the equilibrium cutoffs becomes negligible
as the market size grows large, thereby reducing the dimensionality
of the selection bias.
To this end, let p_n = (p_n,1, …, p_n,G)' denote a vector of equilibrium cutoffs in a market with n individuals. <cit.> showed that there is a unique stable matching in the limiting market as n → ∞, which can be captured by a unique vector of limiting equilibrium cutoffs, denoted by p^* = (p_1^*, …, p_G^*)'.
Unlike the finite-n cutoffs p_n, the limiting cutoffs p^*
are deterministic because they are determined by the distribution
of the characteristics and each individual has a negligible impact.[In order to establish the limiting approximation, <cit.>
made the assumption that the number of groups is finite, while the
size of each group approaches infinity. Consequently, we adopt the
same assumption as <cit.>. Exploring the scenario
with a growing number of groups could be of interest, but we defer
this extension to future work.]
In the following proposition, we follow <cit.>
and show that the equilibrium cutoffs in a finite-n market converge
to the equilibrium cutoffs in the limiting market as n → ∞.
By continuous mapping, the selection bias in a finite-n market
also converges to the selection bias in the limiting market.
Under Assumption <ref>(i)-(ii),
we have
𝔼[ϵ_i | x, z, g(z, ξ, η; p_n)] →_p 𝔼[ϵ_i | x_i, z_i, g_i(z_i, ξ_i, η_i; p^*)].
The proposition indicates that the selection bias in a finite-n
market can be approximated by the selection bias in the limiting market.
Because the limiting cutoffs are deterministic, the selection bias
of individual i in the limiting market depends on i's characteristics
only. This reduces the dimensionality of the selection bias from O(n)
to a finite number.
Large-market approximation has been widely used in the matching literature
<cit.>.
We follow the literature and assume that the selection bias takes
the limiting form.[Accounting for the sampling error due to the limiting approximation
is left for future research.]
§.§ Nonparametric Form
Now we derive the selection bias. We start with a nonparametric form
where the selection function is group-specific. The group-specific
selection, however, poses a challenge in identifying group-level peer
effects. By imposing an exchangeability assumption, we can represent
the selection bias alternatively using a group-invariant selection
function.
ยง.ยง.ยง Group-specific selection function
With nonparametric unobservables, the selection bias is a nonparametric
function of the group formation indices. Specifically, let ฯ_i=(z'_iฮด_1^u,โฆ,z'_iฮด_G^u,z'_iฮด_1^v,โฆ,z'_iฮด_G^v)'โโ^2G
denote a vector of preference and qualification indices of individual
i.[The qualification index of a group matters
only if the capacity is binding. If the capacity is not binding, then
the qualification index of this group should be dropped from ฯ_i.
Specifically, let ๐ขโ๐ข denote
the subset of groups whose capacity is binding and G
the cardinality of ๐ข. Then ฯ_i=(z'_iฮด_g^u,gโ๐ข;z'_iฮด_g^v,gโ๐ข)โโ^G+G.] We can represent the selection bias as a group-specific nonparametric
function of ฯ_i.
Under Assumptions <ref>โ<ref>,
for any gโ๐ข, there exists a function ฮป_g:โ^2Gโโ
Below we illustrate the selection function ฮป_g(ยท) in
the case of two groups.
Suppose that there are two groups (G=2) with
the group formation indices ฯ_i=(z'_iฮด_1^u,z'_iฮด_2^u,z'_iฮด_1^v,z'_iฮด_2^v)'โโ^4.
Let ฮพ_i=(ฮพ_i1,ฮพ_i2)' and ฮท_i=(ฮท_1i,ฮท_2i)'.
Let f(ฯต_i,ฮพ_i,ฮท_i) denote the joint pdf of ฯต_i,
ฮพ_i, and ฮท_i, and f(ฮพ_i,ฮท_i) the joint
pdf of ฮพ_i and ฮท_i. The selection bias of individual
i if joining group 1 is
๐ผ[ฯต_i|x_i,z_i,g_i=1]=โซ_R_1(ฯ_i)ฯต_if(ฯต_i,ฮพ_i,ฮท_i)dฯต_idฮพ_idฮท_i/โซ_R_1(ฯ_i)f(ฮพ_i,ฮท_i)dฮพ_idฮท_i=:ฮป_1(ฯ_i),
where R_1(ฯ_i) denotes the set {(ฮพ'_i,ฮท'_i)'โโ^4:ฮท_1iโฅ p_1-z'_iฮด_1^v, ฮพ_i2-ฮพ_i1<z'_i(ฮด_1^u-ฮด_2^u) or ฮท_2i<p_2-z'_iฮด_2^v}.
Similarly, the selection bias if joining group 2 is
๐ผ[ฯต_i|x_i,z_i,g_i=2]=โซ_R_2(ฯ_i)ฯต_if(ฯต_i,ฮพ_i,ฮท_i)dฯต_idฮพ_idฮท_i/โซ_R_2(ฯ_i)f(ฮพ_i,ฮท_i)dฮพ_idฮท_i=:ฮป_2(),
where R_2(ฯ_i) denotes the set {(ฮพ'_i,ฮท'_i)'โโ^4:ฮท_2iโฅ p_2-z'_iฮด_2^v, ฮพ_i1-ฮพ_i2<z'_i(ฮด_2^u-ฮด_1^u) or ฮท_1i<p_1-z'_iฮด_1^v}.
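For intuition, ฮป_1(ฯ_i) and ฮป_2(ฯ_i) can be evaluated by simulation once a joint distribution of the unobservables is specified. The sketch below approximates ฮป_1(ฯ_i) for the two-group case by Monte Carlo under an assumed joint normal specification in which ฯต_i shifts the qualification shocks; the distribution, the correlation parameter, and the variable names are illustrative assumptions only, since the model leaves f(ฯต_i,ฮพ_i,ฮท_i) nonparametric.

```python
import numpy as np

def selection_bias_group1(phi, p, rho=0.5, n_draws=200_000, seed=0):
    """Monte Carlo approximation of lambda_1(phi_i) in the two-group example.

    phi = (z'delta_1^u, z'delta_2^u, z'delta_1^v, z'delta_2^v), p = (p_1, p_2).
    The joint normal specification below (eps_i shifting the qualification
    shocks eta_gi through rho) is an illustrative assumption only.
    """
    rng = np.random.default_rng(seed)
    u1, u2, v1, v2 = phi
    c1, c2 = p[0] - v1, p[1] - v2                 # qualification thresholds net of indices
    eps = rng.standard_normal(n_draws)            # outcome unobservable eps_i
    xi1, xi2 = rng.standard_normal((2, n_draws))  # preference shocks xi_i1, xi_i2
    eta1 = rho * eps + rng.standard_normal(n_draws)   # eta_1i, correlated with eps_i
    eta2 = rho * eps + rng.standard_normal(n_draws)   # eta_2i, correlated with eps_i
    # R_1(phi_i): qualify for group 1, and either prefer 1 to 2 or fail to qualify for 2
    in_R1 = (eta1 >= c1) & ((xi2 - xi1 < u1 - u2) | (eta2 < c2))
    return eps[in_R1].mean()
```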
The result in Proposition <ref> extends the standard
sample selection models (; )
to social interaction models with endogenous group formation.[<cit.> explored a sample selection model where
the specification is fully nonparametric. They represented the selection
bias as a nonparametric function of the propensity scores of a multivariate
selection rule. In our setting, the propensity scores that correspond
to the selection rules in equation (<ref>) are not available
because we do not observe individuals' rankings over the groups or
whether they qualify for each group. Instead, we impose an index structure
on the preferences and qualifications so that the selection bias can
be represented as a function of these group formation indices.] <cit.> and <cit.>
considered social interactions with endogenous one-sided group formation,
where the unobservables follow a parametric distribution. We extend
these studies to more general two-sided group formation with nonparametric
unobservables.
Proposition <ref> and Example <ref> indicate
that the selection function ฮป_g(ยท) is group-specific
for two reasons. First, the cutoffs are group-specific and are absorbed
into the selection function. Second, the distribution of the unobservables
(ฮพ_ig,ฮท_gi) may differ across groups.[In addition, equation (<ref>) indicates that the components
of ฯ_i for group g and for the other groups hโ g
also play different roles in the selection function. Nevertheless,
this group-specific feature can be handled by separating the components
for group g from those for the other groups. See equation (<ref>)
for a specific expression.] The group-specific feature poses a challenge in the identification
of group-level peer effects. To see this, let w_i denote the
ith row of w. If both w_iy and
w_ix are group averages that include i, they
are invariant within a group.[If w_iy and w_ix are group averages
that exclude i, they converge to including-oneself group averages
as the number of group members goes to infinity. Hence, the variation
of w_iy and w_ix in a group vanishes
to zero as the group size grows.] The effects of these group-level variables cannot be separately identified
from a nonparametric selection bias that is also group-specific. This
is similar to the case of panel data models, where the effects of
time-invariant variables cannot be separately identified from an individual
fixed effect. In the next section, we propose a novel idea to resolve
the problem of group-specific selection.[If we have an additional network within each group, conditional on
g_i there may be individual-level variation in w_iy
and w_ix because individuals in a group can have
different friends. In this case, we can partial out the group-specific
selection bias by interacting the indices ฯ_i with group dummies
and utilize the within-group variation in w_iy and
w_ix to identify ฮณ. This approach is not
applicable if there is no within-group variation (e.g., group averages).]
We remark that the selection bias is also individual-specific because
it depends on ฯ_i. While the selection occurs at the entry
into a group, individuals with different values of ฯ_i are
subject to different selection biases. Therefore, we cannot correct
for the selection bias by simply introducing group fixed effects in
the outcome equation. An appropriate correction requires us to utilize
the individual-level information in ฯ_i.
ยง.ยง.ยง Group-invariant selection function
Our idea to tackle the group-specific selection function is motivated
by the observation that if the cutoffs are known, by arranging the
components of ฯ_i properly, the selection function becomes
group-invariant so long as the distribution of the unobservables does
not vary across groups.
To this end, we introduce group fixed effects to account for group-level
heterogeneity and assume that the remaining individual-varying unobservables
are exchangeable across groups. Specifically, rewrite the utility
u_ig in equation (<ref>) as
u_ig =ฮฑ_g+z'_iฮด_g^u+ฮพ_ig,
where ฮฑ_g denotes a group fixed effect.[We may also include group fixed effects in the qualification v_gi,
but they cannot be distinguished from the cutoffs and thus are normalized
to 0.] Let ฮฑ=(ฮฑ_1,โฆ,ฮฑ_G)' be a vector of group
fixed effects, which captures vertical preferences over the groups.
For example, in college admissions, ฮฑ represents college quality
and reputation. We assume that the joint distribution of individual-varying
unobservables is exchangeable across groups.
[Exchangeability]
The joint pdf of ฯต_i, ฮพ_i, and
ฮท_i satisfies
f(ฯต_i,ฮพ_i1,โฆ,ฮพ_iG,ฮท_1i,โฆ,ฮท_Gi)=f(ฯต_i,ฮพ_ik_1,โฆ,ฮพ_ik_G,ฮท_k_1i,โฆ,ฮท_k_Gi),
for any permutation (k_1,โฆ,k_G) of (1,โฆ,G).
Assumption <ref> assumes that the joint distribution of
the unobservables is invariant under permutations of the groups.
Exchangeability has been used in various contexts such as differentiated
product markets <cit.>, panel data <cit.>,
matching <cit.>, and network formation <cit.>.
We impose exchangeability so that the unobserved heterogeneity across
groups is fully captured by group fixed effects. The assumption is
more general than an i.i.d. assumption because it permits the unobservables
(ฮพ_ig,ฮท_gi) to be dependent across groups. In particular,
it allows for an individual effect in ฮพ_i and ฮท_i.[Exchangeability rules out heterogeneous correlation between colleges.
For example, a student's unobserved preferences may be more strongly
correlated between two elite colleges than between an elite and a
non-elite college. The same may be true for a college's evaluations
of a student's extracurricular activities. Exchangeability also requires
the unobserved preferences/qualifications to have homogeneous variances
across colleges.]
Given the modified utility in (<ref>), rewrite the selection
bias in Proposition <ref> as
๐ผ[ฯต_i|x_i,z_i,g_i=g] = ๐ผ[ฯต_i|x_i,z_i,ฮท_giโฅ p_g-z'_iฮด_g^v, and โ hโ g
ฮพ_ih-ฮพ_ig<ฮฑ_g+z'_iฮด_g^u-ฮฑ_h-z'_iฮด_h^u, or ฮท_hi<p_h-z'_iฮด_h^v]
=: ฮป^e(ฯ_i^e(g)),
where ฯ_i^e(g)=(ฯ_ig^e',ฯ_i,-g^e')'โโ^2G
is a 2Gร1 vector, with ฯ_ig^e=(ฮฑ_g+z'_iฮด_g^u,p_g-z'_iฮด_g^v)'โโ^2
representing the extended indices for group g, and ฯ_i,-g^e=(ฯ_ih^e',โ hโ g)'โโ^2G-2
representing the extended indices for the remaining G-1
groups. Because p and ฮฑ are absorbed into ฯ_i^e(g)
and the components for group g are separated from the components
for the other groups, under exchangeability the selection function
ฮป^e(ยท) is invariant across groups. In contrast to Proposition
<ref>, equation (<ref>) leverages the
arguments ฯ_i^e(g), rather than the functional form ฮป^e(ยท),
to capture the group-specific feature, thereby yielding a group-invariant
selection function.
As a remark, exchangeability implies that the order of the extended
indices in ฯ_i,-g^e does not matter โ the selection function
is symmetric in the extended indices ฯ_ih^e and ฯ_ihฬ^e
for any distinct h,hฬโ g. This symmetry can further reduce
the number of nuisance parameters, which we will delve into further
in Section <ref>.
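Constructing ฯ_i^e(g) is a mechanical rearrangement of the group formation indices. The following sketch builds it from an individual's characteristics, the slope coefficients, the fixed effects ฮฑ, and the cutoffs p; the function and argument names are ours, and groups are indexed from zero purely for convenience.

```python
import numpy as np

def extended_indices(z_i, g_i, delta_u, delta_v, alpha, p):
    """Build phi_i^e(g_i): the own-group block first, the other groups' blocks after.

    delta_u, delta_v: (G, dim_z) arrays stacking delta_g^u and delta_g^v;
    alpha, p: length-G arrays of group fixed effects and cutoffs; g_i in {0,...,G-1}.
    Under exchangeability the selection function is symmetric in the trailing
    blocks, so their ordering is immaterial.
    """
    util = alpha + delta_u @ z_i          # alpha_g + z_i' delta_g^u for every group g
    qual = p - delta_v @ z_i              # p_g - z_i' delta_g^v for every group g
    own = [util[g_i], qual[g_i]]
    others = [x for g in range(len(p)) if g != g_i for x in (util[g], qual[g])]
    return np.array(own + others)
```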
ยง IDENTIFICATION
Moving on to the identification of parameters, we first discuss the
identification of group formation indices, and then turn to the identification
of peer effects in the presence of selection bias.
ยง.ยง Identification of Group Formation Indices
Let ฮด=(ฮด_1^u',โฆ,ฮด_G^u',ฮด_1^v',โฆ,ฮด_G^v')'
collect the slope parameters in group formation and ฮธ=(ฮด',p',ฮฑ')'
all the group formation parameters. The identification of ฮด
was established in <cit.>.[A main challenge in identifying ฮด is that an individualโs
choice set ๐_i(p) is unobservable to researchers. To
achieve identification, <cit.> utilized excluded
variables that can act as โdemand shiftersโ and โchoice-set shiftersโ.
By taking the derivatives of the conditional probability of joining
a group with respect to all the variables, excluded and non-excluded,
they derived a system of linear equations that relates the marginal
effects in demand and supply. Provided that the system has a unique
solution, ฮด is identified.] Hence, the raw indices ฯ_i=ฯ(z_i,ฮด) are identified.
To identify the extended indices ฯ_i^e=ฯ^e(z_i,g_i,ฮธ),
we need to identify p and ฮฑ in addition to ฮด, which
we discuss below. We start with the following assumption.
(i) p_1=ฮฑ_1=0. (ii) The joint cdf
of ฮพ_i and ฮท_i is strictly increasing.
Assumption <ref>(i) is a location normalization because
the joint distribution of ฮพ_i and ฮท_i is nonparametric.
Assumption <ref>(ii) ensures a one-to-one mapping between
the conditional probability of joining a group and the extended indices.
It is satisfied for a broad variety of distributions such as normal
distributions.
Suppose that ฮด is known.
Under Assumptions <ref>-<ref>, p and
ฮฑ are identified.
The idea of identification is that under exchangeability, the extended
indices for two distinct groups 1 and g have the same impact
on the conditional probability of joining a third group hโ g,1.
Thus, by monotonicity (Assumption <ref>(ii)) the two
extended indices leading to the same conditional probability of joining
h must be equal. We can then recover the difference between the
cutoffs of groups 1 and g, and thus identify p_g under
the location normalization (Assumption <ref>(i)). ฮฑ
can be identified similarly. <cit.> established
the identification of p in a similar setting. By imposing a large
support assumption to shut down the impact of other groups, they identified
p by leveraging the within-group variation. Our identification
result complements theirs by leveraging the between-group variation
under exchangeability.
ยง.ยง Identification of ฮณ
Now we turn to ฮณ. Let ฮฝ_i=ฯต_i-ฮป^e(ฯ_i^e)
represent the residual of ฯต_i after eliminating the selection
bias. Write the i^th equation in (<ref>)
as
y_i=w_iyฮณ_1+w_ixฮณ_2+x'_iฮณ_3+ฮป^e(ฯ_i^e)+ฮฝ_i.
This is a partially linear model <cit.>. By taking
the expectation of equation (<ref>) conditional on ฯ_i^e
and subtracting it from equation (<ref>) we obtain
แปน_i=w_iyฮณ_1+w_ixฮณ_2+xฬ'_iฮณ_3+ฮฝ_i,
where แปน_i=y_i-๐ผ[y_i|ฯ_i^e] and similar
notation applies to other variables. The identification of ฮณ
requires that the support of the regressors w_iy,
w_ix, and xฬ_i cannot be
contained in a proper linear subspace of โ^2d_x+1.
This rank condition is satisfied if and only if there is no linear
combination of w_iy, w_ix, and
x_i that is a function of ฯ_i^e almost surely (Lemma
<ref>). In particular, the condition fails if
w_iy, w_ix, and x_i are linearly
dependent, which is referred to as the reflection problem <cit.>.
Suppose that X_i is a dร1 vector of random variables. The support of X_i-๐ผ[X_i|ฯ_i^e]
is contained in a proper linear subspace of โ^d
if and only if there is a dร1 vector of constants kโ 0
such that k'X_i is a function of ฯ_i^e with probability
1.
ยง.ยง.ยง The reflection problem
To investigate the reflection problem, consider the social equilibrium
in equation (<ref>). Let ฮป^e(ฯ^e)
denote the nร1 vector that stacks ฮป^e(ฯ_i^e),
where ฯ^e=(ฯ_1^e',โฆ,ฯ_n^e')',
and ฮฝ the nร1 vector that stacks ฮฝ_i.
Assume |ฮณ_1|<1 and w_โ=max_iโ๐ฉโ_j=1^n|w_ij|=1,
so I-ฮณ_1w is invertible and (I-ฮณ_1w)^-1=โ_k=0^โฮณ_1^kw^k.
The social equilibrium is
w_iy=w_ixฮณ_3+โ_k=0^โฮณ_1^kw_i^k+2x(ฮณ_1ฮณ_3+ฮณ_2)+โ_k=0^โฮณ_1^kw_i^k+1ฮป^e+โ_k=0^โฮณ_1^kw_i^k+1ฮฝ,
where w_i^k denotes the i^th row of w^k.
The expression implies that w_iy, w_ix,
and x_i are linearly independent if (i) the support of x_i,w_ix,w_i^2x,w_i^3x,โฆ
is not contained in a proper linear subspace of โ^2d_x+1
and ฮณ_1ฮณ_3+ฮณ_2โ 0, or (ii) the support of
x_i,w_ix,w_iฮป^e,w_i^2ฮป^e,โฆ
is not contained in a proper linear subspace of โ^2d_x+1.
Sufficient conditions have been established in the existing literature
for case (i). For example, w_i^2x, w_ix,
and x_i are linearly independent if there is an intransitive
triad in each group <cit.>, or if there
is variation in group sizes when we consider group averages that exclude
oneself <cit.>. However, this identification
strategy fails for group averages that include oneself because w^2=w
<cit.>.[In large groups, the difference between group averages that include
or exclude oneself should be minimal. Consequently, identification
through variation in group sizes may become less effective as groups
increase in size.]
The presence of selection offers an alternative method of identification
through case (ii). It is evident from equation (<ref>) that
the rank condition holds if w_iฮป^e, w_ix,
and x_i are linearly independent and, in view of Lemma <ref>,
w_iฮป^e is not perfectly predictable by ฯ_i^e.
This identification strategy is applicable regardless of whether group
averages include or exclude oneself, or whether there are networks
within groups. The result is consistent with the insightful discovery
of <cit.> that identification
can be achieved through self-selection as long as there is variation
in the selection within a group. Because the selection bias acts as
an individual-level variable whose average is not included in the
contextual effect, its presence precludes w_iy and
w_ix from being linearly dependent.
Near multicollinearity
Although the theory predicts that w_iy and w_ix
are linearly independent in the presence of selection, for group averages
that include oneself we find in simulations that w_iy
and w_ix are nearly collinear, though not perfectly
collinear, when the number of groups is small (e.g., G=5). Note
that w^2=w in this case, so the social
equilibrium reduces to
w_iy = w_ixฮณ_2+ฮณ_3/1-ฮณ_1+w_iฮป^e1/1-ฮณ_1+w_iฮฝ1/1-ฮณ_1.
The near multicollinearity arises because both w_iฮป^e
and w_ix are group-level variables, so they appear
highly collinear when the number of groups is small.[The condition number is a commonly used measure for assessing the
degree of multicollinearity in a dataset. Specifically, the condition
number of a matrix is given by the ratio of its largest and smallest
singular values. Using our simulated samples, we calculate the condition
number of the matrix [wy,wx]'[wy,wx],
where we use the residuals of each w_iy and w_ix
after controlling for x_i and a sieve basis of ฯ_i^e.
For group averages that include oneself and with 5 groups, the
condition number is on average 845, indicating that w_iy
and w_ix are highly collinear.] This near multicollinearity imposes an ill-conditioned problem in
the estimation of ฮณ, thereby leading to a biased estimate.
Next, we propose a novel approach to resolve this issue.
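In practice, the severity of this problem can be gauged directly from the data through the condition number described in the preceding footnote. A minimal sketch is given below; `wy_resid` and `wx_resid` stand for the residuals of w_iy and w_ix after partialling out x_i and a sieve basis of the group formation indices, which the user supplies.

```python
import numpy as np

def condition_number(wy_resid, wx_resid):
    """Condition number of [wy, wx]'[wy, wx] built from residualized regressors.

    A very large value indicates that w_i y and w_i x are nearly collinear,
    so the reflection problem is practically binding even if not exact.
    """
    A = np.column_stack([wy_resid, wx_resid])
    s = np.linalg.svd(A.T @ A, compute_uv=False)
    return s.max() / s.min()
```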
ยง.ยง.ยง Instrumental variables
Our idea is to use the excluded variables z_i in group formation
as instruments for w_iy. The intuition is that z_i
affects the group that individual i joins and thus the average
outcome of i's peers. Because z_i brings in individual-level
variation, the predicted value of w_iy tends to
be linearly independent of w_ix, thereby alleviating
the reflection problem.
Instrument validity
To show that z_i is a valid instrument, let X_i=(w_iy,w_ix,x'_i)'โโ^d_X
denote a vector of regressors and Z_i=(z'_i,w_ix,x'_i)'โโ^d_Z
a vector of instruments.[If z_i overlaps with x_i, we only include the variables
excluded from x_i in z_i.] Rewrite equation (<ref>) as แปน_i=Xฬ'_iฮณ+ฮฝ_i.
Because ๐ผ[ฮฝ_i|x,z,g,w]=๐ผ[ฯต_i|x,z,g,w]-ฮป^e(ฯ_i^e)=0,
Z_i satisfies the exclusion restriction ๐ผ[Z_iฮฝ_i]=0,
thereby yielding the moment condition ๐ผ[Z_i(แปน_i-Xฬ'_iฮณ)]=0
or equivalently ๐ผ[Zฬ_i(แปน_i-Xฬ'_iฮณ)]=0.[Note that ๐ผ[Zฬ_iฮฝ_i]=๐ผ[Zฬ_i๐ผ[ฮฝ_i|x,z,g,w]]=0.]
The textbook literature on instrumental variables <cit.>
suggests that ฮณ is identified if Assumption <ref>
is satisfied because there is a unique ฮณ that satisfies ๐ผ[Zฬ_i(แปน_i-Xฬ'_iฮณ)]=0.
(i) ๐ผ[Zฬ_iZฬ'_i]
has full rank. (ii) ๐ผ[Zฬ_iXฬ'_i] has
full column rank.
Suppose that ฮด is known. Under Assumptions <ref>-<ref>,
ฮณ is identified.
Assumption <ref>(i) is the standard assumption that the
instruments are linearly independent.[The assumption requires that z_i contains at least two variables
that are excluded from x_i. Otherwise, after we partial out ฯ_i^e
and condition on x_i, there would be no additional variation
in z_i that could serve as an instrument.] Assumption <ref>(ii) is the standard rank condition
for identification. It requires that the support of Xฬ_i
is not contained in a proper linear subspace of โ^2d_x+1,
which is guaranteed in the presence of selection as discussed in Section
<ref>. Moreover, the rank condition requires that
the linear projection of w_iy on Zฬ_i
must have a nonzero coefficient for zฬ_i.[Suppose that the linear projection of w_iy
on Zฬ_i is given by ๐[w_iy|Zฬ_i]=zฬ'_iฮฒ_1+w_ixฮฒ_2+xฬ'_iฮฒ_3.
By the property of a linear projection, we have ๐ผ[Zฬ_i(w_iy-๐[w_iy|Zฬ_i])]=0
and thus ๐ผ[Zฬ_iw_iy]=๐ผ[Zฬ_izฬ'_i]ฮฒ_1+๐ผ[Zฬ_iw_ix]ฮฒ_2+๐ผ[Zฬ_ixฬ'_i]ฮฒ_3.
If ฮฒ_1=0, then ๐ผ[Zฬ_iw_iy]
is a linear function of ๐ผ[Zฬ_iw_ix]
and ๐ผ[Zฬ_ixฬ'_i], and the rank condition
in Assumption <ref>(ii) is violated.] In other words, conditional on the controls, the instrument zฬ_i
must be correlated with w_iy, the relevance
condition for an instrument to be valid.
The influence of z_i on w_iy is evident, given
that z_i affects the group that i joins, thereby influencing
the average outcome of i's peers. What is not immediately apparent
is that zฬ_i may still impact w_iy
even after the selection is accounted for. In fact, under exchangeability
the selection bias can be corrected for without fixing the group that
an individual joins. Below we provide an example that individual i
joins different groups under different z_i, but the selection
bias takes the same value regardless of which group i joins. Therefore,
even after correcting for the selection bias, zฬ_i can
still influence the group that i joins and thus w_iy.
Assume two groups (G=2). Suppose that u_ig=ฮฑ_g-z_ig+ฮพ_ig
and v_gi=z_ig+ฮท_gi, g=1,2, where (ฮฑ_1,ฮฑ_2)=(4,5).
The cutoffs are given by (p_1,p_2)=(2,3). Denote z_i=(z_i1,z_i2).
Consider two values z=(z_1,z_2)=(1,5) and z̄=(z̄_1,z̄_2)=(4,2).
Suppose the unobservables (ฮพ_i1,ฮพ_i2,ฮท_1i,ฮท_2i)
satisfy -3<ฮพ_i1-ฮพ_i2<3 and ฮท_1i,ฮท_2i>1; then
individual i joins group 1 if z_i=z and group 2 if z_i=z̄.
In this case, i's extended indices under z and z̄
are equal because ฯ^e=(ฮฑ_1-z_1,p_1-z_1,ฮฑ_2-z_2,p_2-z_2)=(3,1,0,-2)
and ฯ̄^e=(ฮฑ_2-z̄_2,p_2-z̄_2,ฮฑ_1-z̄_1,p_1-z̄_1)=(3,1,0,-2).
Conditional on ฯ_i^e, z_i can still influence the group
i joins.
Example <ref> demonstrates the impact of zฬ_i
on w_iy. What we need further is that
the impact is not through w_ix only.
In fact, equation (<ref>) shows that zฬ_i
can impact w_iy through two channels:
the average characteristics w_ix and
the average selection w_iฮป^e.
Even after controlling for w_ix, zฬ_i
can still operate through w_iฮป^e.
By affecting the group that i joins, zฬ_i can influence
the average selection and outcome of i's peers.
Reducing near multicollinearity
Why does using zฬ_i as an instrument for w_iy
help with the reflection problem? The relevance condition implies
that the predicted value of w_iy is
a function of zฬ_i (see footnote <ref>).
Since zฬ_i contains individual-level variation, it is
unlikely for the predicted value of w_iy
to be collinear with w_ix, thereby alleviating
the reflection problem.
Equation (<ref>) may motivate one to use the average excluded
variables w_iz in group formation as an instrument
for w_iy. However, for group averages
that include oneself, because w_iz is group-specific,
there remains a high degree of collinearity between the predicted
value of w_iy and w_ix
if the number of groups is small. Therefore, using w_iz
as an instrument does not solve the problem.
ยง ESTIMATION
In this section, we propose semiparametric methods for estimating
the parameters. First, we develop a three-step kernel estimator for
the group formation parameters. Building on this, we then propose
a semiparametric three-step GMM estimator for the social interaction
parameters. The estimator involves estimating the selection bias using
sieve, followed by estimating the interaction parameters using GMM.
ยง.ยง Estimating the Group Formation Parameters
In order to construct the extended indices ฯ_i^e=ฯ^e(z_i,g_i,ฮธ_0),
we need to estimate the true parameter ฮธ_0=(ฮด'_0,p^*',ฮฑ'_0)',
where p^* represents the limiting cutoffs. <cit.>
proposed a semiparametric GMM estimator for ฮด_0=(ฮด_0,g^u,ฮด_0,g^v,gโ๐ข).[<cit.>'s estimator is based on average derivatives
<cit.>.] In the following, we propose an estimator for p^* and ฮฑ_0.
We develop a three-step kernel estimator for p^* and ฮฑ_0
that builds upon the identification result in Proposition <ref>.
For simplicity, we focus on p^*, and a similar estimator can
be established for ฮฑ_0. To estimate the cutoff p_g^*
for group gโ 1, we consider the probability that individual i
joins group hโ 1,g conditional on i's qualification index
ฯ_ig^v=z'_iฮด_0,g^v for group g, denoted by
ฯ_h|g(ฯ_ig^v)=(g_i=h|ฯ_ig^v). Similarly,
let ฯ_h|1(ฯ_j1^v)=(g_j=h|ฯ_j1^v) denote
the probability that individual j joins group h conditional
on j's qualification index ฯ_j1^v=z'_jฮด_0,1^v
for group 1. Proposition <ref> states that the
cutoff p_g^* can be identified by the difference ฯ_ig^v-ฯ_j1^v
if ฯ_h|g(ฯ_ig^v)=ฯ_h|1(ฯ_j1^v). Based
on this result, we propose a three-step kernel estimator for p_g^*.
In the first step, we estimate the qualification index ฯ_ig^v
by ฯฬ_ig^v=z'_iฮดฬ_g^v, where ฮดฬ_g^v
is an estimator of ฮด_0,g^v. In the second step, we estimate
the conditional choice probability ฯ_h|g(ฯ) for gโ h
and ฯโโ by a kernel estimator ฯฬ_h|g(ฯ)=โ_i=1^n1{g_i=h}K_1(ฯ-ฯฬ_ig^v/ฮถ_1n)/โ_i=1^nK_1(ฯ-ฯฬ_ig^v/ฮถ_1n),
where K_1(ยท) is a kernel function and ฮถ_1n is a
bandwidth. In the third step, we estimate the cutoff p_g^*
by a kernel estimator
pฬ_g=1/n(n-1)(G-2)โ_i=1^nโ_j=1,jโ i^nโ_h=2,hโ g^G1/ฮถ_2nK_2(ฯฬ_h|g(ฯฬ_ig^v)-ฯฬ_h|1(ฯฬ_j1^v)/ฮถ_2n)(ฯฬ_ig^v-ฯฬ_j1^v),
where K_2(ยท) is a kernel function and ฮถ_2n is a
bandwidth. The kernel function K_2(ยท) approximates the criterion
that the pair (i,j) satisfies ฯ_h|g(ฯ_ig^v)=ฯ_h|1(ฯ_j1^v).
We leverage all the qualifying pairs to achieve the desired โ(n)
rate. As any hโ 1,g can yield an estimator, we take the average
over h to improve efficiency. This estimator is similar in spirit
to a propensity-score matching estimator <cit.>.
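The three steps translate into code in a straightforward way. The sketch below is a schematic implementation that uses Gaussian kernels and rule-of-thumb bandwidths purely for illustration; the assumptions that follow call for higher-order kernels and particular bandwidth rates, which a full implementation would impose.

```python
import numpy as np

def gauss(t):
    """Gaussian kernel (illustrative; higher-order kernels are required by the theory)."""
    return np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)

def estimate_cutoff(z, g, delta_v_hat, target_g, G, h1=None, h2=None):
    """Three-step kernel estimator of the cutoff p_g^* for group `target_g` (labels 1..G).

    z: (n, dim_z) covariates; g: (n,) group labels; delta_v_hat: (G, dim_z)
    estimated qualification coefficients. Bandwidths default to rough rules of thumb.
    """
    n = len(g)
    psi = z @ delta_v_hat.T                       # step 1: indices psi_hat_{i,g} = z_i' delta_hat_g^v
    h1 = h1 or 1.06 * psi.std() * n ** (-1 / 5)
    h2 = h2 or 1.06 * n ** (-1 / 5)

    def ccp(h, col, points):
        # step 2: kernel estimate of P(g_i = h | qualification index for group col+1) at `points`
        K = gauss((points[:, None] - psi[None, :, col]) / h1)
        return (K * (g == h)[None, :]).sum(axis=1) / K.sum(axis=1)

    est, cnt = 0.0, 0
    for h in range(2, G + 1):                     # step 3: average over third groups h != 1, target_g
        if h == target_g:
            continue
        pi_g = ccp(h, target_g - 1, psi[:, target_g - 1])   # pi_hat_{h|g} at i's own index
        pi_1 = ccp(h, 0, psi[:, 0])                          # pi_hat_{h|1} at j's own index
        K2 = gauss((pi_g[:, None] - pi_1[None, :]) / h2) / h2
        np.fill_diagonal(K2, 0.0)                            # exclude pairs with j = i
        diff = psi[:, target_g - 1][:, None] - psi[None, :, 0]
        est += (K2 * diff).sum() / (n * (n - 1))
        cnt += 1
    return est / cnt
```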
To establish the asymptotic properties of pฬ_g, we impose
the following conditions.
[First-Step Kernel]
(i) โซ K_1(t)dt=1, and โซ t^jK_1(t)dt=0 for a positive integer
s_1 and all 0<j<s_1. (ii)
K_1(t) is twice continuously differentiable. (iii) K_1(t)
is zero outside a bounded set.
[Second-Step Kernel]
(i) โซ K_2(t)dt=1, and โซ t^jK_2(t)dt=0 for a positive integer
s_2โฅ s_1 and all 0<j<s_2.
(ii) K_2(t) is twice continuously differentiable. (iii) K_2(t)
is zero outside a bounded set.
[Smoothness]
For any gโ hโ๐ข, (i) the pdf
of ฯ_ig^v is bounded away from zero on the support of ฯ_ig^v,
(ii) the pdf of ฯ_ig^v is (s_2+1)th continuously
differentiable, (iii) the conditional choice probability ฯ_h|g(ยท) is (s_2+2)th
continuously differentiable.
[Bandwidth]
The bandwidths ฮถ_1n,ฮถ_2nโ0
satisfy (i)
(ii) ฮถ_2n^-3r_0โ0 and ฮถ_2n^-2r_1โ0,
(iii) n^1/2ฮถ_1nฮถ_2n^2โโ, (iv) n^1/2ฮถ_1n^s_1โ0,
and (v) n^1/2ฮถ_2n^s_2โ0, where r_0=(ln n)^1/2(nฮถ_1n)^-1/2+ฮถ_1n^s_1
and r_1=(ln n)^1/2(nฮถ_1n^3)^-1/2+ฮถ_1n^s_1.
[Parameter ฮด^v]
(i) The true parameter ฮด_0^v lies in
the interior of a compact set. (ii) ฮดฬ^v-ฮด_0^v=n^-1โ_i=1^nฯ_ฮด(z_i,ฮด_0^v)+o_p(n^-1/2), where ๐ผ[ฯ_ฮด(z_i,ฮด_0^v)]=0 and ๐ผ[ฯ_ฮด(z_i,ฮด_0^v)^2]<โ.
[Bounded Covariates]
(i) The support of z_i is bounded. (ii)
The support of x_i is bounded.
Assumptions <ref> and <ref> impose regularity conditions
on the kernel functions. Assumption <ref> imposes smooth
assumptions on the density of qualification indices and the conditional
choice probabilities. Assumption <ref> assumes that the
covariates are bounded. These assumptions are standard in the kernel
literature <cit.>. Assumption <ref>
restricts the bandwidths to achieve the desired โ(n) rate.[The bandwidth rates in Assumption <ref> are used to
(a) guarantee that the biases introduced by the kernel estimators
are o_p(n^-1/2), (b) limit the impact of the estimation of
a conditional choice probability, and (c) establish a Hoeffding projection
for a V-statistic of degree 3 introduced by the kernel estimators.
Assumption <ref> is satisfied, for example, for s_1=3,
s_2=5, ฮถ_1n=O(n^-11/60), and ฮถ_2n=O(n^-5/48).] Assumption <ref> imposes a restriction on the estimator
ฮดฬ^v, which is satisfied by the estimator proposed
in <cit.>.[For the estimation of ฮฑ, we need to adapt Assumptions <ref>
and <ref> by replacing v with u.]
Theorem <ref> shows that pฬ_g is โ(n)
consistent and asymptotically normal.
Under Assumptions <ref>-<ref>,
<ref>-<ref>, and <ref>(i), we have
โ(n)(pฬ_g-p_g^*)dโN(0,V_g),
where V_g=Var[ฯ_g(z_i,g_i)] and ฯ_g(z_i,g_i)
is defined in the proof.
ยง.ยง Estimating the Social Interaction Parameters
ยง.ยง.ยง Accounting for the symmetry in the selection function
We will now focus on estimating ฮณ. Recall that the selection
function ฮป^e(ยท) exhibits symmetry in the extended indices
for groups other than g_i. By exploiting this symmetry, we can
equivalently represent the selection bias as a function of the elementary
symmetric functions of those extended indices. This allows us to reduce
the number of nuisance parameters and improve estimation efficiency.
Specifically, we denote the elementary symmetric functions of ฯ_i,-g_i^e
as ฯ_i,-g_i^s, which can be represented by the coefficients
of the polynomial function โ_hโ g_i(1+t'ฯ_ih^e)
in the indeterminates t=(t_1,t_2)' <cit.>.[Recall that ฯ_ig^e=(ฯ_ig^e,u,ฯ_ig^e,v)', where
we denote ฯ_ig^e,u=ฮฑ_g+z'_iฮด_g^u and ฯ_ig^e,v=p_g-z'_iฮด_g^v,
gโ๐ข. The first two orders of the elementary symmetric
functions are given by the sums of all individual terms (โ_hโ g_iฯ_ih^e,u
and โ_hโ g_iฯ_ih^e,v) and the sums of all pairwise
products (โ_(h_1,h_2)โ g_iฯ_ih_1^e,uฯ_ih_2^e,u,
โ_(h_1,h_2)โ g_iฯ_ih_1^e,vฯ_ih_2^e,v,
and โ_(h_1,h_2)โ g_iฯ_ih_1^e,uฯ_ih_2^e,v,
where โ_(h_1,h_2)โ g_i denotes the sum over all combinations
of distinct h_1 and h_2 in ๐ข\{g_i}).
The higher-order functions can be derived similarly. ] By the fundamental theorem of symmetric functions in conjunction
with the Weierstrass approximation theorem, any symmetric function
can be approximated arbitrarily closely by a polynomial function of
the elementary symmetric functions <cit.>. Therefore,
there is a function ฮป^s(ยท) such that ฮป^e(ฯ_i^e)=ฮป^s(ฯ_i^s),
where ฯ_i^s=(ฯ_ig_i^e,ฯ_i,-g_i^s).[To further reduce the dimension of ฯ_i^s,
we can drop the utility index ฯ_ig_i^e,u in ฯ_ig_i^e
and replace the utility index ฯ_ih^e,u in ฯ_ih^e
for any hโ g_i with the utility difference index ฯ_ih^e,u-ฯ_ig_i^e,u.]
Using the symmetric representation can reduce the number of nuisance
parameters in a sieve approximation. For instance, consider the ฯ_i^s
that is constructed using utility differences as discussed in footnote
<ref>. If we use linear basis functions, ฮป^e(ฯ_i^e)
has 2G-1 approximating functions, whereas ฮป^s(ฯ_i^s)
has only 3, including one for group g_i and two for the remaining
G-1 groups combined. If we consider basis functions of order two,
ฮป^e(ฯ_i^e) has (2G-1)G approximating functions,
whereas ฮป^s(ฯ_i^s) has 9.[Denote ฮฯ_ih^e,u=ฯ_ih^e,u-ฯ_ig_i^e,u
for hโ g_i. We can represent ฯ_i^e=(ฯ_ig_i^e,v,ฮฯ_ih^e,u,ฯ_ih^e,v,โ hโ g_i)โโ^2G-1.
For ฮป^e(ฯ_i^e), we have (2G-1)G functions of
order two: 2G-1 squared indices and (2G-1)(G-1) pairwise interactions.
For ฮป^s(ฯ_i^s), we have the following 9 functions
of order two: (ฯ_ig_i^e,v)^2, (โ_hโ g_iฮฯ_ih^e,u)^2,
(โ_hโ g_iฯ_ih^e,v)^2, ฯ_ig_i^e,vโ_hโ g_iฮฯ_ih^e,u,
ฯ_ig_i^e,vโ_hโ g_iฯ_ih^e,v, (โ_hโ g_iฮฯ_ih^e,u)(โ_hโ g_iฯ_ih^e,v),
โ_(h_1,h_2)โ g_iฮฯ_ih_1^e,uฮฯ_ih_2^e,u,
โ_(h_1,h_2)โ g_iฯ_ih_1^e,vฯ_ih_2^e,v,
and โ_(h_1,h_2)โ g_iฮฯ_ih_1^e,uฯ_ih_2^e,v.]
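Building ฯ_i^s from ฯ_i^e is a purely combinatorial step. The sketch below computes the first- and second-order elementary symmetric statistics of the other-group blocks, using the utility-difference construction of the preceding footnote; it is one concrete way of generating the sieve regressors, not the only one, and the names are ours.

```python
import numpy as np
from itertools import combinations

def symmetric_features(own_qual, others):
    """Build phi_i^s: own qualification index plus elementary symmetric statistics.

    own_qual: phi_{i g_i}^{e,v}; others: list of (utility-difference index,
    qualification index) pairs, one pair per group h != g_i. Returns the inputs
    to the sieve basis; polynomial functions of these deliver, for instance,
    the nine order-two terms counted in the text.
    """
    du = np.array([o[0] for o in others])         # Delta phi_{ih}^{e,u}
    dv = np.array([o[1] for o in others])         # phi_{ih}^{e,v}
    pairs = list(combinations(range(len(others)), 2))
    first = [du.sum(), dv.sum()]
    second = [
        sum(du[a] * du[b] for a, b in pairs),
        sum(dv[a] * dv[b] for a, b in pairs),
        sum(du[a] * dv[b] + du[b] * dv[a] for a, b in pairs),
    ]
    return np.array([own_qual] + first + second)
```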
ยง.ยง.ยง Estimating ฮณ
Rewrite equation (<ref>) using the symmetric representation
ฮป^s(ฯ_i^s). Partialing out the selection bias yields
y_i-๐ผ[y_i|ฯ_i^s]=(X_i-๐ผ[X_i|ฯ_i^s])'ฮณ+ฮฝ_i.
The true parameter ฮณ_0 satisfies the moment condition
0 = ๐ผ[Z_i(y_i-๐ผ[y_i|ฯ_i^s]-(X_i-๐ผ[X_i|ฯ_i^s])'ฮณ)]
= ๐ผ[(Z_i-๐ผ[Z_i|ฯ_i^s])(y_i-X'_iฮณ)],
where the last equality follows from iterated expectations.[๐ผ[Z_i(y_i-๐ผ[y_i|ฯ_i^s]-(X_i-๐ผ[X_i|ฯ_i^s])'ฮณ]=๐ผ[Z_i(y_i-X'_iฮณ)]-๐ผ[Z_i๐ผ[y_i-X'_iฮณ|ฯ_i^s]]=๐ผ[Z_i(y_i-X'_iฮณ)]-๐ผ[๐ผ[Z_i|ฯ_i^s]๐ผ[y_i-X'_iฮณ|ฯ_i^s]]=๐ผ[(Z_i-๐ผ[Z_i|ฯ_i^s])(y_i-X'_iฮณ)].]
Based on the last moment condition, we propose a semiparametric three-step
GMM estimator for ฮณ_0.
In the first step, we estimate the symmetric indices ฯ_i^s=ฯ^s(z_i,g_i,ฮธ_0)
by ฯฬ_i^s=ฯ^s(z_i,g_i,ฮธฬ), where
ฮธฬ=(ฮดฬ',pฬ',ฮฑฬ')' is an estimator
of ฮธ_0=(ฮด'_0,p^*',ฮฑ'_0)'. In the second
step, we estimate the conditional expectation ฮผ_0^Z(ฯ_i^s)=๐ผ[Z_i|ฯ_i^s]
by a sieve estimator ฮผฬ^Z(ฯฬ_i^s). Specifically,
we construct a Kร1 vector of approximating functions, denoted
by b^K(ฯ_i^s)=(b_1K(ฯ_i^s),โฆ,b_KK(ฯ_i^s))',
for each individual i, and an nร K matrix of all approximating
functions, denoted by B_K(ฯ^s)=(b^K(ฯ_1^s),โฆ,b^K(ฯ_n^s))',
where ฯ^s=(ฯ_1^s',โฆ,ฯ_n^s')'.
We then estimate ฮผ_0^Z(ฯ_i^s) by the predicted value
ฮผฬ^Z(ฯฬ_i^s) obtained from regressing Z=(Z_1,โฆ,Z_n)'
on the estimated approximating functions Bฬ_K=(b^K(ฯฬ_1^s),โฆ,b^K(ฯฬ_n^s))',
that is, ฮผฬ^Z(ฯฬ_i^s)=Z'Bฬ_K(Bฬ_K'Bฬ_K)^-1b^K(ฯฬ_i^s).
In the third step, we estimate the parameter ฮณ_0 using GMM.
Define the moment function m(ฯ_i,ฮณ,ฮผ_0^Z(ฯ_i^s))=(Z_i-ฮผ_0^Z(ฯ_i^s))(y_i-X'_iฮณ),
where ฯ_i=(y_i,X'_i,Z'_i)'. We denote its population
mean and sample analogue by m_0(ฮณ,ฮผ_0^Z)=๐ผ[m(ฯ_i,ฮณ,ฮผ_0^Z(ฯ_i^s))]
and mฬ_n(ฮณ,ฮผฬ^Z)=1/nโ_i=1^nm(ฯ_i,ฮณ,ฮผฬ^Z(ฯฬ_i^s)).
Let W be a weighting matrix and ลด a consistent estimator
of W. We obtain a GMM estimator ฮณฬ by solving the
problem min_ฮณโฮmฬ_n(ฮณ,ฮผฬ^Z)'ลดmฬ_n(ฮณ,ฮผฬ^Z).
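Because the moments are linear in ฮณ, the third step has a closed form. The sketch below implements the three steps schematically: it takes the estimated symmetric indices as given through the basis matrix and ignores the first- and second-step estimation error, which the asymptotic variance in Theorem <ref> does account for; variable names are ours.

```python
import numpy as np

def sieve_gmm(y, X, Z, B, W=None):
    """Semiparametric three-step GMM for gamma.

    y: (n,) outcomes; X: (n, d_X) regressors (w_i y, w_i x, x_i); Z: (n, d_Z)
    instruments (z_i, w_i x, x_i); B: (n, K) sieve basis evaluated at the
    estimated symmetric indices; W: weighting matrix, identity by default.
    """
    n, d_Z = Z.shape
    W = np.eye(d_Z) if W is None else W
    # Step 2: residualize the instruments on the sieve basis, Z_tilde = Z - mu_hat^Z
    P = B @ np.linalg.pinv(B.T @ B) @ B.T
    Zt = Z - P @ Z
    # Step 3: linear GMM based on the moments (1/n) * Zt'(y - X gamma)
    A = (X.T @ Zt / n) @ W @ (Zt.T @ X / n)
    b = (X.T @ Zt / n) @ W @ (Zt.T @ y / n)
    return np.linalg.solve(A, b)
```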
With the addition of the following assumptions, we show in Theorems
<ref> and <ref> that ฮณฬ
is โ(n) consistent and asymptotically normal.
[Parameter ฮธ]
(i) The true parameter ฮธ_0 lies in the
interior of a compact set ฮ. (ii) ฮธฬ-ฮธ_0=n^-1โ_i=1^nฯ_ฮธ(z_i,ฮธ_0)+o_p(n^-1/2),
where ๐ผ[ฯ_ฮธ(z_i,ฮธ_0)]=0 and ๐ผ[ฯ_ฮธ(z_i,ฮธ)^2]<โ.
[Sieve]
Let Kโโ and K/nโ0.
The basis functions b^K(ฯ)โโ^K satisfy the following
conditions. (i) ๐ผ[b^K(ฯ)b^K(ฯ)']=I_K. (ii)
There exist ฮฒ^Z and a constant a>0 such that sup_ฯ|ฮผ_0^Z(ฯ)-b^K(ฯ)'ฮฒ^Z|=O(K^-a).
(iii) sup_ฯb^K(ฯ)โคฯฑ_0(K)
for constants such that ฯฑ_0(K)^2K/nโ0.
(iv) sup_ฯโ b^K(ฯ)/โฯ'โคฯฑ_1(K)
for constants such that ฯฑ_1(K)/โ(n)โ0.
[Adjacency Matrix]
The adjacency matrix w=(w_ij)โโ_+^n^2
satisfies the following conditions. (i) w_โ=max_iโ๐ฉโ_j=1^n|w_ij|=1.
(ii) ๐ผ[(max_i,jโ๐ฉ|w_ij|)^4]=O(n^-4).
(iii) There exist i.i.d. ฯ_i, iโ๐ฉ, such
that (a) (x,z,g) is a function
of ฯ=(ฯ_i,iโ๐ฉ), (b)
๐ผ[ฮฝ_i|ฯ]=0, and (c) w
is independent of ฮฝ conditional on ฯ.
(iv) For h_ij=h(ฯ_i,ฯ_j)โโ such
that max_i,jโ๐ฉ|h_ij|<โ, max_r,rฬโฅ1max_i,j,k.lโ๐ฉ:{i,j}โฉ{k,l}=โ
|Cov(h_ij((w^t)'w^r)_ij,h_kl((w^t)'w^rฬ)_kl)|=o(n^-2),
t=0,1. (v) For K that satisfies Assumption <ref>,
max_i,j,k,lโ๐ฉ:{i,j}โฉ{k,l}=โ
๐ผ[Cov(w_ij,w_kl|ฯ)^2]=o(n^-4/K)
and max_i,jโ๐ฉ๐ผ[(๐ผ[w_ij|ฯ]-๐ผ[w_ij|ฯ_i,ฯ_j])^4]=o(n^-4/K^2).
(vi) For overlapping {i,j} and {k,l}, max_i,j,k,lโ๐ฉ:|{i,j}โฉ{k,l}|=1๐ผ[Cov(w_ij,w_kl|ฯ)^2]=o(n^-4).
[GMM]
(i) ลดpโW. W is
positive semi-definite and bounded, and Wm_0(ฮณ,ฮผ_0^Z)โ 0
for all ฮณโ ฮณ_0. (ii) The true parameter ฮณ_0
lies in the interior of a compact set ฮ. (iii) M'_nWM_n
is nonsingular for M_n=๐ผ[โmฬ_n(ฮณ_0,ฮผ_0^Z)/โฮณ'].
[Smoothness]
(i) The unobservable ฯต_i has finite
fourth moment. (ii) For any ฮธโฮ, ๐ผ[Z_i|ฯ^s(z_i,g_i,ฮธ)]
and ๐ผ[ฯต_i|ฯ^s(z_i,g_i,ฮธ)] are continuously
differentiable in ฯ^s(z_i,g_i,ฮธ).
Under Assumptions <ref>โ<ref>
and <ref>โ<ref>, ฮณฬ-ฮณ_0=o_p(1).
Under Assumptions <ref>โ<ref>
and <ref>โ<ref>, โ(n)ฮฃ_n^-1/2(ฮณฬ-ฮณ_0)dโN(0,I_d_ฮณ),
where the variance ฮฃ_n is defined in the proof.
Assumption <ref>(i) requires that the true parameter
ฮธ_0 is bounded. Although we set a cutoff to -โ if
the capacity is not binding, such a cutoff does not affect individuals'
choices and is excluded from ฮธ (footnote <ref>).[Strictly speaking, we also need to assume that any market-clearing
cutoff is bounded away from -โ. Although the demand and supply
for a group may be equal at a cutoff of -โ, the group's capacity
must take a particular value for that to occur. This is because a
cutoff of -โ no longer makes the group selective, and the
demand is solely determined by the number of individuals who do not
prefer or qualify for any other group. In general, a market-clearing
cutoff of -โ yields a system of equations with more market-clearing
equations than unknowns (cutoffs), which typically has no solution.
We assume that this unlikely case is ruled out.] Assumption <ref>(ii) imposes a restriction on the estimator
ฮธฬ, which is satisfied by Assumption <ref>
and Theorem <ref>. Assumption <ref> imposes standard
regularity conditions for the sieve estimation. Part (i) is a normalization.[Alternatively, we can assume that the smallest eigenvalue of ๐ผ[b^K(ฯ)b^K(ฯ)']
is bounded away from zero uniformly in K. Assuming this, let Q_0=๐ผ[b^K(ฯ)b^K(ฯ)']
and Q_0^-1/2 the symmetric square root of Q_0^-1. Then
bฬ^K(ฯ)=Q_0^-1/2b^K(ฯ) is a nonsingular transformation
of b^K(ฯ) that satisfies ๐ผ[bฬ^K(ฯ)bฬ^K(ฯ)']=I_K.
Notably, nonparametric series estimators are invariant under nonsingular
transformations of b^K(ฯ): let ฮฒฬ^Z=Q_0^1/2ฮฒ^Z
then bฬ^K(ฯ)'ฮฒฬ^Z=b^K(ฯ)'ฮฒ^Z.
Furthermore, bฬ^K(ฯ) satisfies Assumption <ref>(iii)(iv)
if and only if b^K(ฯ) does. Therefore, all parts of Assumption
<ref> are satisfied with b^K(ฯ) replaced by bฬ^K(ฯ)
<cit.>.] Parts (ii)โ(iv) impose rate conditions on the basis functions that
are similar to those used in the literature (;
). Assumption <ref> imposes the standard
conditions for GMM.[Both the identification condition in part (i) and part (iv) can be
satisfied if the weighting matrix W is nonsingular and if Assumption
<ref>(ii) is satisfied with ฯ_i^s in place
of ฯ_i^e.] Assumption <ref> is used to analyze the impact of the
sieve estimator.
To derive the asymptotic distribution of ฮณฬ, we apply
<cit.>'s approach to account for the impact of the first-
and second-step estimation. A major challenge in the asymptotic analysis
is that the adjacency matrix w is endogenous and can
be correlated with the observables x and z.
Unlike the literature that assumes a predetermined w,
we generalize the methods for weighted ๐-statistics <cit.>
to address the endogeneity in group memberships, where the weights
can be random due to networks within groups. To achieve the desired
โ(n) rate in the presence of random weights, we provide in
Assumption <ref> a set of sufficient conditions on w
to ensure that the dependence through networks vanishes sufficiently
fast as n grows large.[Specifically, part (i) is a row-sum normalization. Part (ii) assumes
that w is dense in the sense that each entry of w
vanishes at the rate of n^-1. Part (iii) is satisfied by construction
if ฯ=(x,z,g),
or if ฯ includes additional individual heterogeneity
in network formation that is independent of ฯต
(Appendix <ref>). Part (iv) restricts the dependence
between two entries of a higher-order polynomial of w
that have disjoint subscripts. Part (v) imposes further rate conditions
on the dependence between two entries of w with disjoint
subscripts, which are necessary to establish the consistency of the
sieve estimator. Part (vi) restricts the dependence between two entries
of w with overlapping subscripts, which is used to
obtain the first-order Hoeffding projection of a weighted ๐-statistic
in deriving the asymptotic distribution of ฮณฬ.] In Appendix <ref>, we verify that Assumption <ref>
is satisfied for group averages, dyadic networks with fixed effects
<cit.>, and strategic networks of incomplete information
under restrictions (; ).[It is possible that there exist alternative sufficient conditions
on the adjacency matrix that can accommodate sparsity and/or more
complicated strategic interactions in networks within groups <cit.>.
However, exploring these alternatives is outside the scope of this
paper and will be left for future research.]
ยง SIMULATIONS
ยง.ยง Setup
In this section, we evaluate our approach in a simulation study. We
generate a market of 2,000 individuals who interact based
on the linear-in-means model in (<ref>). We assume
that both x_i and ฯต_i are i.i.d., where x_iโผ N(5,25)
and ฯต_iโผ N(0,1), and they are independent of each other.
We set the parameter ฮณ=(ฮณ_1,ฮณ_2,ฮณ_3)
to (0.5,1,1).
The market consists of five potential groups, with capacities of 340,
320, 340, 320, and 340 seats, for a total of 1,660 seats. Individuals
choose which group to join as described in Section <ref>.
The utility and qualification of individual i for group g=1,โฆ,5
are specified as u_ig=ฮฑ_g+ฮด_1^uz_1,ig^u+ฮด_2^uz_2,i+ฮพ_ig and v_gi=ฮด_1^vz_1,ig^v+ฮด_2^vz_2,i+ฮท_gi,
where ฮฑ=(ฮฑ_1,ฮฑ_2,ฮฑ_3,ฮฑ_4,ฮฑ_5)=(9,6,4,2,0)
and ฮด=(ฮด_1^u,ฮด_2^u,ฮด_1^v,ฮด_2^v)=(-1,1,1,1).
We also allow for an individual to not join any group, in which case
her utility is u_i0=ฮพ_i0. The pair-specific characteristics
z_1,ig^u and z_1,ig^v are i.i.d. across i
and g with the distribution N(0,9), while the individual-specific
characteristic z_2,i is i.i.d. with a distribution that is correlated
with x_i. Specifically, we set z_2,i=log(x_i+q_i+20),
where q_i is i.i.d. N(0,4). Denote z_i=(z_1,ig^u,z_1,ig^v,z_2,i).
We assume that the utility shock ฮพ_ig is i.i.d. across i
and g=0,โฆ,5 with the type I extreme value distribution, and
the qualification shock ฮท_gi is i.i.d. with the distribution
N(ฯต_i,1). Note that ฮท_gi is correlated with ฯต_i,
thereby leading to endogenous groups. We calculate stable groups using
the individual-proposing Deferred-Acceptance algorithm <cit.>.[Because the number of individuals is larger than the number of available
seats, in our simulated markets all the five groups have binding capacities.]
We consider two specifications of the adjacency matrix w.
The first specification takes the average among all group members
including oneself; the second specification takes the average among
friends in the same group, where friendships are generated independently
with a constant probability of 0.5. We estimate the parameter ฮณ
using data from one market and repeat the experiment independently
200 times to calculate the mean biases, standard errors, and root
MSE of the estimates.[Appendix <ref> discusses the estimation of the group
formation parameters.]
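A condensed version of this data-generating process, including the individual-proposing Deferred-Acceptance step and the including-oneself group-average specification of w, is sketched below. Parameter values follow the text; the handling of the outside option and of unmatched individuals is simplified, so the sketch is illustrative rather than a replication script.

```python
import numpy as np

def simulate_market(n=2000, seed=0):
    """One simulated market (simplified sketch of the design in this section)."""
    rng = np.random.default_rng(seed)
    G, cap = 5, np.array([340, 320, 340, 320, 340])
    alpha = np.array([9.0, 6.0, 4.0, 2.0, 0.0])
    d1u, d2u, d1v, d2v = -1.0, 1.0, 1.0, 1.0
    g1, g2, g3 = 0.5, 1.0, 1.0                          # (gamma_1, gamma_2, gamma_3)

    x = rng.normal(5, 5, n)                             # x_i ~ N(5, 25)
    eps = rng.standard_normal(n)
    z2 = np.log(x + rng.normal(0, 2, n) + 20)           # z_{2,i}, correlated with x_i
    z1u = rng.normal(0, 3, (n, G))                      # z_{1,ig}^u ~ N(0, 9)
    z1v = rng.normal(0, 3, (n, G))                      # z_{1,ig}^v ~ N(0, 9)
    xi = rng.gumbel(size=(n, G + 1))                    # type I EV shocks; column G is the outside option
    eta = rng.normal(eps[:, None], 1, (n, G))           # eta_gi ~ N(eps_i, 1): endogenous selection

    u = alpha + d1u * z1u + d2u * z2[:, None] + xi[:, :G]   # utilities u_ig
    v = d1v * z1v + d2v * z2[:, None] + eta                 # qualifications v_gi

    # Individual-proposing Deferred Acceptance with capacity constraints
    pref = np.argsort(-u, axis=1)
    nxt = np.zeros(n, dtype=int)                        # next position on each proposer's list
    held = [[] for _ in range(G)]
    free = list(range(n))
    while free:
        i = free.pop()
        while nxt[i] < G:
            grp = pref[i, nxt[i]]; nxt[i] += 1
            if u[i, grp] < xi[i, G]:                    # outside option beats grp and all lower-ranked groups
                break
            held[grp].append(i)
            if len(held[grp]) <= cap[grp]:
                break                                   # tentatively accepted
            held[grp].sort(key=lambda j: -v[j, grp])    # keep the cap[grp] most qualified proposers
            reject = held[grp].pop()
            if reject != i:
                free.append(reject)                     # a previously held proposer is bumped
                break
            # i herself was rejected: keep proposing down her list
    assigned = np.full(n, -1)                           # -1 marks the outside option / unmatched
    for grp in range(G):
        for i in held[grp]:
            assigned[i] = grp

    # Outcomes from the linear-in-means model with including-oneself group averages;
    # unmatched individuals simply receive no peer term in this sketch.
    W = np.zeros((n, n))
    for grp in range(G):
        m = assigned == grp
        if m.any():
            W[np.ix_(m, m)] = 1.0 / m.sum()
    y = np.linalg.solve(np.eye(n) - g1 * W, W @ x * g2 + x * g3 + eps)
    return dict(x=x, z2=z2, z1u=z1u, z1v=z1v, eps=eps, assigned=assigned, W=W, y=y)
```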
ยง.ยง Results
Table <ref> presents the results in the absence
of endogenous social interactions (ฮณ_1=0). For group averages
(Panel A), the OLS estimate of ฮณ_2 is upward biased (Column
1), indicating a selection bias. Although controlling for school fixed
effects (FE) reduces this bias, it cannot eliminate it entirely (Column
2). In Column 3, we construct a polynomial series of the symmetric
indices of group formation (up to order two) to correct for the selection
bias. The sieve OLS estimates of ฮณ are unbiased, suggesting
that the series based on group formation indices is an effective correction.
The specification with networks (Panel B) produces similar results:
both OLS and OLS with school FE yield biased estimates (Columns 4
and 5), whereas sieve OLS yields unbiased estimates (Column 6). In
addition, sieve OLS has smaller standard errors and RMSE compared
to OLS and OLS with school FE.
The results in the presence of endogenous social interactions (ฮณ_1โ 0)
are presented in Table <ref>. For group averages
(Panel A), both OLS and OLS with school FE yield heavily biased estimates
of ฮณ_1 and ฮณ_2 due to the selection and reflection
problem (Columns 1 and 2). Although sieve OLS using the polynomial
series corrects for the selection bias, it does not eliminate the
bias caused by the reflection problem (Column 3). To address this
issue, in addition to the selection correction, we use an instrument
for w_iy and estimate ฮณ by sieve 2SLS. Column
4 shows that using the average excluded variables in group formation
w_iz as instruments for w_iy does
not solve the problem: the estimates of ฮณ are almost identical
to those obtained from sieve OLS. In fact, the F-test for the instruments
w_iz indicates that w_iy and w_iz
are highly collinear, rendering sieve OLS and sieve 2SLS with instruments
w_iz equivalent. In contrast, using one's own excluded
variables in group formation z_i as instruments for w_iy
alleviates the reflection problem, yielding unbiased estimates of
ฮณ (Column 5). Moreover, the F-test suggests that z_i
is correlated with w_iy, and the over-identification
test confirms the exogeneity of z_i. These results support the
validity of using z_i as instruments for w_iy.[Sieve 2SLS with instruments z_i yields larger standard errors
and thus larger RMSE compared to both sieve OLS and sieve 2SLS with
instruments w_iz (Columns 3-5). This finding is
consistent with the commonly observed pattern that 2SLS tends to yield
larger standard errors than OLS.]
For networks (Panel B), the estimates obtained from OLS and OLS with
school FE remain biased (Columns 1 and 2). However, after applying
the selection correction, sieve OLS, sieve 2SLS with instruments w_iz,
and sieve 2SLS with instruments z_i all yield unbiased estimates
(Columns 3โ5).[The F-tests and over-identification tests confirm the validity of
w_iz and z_i as the instruments.] This finding suggests that correcting for the selection bias is sufficient
to produce unbiased estimates, as the individual-level variation in
w_iy and w_ix can effectively
resolve the reflection problem.
ยง CONCLUSION
In this paper, we explore linear-in-means social interactions with
endogenous group formation. We adopt a two-sided many-to-one matching
framework to characterize group formation and employ the limiting
approximation of a market to simplify the selection bias. By assuming
exchangeable unobservables in group formation, we establish that the
selection bias can be represented as a group-invariant nonparametric
function of an individual's preference and qualification indices in
group formation. Furthermore, we demonstrate that the excluded variables
in group formation can serve as instruments to solve the reflection
problem. Our approach is applicable to various adjacency matrices,
regardless of whether our interest lies in peer effects among all
group members or specific friends.
ยง APPENDIX
ยง.ยง Adjacency Matrix: Examples
In this section, we verify Assumption <ref> for adjacency matrices
that are commonly used in the literature. All the proofs are provided
in Appendix <ref>.
[Group averages that include oneself]
Suppose that w represents group
averages that include oneself and the group capacities are binding.
We can write w_ij=โ_g=1^G1/n_g1{g_i=g}1{g_j=g}.
By construction, w_โ=max_iโ๐ฉโ_j=1^n|w_ij|=1
and w_โ=max_i,jโ๐ฉ|w_ij|โค1/min_gโ๐ขn_g=1/nmin_gโ๐ขr_g,
where r_g=n_g/n>0 for gโ๐ข. Hence, ๐ผ[w_โ^4]โค1/n^4min_gโ๐ขr_g^4=O(n^-4)
and Assumption <ref>(i) and (ii) are satisfied. Because n_g
is a constant, w_ij is a function of g_i and g_j โ
once we know the groups that i and j join, we know w_ij.
In this case, w is a function of ฯ=(x,z,g)
and w_ij depends on ฯ only through ฯ_i
and ฯ_j. Therefore, Assumption <ref>(iii), (v),
and (vi) are satisfied. Furthermore, for group averages that include
oneself we have w'=w and w^2=w.[For any i,jโ๐ฉ, (w^2)_ij=โ_k=1^nw_ikw_kj=โ_k=1^n(โ_g=1^G1/n_g1{g_i=g}1{g_k=g})(โ_g=1^G1/n_g1{g_k=g}1{g_j=g})=โ_k=1^nโ_g=1^G1/n_g^21{g_i=g}1{g_j=g}1{g_k=g}=โ_g=1^G1/n_g1{g_i=g}1{g_j=g}=w_ij,
where we have used n_g=โ_k=1^n1{g_k=g}.] Because h_ijw_ij and h_klw_kl are independent for disjoint
{i,j} and {k,l}, Assumption <ref>(iv) is satisfied.
[Group averages that exclude oneself]
Suppose that w represents group
averages that exclude oneself and the group capacities are binding.
We have w_ij=โ_g=1^G1/n_g-11{g_i=g}1{g_j=g}
for iโ j and w_ii=0. Similarly as in Example <ref>,
we can show that Assumption <ref>(i)-(iii), (v) and (vi) are
satisfied with ฯ=(x,z,g).
To verify Assumption <ref>(iv), note that the (i,j) element
of w^r for rโฅ1 takes the form (w^r)_ij=โ_g=1^Gc_ij,g(r)1{g_i=g}1{g_j=g},
where c_ij,g(r) is a constant that only depends on r and n_g.[For example, c_ij,g(2)=n_g-2/(n_g-1)^2 for iโ j
and c_ii,g(2)=1/n_g-1.] This structure suggests that (w^r)_ij is a function
of g_i and g_j. Hence, h_ij(w^r)_ij
and h_kl(w^rฬ)_kl are independent for
disjoint {i,j} and {k,l} and Assumption <ref>(iv)
is satisfied.
[Dyadic networks]
Suppose that individuals in a group form additional
connections (for example, schoolmates make friends). Let d_ij,g
denote an indicator for whether individuals i and j are connected
in group g and d_i,g=โ_j=1^nd_ij,g1{g_j=g} the
number of connections that i has in group g. Suppose that no
individual is isolated so d_i,g_i>0 for all iโ๐ฉ.
Typically, we specify w_ij=โ_g=1^Gd_ij,g/d_i,g1{g_i=g}1{g_j=g}
โ if both i and j join group g, then j's weight on i
depends on whether j is connected to i, normalized by the number
of connections that i has in the group.
Following the literature on dyadic network formation with fixed effects
(; ), we specify
d_ij,g=1{f_g(x_i,x_j,a_i,a_j)โฅฯ_ij} for iโ j,
and d_ii,g=0, where a_iโโ and ฯ_ijโโ
represent individual- and pair-specific unobserved heterogeneity.
Without loss of generality we normalize ฯ_ijโผ U[0,1] and
assume 0โค f_gโค1. Denote a=(a_i,iโ๐ฉ)
and ฯ=(ฯ_ij,i,jโ๐ฉ). Assume that
(a) both a_i and ฯ_ij are i.i.d., (b) ฯ
is independent of (x,z,g,a),
and (c) (a,ฯ) is independent of ฯต
conditional on (x,z,g).
The last is consistent with Assumption <ref> โ conditional
on (x,z,g), w
is a function of (a,ฯ) and is thus
independent of ฯต. Let ฯ=(x,z,g,a),
where ฯ_i=(x_i,z_i,g_i,a_i) is i.i.d. across
i. Note that 1/n-1๐ผ[d_i,g|ฯ_i]=1/n-1โ_jโ i๐ผ[d_ij,g1{g_j=g}|ฯ_i]=๐ผ[f_g(x_i,x_j,a_i,a_j)1{g_j=g}|ฯ_i].
We assume that min_iโ๐ฉmin_gโ๐ข๐ผ[f_g(x_i,x_j,a_i,a_j)1{g_j=g}|ฯ_i]โฅ c>0.
For w in Example <ref>,
Assumption <ref> is satisfied with ฯ=(x,z,g,a).
[Group averages, continued]
Examples <ref> and <ref> assume that
the group capacities are binding. If a group has an infinite capacity
(as in one-sided group formation) or does not reach its capacity,
then the number of members in that group is determined endogenously.
This setting can be regarded as a special case of Example <ref>,
where we set d_ij,g=1 for all i,jโ๐ฉ (including-oneself
averages) or d_ij,g=1 for all iโ j and d_ii,g=0 (excluding-oneself
averages). Similarly as in Lemma <ref>, we can show that
w satisfies Assumption <ref> with ฯ=(x,z,g).
[Strategic networks]
Follow the setup in Example <ref>.
To account for strategic network formation, we specify instead d_ij,g=1{f_g(x_i,x_j,a_i;x)โฅฯ_ij}
for iโ j, and d_ii,g=0, where
a_i and ฯ_ij are specified as in Example <ref>.
The specification is motivated by strategic network formation with
incomplete information (; ),
where we assume that x is publicly observed by all
the individuals, and a_i and ฯ_i=(ฯ_ij,jโ i)
are privately observed by individual i. The presence of x
is to capture the equilibrium effect that results from strategic interactions.
We impose the same assumptions on a and ฯ
as in Example <ref> and set ฯ=(x,z,g,a).
Note that ๐ผ[d_i,g|ฯ]=โ_jโ if_g(x_i,x_j,a_i;x)1{g_j=g}.
We assume that min_i,jโ๐ฉmin_gโ๐ขf_g(x_i,x_j,a_i;x)โฅ c>0.
Suppose that the model has a limiting approximation in the sense that
for each gโ๐ข we have max_i,jโ๐ฉ๐ผ[(f_g(x_i,x_j,a_i;x)-f_g^*(x_i,x_j,a_i))^4]=O(n^-2)
for some function 0โค f_g^*โค1. We assume min_i,jโ๐ฉmin_gโ๐ขf_g^*(x_i,x_j,a_i)โฅ c^*>0.[<cit.> demonstrated the existence
of a limiting approximation.]
For w in Example <ref>,
Assumption <ref> is satisfied with ฯ=(x,z,g,a).
ยง.ยง Proofs
*Notation
Let ยท be the Euclidean norm. For an nร1 vector
xโโ^n and an nร n matrix Aโโ^n^2,
x=(โ_i=1^nx_i^2)^1/2 and A=(tr(AA'))^1/2=(โ_i=1^nโ_j=1^na_ij^2)^1/2.
ยง.ยง.ยง Proofs in Sections <ref>โ<ref>
Following <cit.>, we can show that p_npโp^*
as nโโ, and the limiting cutoffs p^* are deterministic.
The selection bias ๐ผ[ฯต_i|x,z,g(z,ฮพ,ฮท;p)]
is continuous in p because the cdf of the unobservables is continuous
under Assumption <ref>(ii). Therefore, by the continuous
mapping theorem, we have ๐ผ[ฯต_i|x,z,g(z,ฮพ,ฮท;p_n)]pโ๐ผ[ฯต_i|x,z,g(z,ฮพ,ฮท;p^*)].
The selection bias evaluated at p^* satisfies
๐ผ[ฯต_i|x,z,g(z,ฮพ,ฮท;p^*)] = ๐ผ[ฯต_i|x_i,x_-i,z_i,z_-i,g_i(z_i,ฮพ_i,ฮท_i;p^*),g_-i(z_-i,ฮพ_-i,ฮท_-i;p^*)]
= ๐ผ[ฯต_i|x_i,z_i,g_i(z_i,ฮพ_i,ฮท_i;p^*)],
where x_-i=(x_j,j i) and z_-i,
g_-i, ฮพ_-i, ฮท_-i
are defined analogously. The last equality follows because given deterministic
cutoffs p^*, g_j only depends on z_j, ฮพ_j, and
ฮท_j for all jโ i, which are independent of ฯต_i
under Assumption <ref>(i). Combining the results proves
the proposition.
Let f(ฯต_i,ฮพ_i,ฮท_i) denote the joint pdf of ฯต_i,
ฮพ_i, and ฮท_i, and f(ฮพ_i,ฮท_i) the joint
pdf of ฮพ_i and ฮท_i. By equation (<ref>) and
the exogeneity of x_i and z_i (Assumption <ref>(iii)),
individual i's selection bias from joining group g is ๐ผ[ฯต_i|x_i,z_i,g_i=g]=โซ_R_g(ฯ_i)ฯต_if(ฯต_i,ฮพ_i,ฮท_i)dฯต_idฮพ_idฮท_i/โซ_R_g(ฯ_i)f(ฮพ_i,ฮท_i)dฮพ_idฮท_i=:ฮป_g(ฯ_i),
where R_g(ฯ_i) denotes the set {(ฮพ'_i,ฮท'_i)'โโ^2G:ฮท_giโฅ p_g-z'_iฮด_g^v, and โ hโ g, ฮพ_ih-ฮพ_ig<z'_i(ฮด_g^u-ฮด_h^u) or ฮท_hi<p_h-z'_iฮด_h^v}.
We focus on p in the proof and the identification of ฮฑ
can be established similarly. Suppose that our goal is to identify
the cutoff p_g of group gโ 1. Take another group hโ g,1.
Let ฯ_h|g^e(p_g-ฯ_ig^v):=(g_i=h|p_g-ฯ_ig^v)=(g_i=h|ฯ_ig^v)
denote the conditional probability that individual i joins group
h given her qualification index for group g. The last equality
holds because p_g is a constant, so conditioning on p_g-ฯ_ig^v
is the same as conditioning on ฯ_ig^v. Similarly, let ฯ_h|1^e(p_1-ฯ_j1^v):=(g_j=h|p_1-ฯ_j1^v)=(g_j=h|ฯ_j1^v).
By equation (<ref>) and Assumption <ref>, ฯ_h|g^e(ยท)
and ฯ_h|1^e(ยท) have the same functional form, that
is, ฯ_h|g^e(ยท)=ฯ_h|1^e(ยท)=:ฯ_h^e(ยท).
Moreover, because the unobservables have a strictly increasing cdf
(Assumption <ref>(ii)), ฯ_h^e(ยท) is
strictly monotone. Therefore, for any i and j such that 0<(g_i=h|ฯ_ig^v)=(g_j=h|ฯ_j1^v)<1,
because (g_i=h|ฯ_ig^v)=ฯ_h^e(p_g-ฯ_ig^v)
and (g_j=h|ฯ_j1^v)=ฯ_h^e(p_1-ฯ_j1^v),
we obtain p_g-ฯ_ig^v=p_1-ฯ_j1^v. This, together
with Assumption <ref>(i), implies that p_g=ฯ_ig^v-ฯ_j1^v
is identified.
Suppose that the support of X_i-๐ผ[X_i|ฯ_i^e]
is contained in a proper linear subspace of โ^d. There
is a dร1 vector of constants kโ 0 such that k'(X_i-๐ผ[X_i|ฯ_i^e])=k'X_i-๐ผ[k'X_i|ฯ_i^e]=0
with probability 1. Because ๐ผ[k'X_i|ฯ_i^e]
is a function of ฯ_i^e, k'X_i is a function of ฯ_i^e
with probability 1.
To show the reverse, let kโ 0 be the dร1 vector of constants
such that k'X_i is a function of ฯ_i^e with probability
1. This implies that with probability 1 we have ๐ผ[k'X_i|ฯ_i^e]=k'X_i
and thus k'(X_i-๐ผ[X_i|ฯ_i^e])=0. Therefore,
the support of X_i-๐ผ[X_i|ฯ_i^e] is contained
in a proper linear subspace of โ^d.
Write pฬ_g=(G-2)^-1โ_h=2,hโ g^Gpฬ_g,h, where
pฬ_g,h = 1/n(n-1)โ_i=1^nโ_j=1,jโ i^n1/ฮถ_2nK_2(ฯฬ_h|g(ฯฬ_ig^v)-ฯฬ_h|1(ฯฬ_j1^v)/ฮถ_2n)(ฯฬ_ig^v-ฯฬ_j1^v)
for hโ 1,g. To derive the asymptotic distribution of pฬ_g,
we derive the influence function of each pฬ_g,h, and take
their average to obtain the influence function of pฬ_g.
Define ฯฬ_i^v=(ฯฬ_ig^v,gโ๐ข),
ฯฬ_h=(ฯฬ_h|g,ฯฬ_h|1), and the
function
h_gn(ฯฬ_i^v,ฯฬ_j^v;ฯฬ_h) = 1/ฮถ_2nK_2(ฯฬ_h|g(ฯฬ_ig^v)-ฯฬ_h|1(ฯฬ_j1^v)/ฮถ_2n)(ฯฬ_ig^v-ฯฬ_j1^v).
We can view pฬ_g,h as a V-statistic with an asymmetric
kernel h_gn. Decompose
pฬ_g,h-p_g^* = 1/n(n-1)โ_i=1^nโ_j=1,jโ i^n(h_gn(ฯฬ_i^v,ฯฬ_j^v;ฯฬ_h(ฯฬ_i^v))-h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฯ_i^v)))
+1/n(n-1)โ_i=1^nโ_j=1,jโ i^n(h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฯ_i^v))-๐ผ[h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฯ_i^v))])
+๐ผ[h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฯ_i^v))]-p_g^*=:T_1n+T_2n+T_3n,
where ฯ_i^v=(ฯ_ig^v,gโ๐ข) and ฯ_h=(ฯ_h|g,ฯ_h|1).
T_1n captures the estimation error due to the first-step estimators
ฮดฬ^v and ฯฬ_h, T_2n captures the
estimation error in the second step if properly centered, and T_3n
captures the bias due to kernel smoothing.
Lemmas <ref> and <ref> show that T_1n=n^-1โ_i=1^nฯ_g,h,1(z_i,g_i)+o_p(n^-1/2)
and T_2n=n^-1โ_i=1^nฯ_g,h,2(z_i)+o_p(n^-1/2).
Lemma <ref> shows that T_3n=o(n^-1/2) and is thus
negligible. Averaging over hโ 1,g, we derive โ(n)(pฬ_g-p_g^*)=n^-1/2โ_i=1^nฯ_g(z_i,g_i)+o_p(1),
where ฯ_g(z_i,g_i)=(G-2)^-1โ_hโ 1,g(ฯ_g,h,1(z_i,g_i)+ฯ_g,h,2(z_i))
and ๐ผ[ฯ_g(z_i,g_i)]=0. By the Central Limit Theorem,
we have โ(n)(pฬ_g-p_g^*)dโN(0,V_g),
where V_g=Var(ฯ_g(z_i,g_i)).
Denote Q(ฮณ,ฮผ^Z)=m_0(ฮณ,ฮผ^Z)'Wm_0(ฮณ,ฮผ^Z)
and Qฬ_n(ฮณ,ฮผ^Z)=mฬ_n(ฮณ,ฮผ^Z)'ลดmฬ_n(ฮณ,ฮผ^Z).
Fix ฮด>0. Let โฌ_ฮด(ฮณ_0)={ฮณโฮ:ฮณ-ฮณ_0<ฮด}
be an open ฮด-ball centered at ฮณ_0. If Q(ฮณฬ,ฮผฬ^Z)<inf_ฮณโฮโโฌ_ฮด(ฮณ_0)Q(ฮณ,ฮผฬ^Z),
then ฮณฬโโฌ_ฮด(ฮณ_0). Therefore,
(ฮณ-ฮณ_0<ฮด)โฅ(Q(ฮณฬ,ฮผฬ^Z)<inf_ฮณโฮโโฌ_ฮด(ฮณ_0)Q(ฮณ,ฮผฬ^Z)).
From the triangle inequality and the optimality of ฮณฬ,
we obtain Q(ฮณฬ,ฮผ_0^Z)โคQฬ_n(ฮณฬ,ฮผฬ^Z)+|Qฬ_n(ฮณฬ,ฮผฬ^Z)-Q(ฮณฬ,ฮผ_0^Z)|โคsup_ฮณโฮ|Qฬ_n(ฮณ,ฮผฬ^Z)-Q(ฮณ,ฮผ_0^Z)|+o_p(1).
The uniform convergence of the moment in Lemma <ref> together
with ลด-W=o_p(1) and the boundedness of W and m_0(ฮณ,ฮผ^Z)
(Assumptions <ref>, <ref>(i), <ref>(i))
implies that sup_ฮณโฮ|Qฬ_n(ฮณ,ฮผฬ^Z)-Q(ฮณ,ฮผ_0^Z)|=o_p(1),
and hence Q(ฮณฬ,ฮผ_0^Z)=o_p(1).
By Assumption <ref>(i), Wm_0(ฮณ,ฮผ^Z)=0 if and
only if ฮณ=ฮณ_0 and therefore Q(ฮณ,ฮผ_0^Z)
has a unique minimizer at ฮณ=ฮณ_0. Hence, by the compactness
of ฮโโฌ_ฮด(ฮณ_0) and the continuity
of Q(ฮณ,ฮผ_0^Z), we have inf_ฮณโฮโโฌ_ฮด(ฮณ_0)Q(ฮณ,ฮผ_0^Z)=Q(ฮณ̄,ฮผ_0^Z)>Q(ฮณ_0,ฮผ_0^Z)=0
for some ฮณ̄โฮโโฌ_ฮด(ฮณ_0).
Combining the results we can see that the right-hand side of equation
(<ref>) goes to 1, and the consistency of ฮณฬ
is proved.
ฮณฬ satisfies the first-order condition โmฬ_n(ฮณฬ,ฮผฬ^Z)'/โฮณลดmฬ_n(ฮณฬ,ฮผฬ^Z)=0.
Expanding the LHS around ฮณ_0 and solving for โ(n)(ฮณฬ-ฮณ_0)
gives
โ(n)(ฮณฬ-ฮณ_0)= -(โmฬ_n(ฮณฬ,ฮผฬ^Z)'/โฮณลดโmฬ_n(ฮณ̄,ฮผฬ^Z)/โฮณ')^-1โmฬ_n(ฮณฬ,ฮผฬ^Z)'/โฮณลดโ(n)mฬ_n(ฮณ_0,ฮผฬ^Z),
where ฮณ̄ is a mean value that lies between ฮณฬ and ฮณ_0.
Consider the derivatives in equation (<ref>). We
have
โmฬ_n(ฮณ,ฮผฬ^Z)/โฮณ' = 1/nโ_i=1^nโ m(ฯ_i,ฮณ,ฮผฬ^Z(ฯฬ_i^s))/โฮณ'
= -1/nโ_i=1^n(Z_i-ฮผ_0^Z(ฯ_i^s))X'_i+1/nโ_i=1^n(ฮผฬ^Z(ฯฬ_i^s)-ฮผ_0^Z(ฯ_i^s))X'_i
= -1/nโ_i=1^n(Z_i-ฮผ_0^Z(ฯ_i^s))X'_i+o_p(1),
where the last equality follows from Lemma <ref>.
By definition M_n=๐ผ[โmฬ_n(ฮณ_0,ฮผ_0^Z)/โฮณ']=-n^-1โ_i=1^n๐ผ[(Z_i-ฮผ_0^Z(ฯ_i^s))X'_i].
Lemma <ref> shows that n^-1โ_i=1^n((Z_i-ฮผ_0^Z(ฯ_i^s))X'_i-๐ผ[(Z_i-ฮผ_0^Z(ฯ_i^s))X'_i])=o_p(1).
Therefore, โmฬ_n(ฮณ_0,ฮผฬ^Z)/โฮณ'-M_n=o_p(1).
By Lemmas <ref> and <ref>, the last
term in equation (<ref>) has the asymptotic distribution
โ(n)ฮฉ_n^-1/2mฬ_n(ฮณ_0,ฮผฬ^Z)dโN(0,I_d_Z),
where ฮฉ_n is defined in Lemma <ref>. By equation
(<ref>) and Slutsky's theorem, we obtain โ(n)ฮฃ_n^-1/2(ฮณฬ-ฮณ_0)dโN(0,I_d_ฮณ),
where ฮฃ_n=(M'_nWM_n)^-1M'_nWฮฉ_nWM_n(M'_nWM_n)^-1.
ยง.ยง.ยง Proofs in Appendix <ref>
By construction, w_โ=1
and w_โโค1/min_iโ๐ฉmin_gโ๐ขd_i,g.
Observe that 1/n-1(d_i,g-๐ผ[d_i,g|ฯ_i])=o_p(1)
by the law of large numbers. Because 1/n-1๐ผ[d_i,g|ฯ_i]>c
for all iโ๐ฉ and gโ๐ข, we have ๐ผ[1/min_iโ๐ฉmin_gโ๐ข(1/n-1d_i,g)^4]โ๐ผ[1/min_iโ๐ฉmin_gโ๐ข(1/n-1๐ผ[d_i,g|ฯ_i])^4]<โ
by dominated convergence and thus ๐ผ[w_โ^4]=O(n^-4).
Conditional on (x,z,g),
a is independent of ฮฝ, so we have
๐ผ[ฮฝ_i|ฯ]=๐ผ[ฮฝ_i|x,z,g]=0.
Moreover, conditional on ฯ, w
is a function of ฯ and is thus independent of ฮฝ.[Because (a,ฯ) is independent of ฮฝ
conditional on (x,z,g),
we can show that ฯ is independent of ฮฝ
conditional on ฯ.] Hence, Assumptions <ref>(i)โ(iii) are satisfied.
To verify Assumptions <ref>(iv)โ(vi), define w_ij,g=d_ij,g/d_i,g,
w̄_ij,g=d_ij,g/๐ผ[d_i,g|ฯ_i], and e_ij,g=w_ij,g-w̄_ij,g. By Taylor expansion,
e_ij,g=-d_ij,g/๐ผ[d_i,g|ฯ_i]^2(d_i,g-๐ผ[d_i,g|ฯ_i])+d_ij,g/๐ผ[d_i,g|ฯ_i]^3(d_i,g-๐ผ[d_i,g|ฯ_i])^2-โฏ
It suffices to consider the leading term in e_ij,g. Recall that
d_i,g-๐ผ[d_i,g|ฯ_i]=โ_jโ ir_ij,g,
where r_ij,g=d_ij,g1{g_j=g}-๐ผ[d_ij,g1{g_j=g}|ฯ_i].
Note that |r_ij,g|โค1 and ๐ผ[r_ij,g|ฯ_i]=0.
For any jโ k, conditional on ฯ_i, r_ij,g is
a function of (x_j,a_j,ฯ_ij,g_j) and r_ik,g is
a function of (x_k,a_k,ฯ_ik,g_k), so r_ij,g and
r_ik,g are independent. Therefore,
๐ผ[(d_i,g-๐ผ[d_i,g|ฯ_i])^4|ฯ_i] = โ_j,k,l,mโ i๐ผ[r_ij,gr_ik,gr_il,gr_im,g|ฯ_i]
= โ_jโ i๐ผ[r_ij,g^4|ฯ_i]+โ_jโ k,j,kโ i๐ผ[r_ij,g^2r_ik,g^2|ฯ_i]โค O(n^2).
Combining the two displays yields max_i,jโ๐ฉmax_gโ๐ข๐ผ[e_ij,g^4]=O(n^-6).
Furthermore, summing over the groups we define w̄_ij=โ_g=1^Gw̄_ij,g1{g_i=g}1{g_j=g}
and e_ij=w_ij-w̄_ij. From the previous results we obtain
max_i,jโ๐ฉ|w̄_ij|โค1/min_iโ๐ฉmin_gโ๐ข๐ผ[d_i,g|ฯ_i]โค O(n^-1)
and max_i,jโ๐ฉ๐ผ[e_ij^4]โค Gmax_i,jโ๐ฉmax_gโ๐ข๐ผ[e_ij,g^4]=O(n^-6).
Assumption <ref>(v). For any disjoint {i,j} and {k,l},
we have Cov(w_ij,w_kl|ฯ)=Cov(w_ij,e_kl|ฯ)=Cov(e_ij,e_kl|ฯ).
The first equality holds because conditional on ฯ,
w_ij is a function of (ฯ_ijฬ,jฬโ๐ฉ_g_i\{i}),
where ๐ฉ_g_i={j:g_j=g_i}, and w̄_kl is a function of ฯ_kl, so Cov(w_ij,w̄_kl|ฯ)=0.
The second equality follows similarly from Cov(w̄_ij,e_kl|ฯ)=0.
By Cauchy-Schwarz inequality and Jensen's inequality, ๐ผ[Cov(e_ij,e_kl|ฯ)^2]โค C๐ผ[e_ij^4].
Therefore, max_i,j,k,lโ๐ฉ:{i,j}โฉ{k,l}=โ
๐ผ[Cov(w_ij,w_kl|ฯ)^2]=O(n^-6)=o(n^-4/K)
because K/n^2โ0.
Moreover, for any i,jโ๐ฉ, ๐ผ[w_ij|ฯ]-๐ผ[w_ij|ฯ_i,ฯ_j]=๐ผ[e_ij|ฯ]-๐ผ[e_ij|ฯ_i,ฯ_j] because wฬ_ij depends on ฯ only through ฯ_i and ฯ_j and thus ๐ผ[wฬ_ij|ฯ]=๐ผ[wฬ_ij|ฯ_i,ฯ_j].
We can bound ๐ผ[(๐ผ[e_ij|ฯ]-๐ผ[e_ij|ฯ_i,ฯ_j])^4]โค C๐ผ[e_ij^4].
Hence, max_i,jโ๐ฉ๐ผ[(๐ผ[w_ij|ฯ]-๐ผ[w_ij|ฯ_i,ฯ_j])^4]=O(n^-6)=o(n^-4/K^2)
because K/nโ0. Assumption <ref>(v) is satisfied.
Assumption <ref>(vi). For {i,j} and {k,l} that overlap in one element, wฬ_ij and wฬ_kl are independent conditional on ฯ. Therefore, Cov(w_ij,w_kl|ฯ)=Cov(e_ij,wฬ_kl|ฯ)+Cov(wฬ_ij,e_kl|ฯ)+Cov(e_ij,e_kl|ฯ). By Cauchy-Schwarz inequality and Jensen's inequality, we can bound ๐ผ[Cov(e_ij,wฬ_kl|ฯ)^2]โค C(๐ผ[e_ij^4]๐ผ[wฬ_kl^4])^1/2=O(n^-5)=o(n^-4)
uniformly. Assumption <ref>(vi) thus holds.
Assumption <ref>(iv). For any rโฅ1, (w^r)_ij=โ_(t_0,โฆ,t_r):(t_0,t_r)=(i,j)แบ_t_0,โฆ,t_r,
where แบ_t_0,โฆ,t_r=โ_s=0^r-1w_t_st_s+1
and the sum is over tuples (t_0,โฆ,t_r) with t_0=i
and t_r=j. For any disjoint {i,j} and {k,l}, we can
write Cov(h_ij(w^r)_ij,h_kl(w^rฬ)_kl)
as
โ_(t_0,โฆ,t_r,tฬ_0โฆ,tฬ_rฬ):(t_0,t_r,tฬ_0,tฬ_rฬ)=(i,j,k,l),
{t_0,โฆ,t_r}โฉ{tฬ_0โฆ,tฬ_rฬ}โ โ
Cov(h_ijแบ_t_0,โฆ,t_r,h_klแบ_tฬ_0,โฆ,tฬ_rฬ)
+โ_(t_0,โฆ,t_r,tฬ_0โฆ,tฬ_rฬ):(t_0,t_r,tฬ_0,tฬ_rฬ)=(i,j,k,l),
{t_0,โฆ,t_r}โฉ{tฬ_0โฆ,tฬ_rฬ}=โ
Cov(h_ijแบ_t_0,โฆ,t_r,h_klแบ_tฬ_0,โฆ,tฬ_rฬ).
The first sum consists of O(n^(r+rฬ)-3) terms, and each
term can be bounded uniformly by C(๐ผ[w_โ^r+rฬ]+๐ผ[w_โ^r]๐ผ[w_โ^rฬ])=O(n^-(r+rฬ)).
Hence, the first sum in (<ref>) is O(n^(r+rฬ)-3)ยท O(n^-(r+rฬ))=o(n^-2),
uniformly in i, j, k, l, r, and rฬ. The second
sum in (<ref>) consists of O(n^(r+rฬ)-2) terms.
To derive a uniform bound on each term, for disjoint {t_0,โฆ,t_r}
and {tฬ_0,โฆ,tฬ_rฬ} with (t_0,t_r,tฬ_0,tฬ_rฬ)=(i,j,k,l),
we write
Cov(h_ijแบ_t_0,โฆ,t_r,h_klแบ_tฬ_0,โฆ,tฬ_r)
= ๐ผ[h_ijh_klCov(แบ_t_0,โฆ,t_r,แบ_tฬ_0,โฆ,tฬ_r|ฯ)]+Cov(h_ij๐ผ[แบ_t_0,โฆ,t_r|ฯ],h_kl๐ผ[แบ_tฬ_0,โฆ,tฬ_r|ฯ]).
Define แบฬ_t_0,โฆ,t_r=โ_s=0^r-1wฬ_t_st_s+1 and e_t_0,โฆ,t_r=แบ_t_0,โฆ,t_r-แบฬ_t_0,โฆ,t_r.
For any t_0,โฆ,t_r, by continuous mapping and Slutsky's
theorem we can bound e_t_0,โฆ,t_r uniformly by o_p(n^-r).
Observe that Cov(แบ_t_0,โฆ,t_r,แบ_tฬ_0,โฆ,tฬ_rฬ|ฯ)=Cov(แบ_t_0,โฆ,t_r,e_tฬ_0,โฆ,tฬ_rฬ|ฯ)=Cov(e_t_0,โฆ,t_r,e_tฬ_0,โฆ,tฬ_rฬ|ฯ).
The first equality holds because conditional on ฯ, แบ_t_0,โฆ,t_r is a function of (ฯ_t_sj,jโ๐ฉ_g_t_s\{t_s},s=0,โฆ,r-1) and แบฬ_tฬ_0,โฆ,tฬ_rฬ is a function of (ฯ_tฬ_stฬ_s+1,s=0,โฆ,rฬ-1), and thus Cov(แบ_t_0,โฆ,t_r,แบฬ_tฬ_0,โฆ,tฬ_rฬ|ฯ)=0. The second equality follows similarly from Cov(แบฬ_t_0,โฆ,t_r,e_tฬ_0,โฆ,tฬ_rฬ|ฯ)=0.
Therefore, from the boundedness of h_ij and dominated convergence,
we obtain that ๐ผ[h_ijh_klCov(แบ_t_0,โฆ,t_r,แบ_tฬ_0,โฆ,tฬ_r|ฯ)]
has a uniform bound that is of the rate of o(n^-(r+rฬ)).
Moreover, because แบฬ_t_0,โฆ,t_r depends on ฯ only through ฯ_t_0,โฆ,ฯ_t_r, ๐ผ[แบฬ_t_0,โฆ,t_r|ฯ]=๐ผ[แบฬ_t_0,โฆ,t_r|ฯ_t_0,โฆ,ฯ_t_r].
For disjoint {t_0,โฆ,t_r} and {tฬ_0,โฆ,tฬ_rฬ} with (t_0,t_r,tฬ_0,tฬ_rฬ)=(i,j,k,l), h_ij๐ผ[แบฬ_t_0,โฆ,t_r|ฯ_t_0,โฆ,ฯ_t_r] and h_kl๐ผ[แบฬ_tฬ_0,โฆ,tฬ_rฬ|ฯ_tฬ_0,โฆ,ฯ_tฬ_rฬ] are independent. Hence, Cov(h_ij๐ผ[แบ_t_0,โฆ,t_r|ฯ],h_kl๐ผ[แบ_tฬ_0,โฆ,tฬ_r|ฯ])=Cov(h_ij๐ผ[แบฬ_t_0,โฆ,t_r|ฯ_t_0,โฆ,ฯ_t_r],h_kl๐ผ[e_tฬ_0,โฆ,tฬ_r|ฯ])+Cov(h_ij๐ผ[e_t_0,โฆ,t_r|ฯ],h_kl๐ผ[แบฬ_tฬ_0,โฆ,tฬ_r|ฯ_tฬ_0,โฆ,ฯ_tฬ_rฬ])+Cov(h_ij๐ผ[e_t_0,โฆ,t_r|ฯ],h_kl๐ผ[e_tฬ_0,โฆ,tฬ_r|ฯ]).
Similarly as before, we can derive that Cov(h_ij๐ผ[แบ_t_0,โฆ,t_r|ฯ],h_kl๐ผ[แบ_tฬ_0,โฆ,tฬ_r|ฯ])
has a uniform bound that is o(n^-(r+rฬ)). Combining the
results yields Assumption <ref>(iv).
For any jโ k, conditional on ฯ, d_ij,g
is a function of ฯ_ij and d_ik,g is a function of ฯ_ik,
so d_ij,g and d_ik,g are independent. Hence, we obtain ๐ผ[(d_i,g-๐ผ[d_i,g|ฯ])^2]=O(n)
and thus 1/n-1(d_i,g-๐ผ[d_i,g|ฯ])=o_p(1).
Note that c<1/n-1๐ผ[d_i,g|ฯ]โค1
for all iโ๐ฉ and gโ๐ข. Following the argument
in Lemma <ref>, we can show that Assumptions <ref>(i)โ(iii)
are satisfied.
To verify Assumptions <ref>(iv)โ(vi), modify wฬ_ij,g and e_ij,g in Lemma <ref> with wฬ_ij,g=d_ij,g/๐ผ[d_i,g|ฯ] and e_ij,g=w_ij,g-wฬ_ij,g. Write d_i,g-๐ผ[d_i,g|ฯ]=โ_jโ ir_ij,g, where r_ij,g=d_ij,g1{g_j=g}-๐ผ[d_ij,g1{g_j=g}|ฯ].
Observe that |r_ij,g|โค1 and ๐ผ[r_ij,g|ฯ]=0.
For any jโ k, r_ij,g and r_ik,g are independent conditional
on ฯ. Therefore, we have
๐ผ[(d_i,g-๐ผ[d_i,g|ฯ])^4|ฯ] = โ_j,k,l,mโ i๐ผ[r_ij,gr_ik,gr_il,gr_im,g|ฯ]
= โ_jโ i๐ผ[r_ij,g^4|ฯ]+โ_jโ k,j,kโ i๐ผ[r_ij,g^2r_ik,g^2|ฯ]โค O(n^2).
Combining a modification of equation (<ref>) where we replace
๐ผ[d_i,g|ฯ_i] with ๐ผ[d_i,g|ฯ]
and equation (<ref>) yields max_i,jโ๐ฉmax_gโ๐ข๐ผ[e_ij,g^4]=O(n^-6).
Furthermore, summing over the groups we define wฬ_ij=โ_g=1^Gwฬ_ij,g1{g_i=g}1{g_j=g} and e_ij=w_ij-wฬ_ij. From the previous results we obtain max_i,jโ๐ฉ|wฬ_ij|โค1/min_iโ๐ฉmin_gโ๐ข๐ผ[d_i,g|ฯ]โค O(n^-1) and max_i,jโ๐ฉ๐ผ[e_ij^4]โค Gmax_i,jโ๐ฉmax_gโ๐ข๐ผ[e_ij,g^4]=O(n^-6).
Conditional on ฯ, wฬ_ij is a function of ฯ_ij. With the modified wฬ_ij, the rest of the proof in Lemma <ref> still holds, except that wฬ_ij also depends on ฯ_k, kโ i,j, due to the presence of strategic interactions. In fact, ๐ผ[wฬ_ij|ฯ]=โ_g=1^G๐ผ[wฬ_ij,g|ฯ]1{g_i=g}1{g_j=g} and ๐ผ[wฬ_ij,g|ฯ]=๐ผ[d_ij,g|ฯ]/๐ผ[d_i,g|ฯ]=f_ij,g/โ_kโ if_ik,g1{g_k=g},
where f_ij,g is shorthand for f_g(x_i,x_j,a_i;x).
To overcome this problem, we utilize the limiting approximation of
the model.
Define d_ij,g^*=1{f_ij,g^*โฅฯ_ij} for iโ j,
d_ii,g^*=0, and d_i,g^*=โ_j=1^nd_ij,g^*1{g_j=g},
where f_ij,g^* is shorthand for f_g^*(x_i,x_j,a_i).
Define wฬ_ij,g^*=d_ij,g^*/๐ผ[d_i,g^*|ฯ_i], e_ij,g^*=wฬ_ij,g-wฬ_ij,g^*, wฬ_ij^*=โ_g=1^Gwฬ_ij,g^*1{g_i=g}1{g_j=g}, and e_ij^*=wฬ_ij-wฬ_ij^*. Note that w_ij=e_ij+e_ij^*+wฬ_ij^*. Because ๐ผ[wฬ_ij^*|ฯ]=๐ผ[wฬ_ij^*|ฯ_i,ฯ_j], we can write ๐ผ[w_ij|ฯ]-๐ผ[w_ij|ฯ_i,ฯ_j]=๐ผ[e_ij|ฯ]-๐ผ[e_ij|ฯ_i,ฯ_j]+๐ผ[e_ij^*|ฯ]-๐ผ[e_ij^*|ฯ_i,ฯ_j].
The proof in Lemma <ref> provides a bound for ๐ผ[e_ij|ฯ]-๐ผ[e_ij|ฯ_i,ฯ_j].
Here we derive a similar bound for ๐ผ[e_ij^*|ฯ]-๐ผ[e_ij^*|ฯ_i,ฯ_j].
By Taylor expansion,
e_ij,g^* = 1/๐ผ[d_i,g^*|ฯ_i](d_ij,g-d_ij,g^*)-d_ij,g^*/๐ผ[d_i,g^*|ฯ_i]^2(๐ผ[d_i,g|ฯ]-๐ผ[d_i,g^*|ฯ_i])
-1/๐ผ[d_i,g^*|ฯ_i]^2(d_ij,g-d_ij,g^*)(๐ผ[d_i,g|ฯ]-๐ผ[d_i,g^*|ฯ_i])
+d_ij,g^*/๐ผ[d_i,g^*|ฯ_i]^3(๐ผ[d_i,g|ฯ]-๐ผ[d_i,g^*|ฯ_i])^2-โฏ
It suffices to consider the first two leading terms. By the limiting
approximation, we derive ๐ผ[๐ผ[d_ij,g-d_ij,g^*|ฯ]^4]=๐ผ[(f_ij,g-f_ij,g^*)^4]=O(n^-2)
uniformly. Moreover, observe that ๐ผ[d_i,g|ฯ]=โ_jโ if_ij,g1{g_j=g},
๐ผ[d_i,g^*|ฯ]=โ_jโ if_ij,g^*1{g_j=g},
and ๐ผ[d_i,g^*|ฯ_i]=โ_jโ i๐ผ[f_ij,g^*1{g_j=g}|ฯ_i].
By Cauchy-Schwarz inequality and the limiting approximation, we can
bound ๐ผ[(๐ผ[d_i,g|ฯ]-๐ผ[d_i,g^*|ฯ])^4]โค(n-1)^4max_i,jโ๐ฉmax_gโ๐ข๐ผ[(f_ij,g-f_ij,g^*)^4]=O(n^2).
Next, note that for any jโ k, f_ij,g^*1{g_j=g} and
f_ik,g^*1{g_k=g} are independent conditional on ฯ_i.
Similarly as in equation (<ref>), we can thus derive ๐ผ[(๐ผ[d_i,g^*|ฯ]-๐ผ[d_i,g^*|ฯ_i])^4]=O(n^2)
uniformly. Therefore, by (a+b)^4โค8(a^4+b^4), we obtain
๐ผ[(๐ผ[d_i,g|ฯ]-๐ผ[d_i,g^*|ฯ_i])^4]=O(n^2)
uniformly. Combining these results with equation (<ref>)
yields max_i,jโ๐ฉmax_gโ๐ข๐ผ[๐ผ[e_ij,g^*|ฯ]^4]=O(n^-6)
and hence max_i,jโ๐ฉ๐ผ[๐ผ[e_ij^*|ฯ]^4]โค Gmax_i,jโ๐ฉmax_gโ๐ข๐ผ[๐ผ[e_ij,g^*|ฯ]^4]=O(n^-6).
By iterated expectations and Jensen's inequality ๐ผ[๐ผ[e_ij^*|ฯ_i,ฯ_j]^4]โค๐ผ[๐ผ[e_ij^*|ฯ]^4],
so we obtain the uniform bound max_i,jโ๐ฉ๐ผ[(๐ผ[e_ij^*|ฯ]-๐ผ[e_ij^*|ฯ_i,ฯ_j])^4]โค Cmax_i,jโ๐ฉ๐ผ[๐ผ[e_ij^*|ฯ]^4]=O(n^-6).
This proves that Assumption <ref>(vi) is satisfied.
Assumption <ref>(v) can be justified by the same proof as in
Lemma <ref>. As for Assumption <ref>(iv), the proof
in Lemma <ref> remains valid except the last paragraph.
Define แบฬ_t_0,โฆ,t_r^*=โ_s=0^r-1wฬ_t_st_s+1^*. Because แบฬ_t_0,โฆ,t_r^* depends on ฯ only through ฯ_t_0,โฆ,ฯ_t_r, ๐ผ[แบฬ_t_0,โฆ,t_r^*|ฯ]=๐ผ[แบฬ_t_0,โฆ,t_r^*|ฯ_t_0,โฆ,ฯ_t_r]. For disjoint {t_0,โฆ,t_r} and {tฬ_0,โฆ,tฬ_rฬ} with (t_0,t_r,tฬ_0,tฬ_rฬ)=(i,j,k,l), h_ij๐ผ[แบฬ_t_0,โฆ,t_r^*|ฯ_t_0,โฆ,ฯ_t_r] and h_kl๐ผ[แบฬ_tฬ_0,โฆ,tฬ_rฬ^*|ฯ_tฬ_0,โฆ,ฯ_tฬ_rฬ] are independent. Using a similar argument as in Lemma <ref> with แบฬ_t_0,โฆ,t_r^* in place of แบฬ_t_0,โฆ,t_r,
we derive that Cov(h_ij๐ผ[แบ_t_0,โฆ,t_r|ฯ],h_kl๐ผ[แบ_tฬ_0,โฆ,tฬ_r|ฯ])
has a uniform bound that is o(n^-(r+rฬ)). Assumption
<ref>(iv) is thus satisfied.
Online Appendix to Social Interactions with Endogenous Group Formation
Shuyang Sheng and Xiaoting Sun
ยง SIMULATIONS: GROUP FORMATION PARAMETERS
We use maximum simulated likelihood (MSL) to estimate the parameter
ฮธ=(ฮด',p',ฮฑ')'. To this end, we simulate R draws
from the multivariate standard normal distribution, denoted as ฮท_i^(1),โฆ,ฮท^(R),
and compute the simulated conditional probability
ฯฬ_g(z_i) = 1/Rโ_r=1^Rexp(ฮฑ_g+ฮด_1^uz_1,ig^u+ฮด_2^uz_2,i)ยท1{ฮด_1^vz_1,ig^v+ฮด_2^vz_2,i+ฮท_gi^(r)โฅ p_g}/โ_k=1^Gexp(ฮฑ_k+ฮด_1^uz_1,ik^u+ฮด_2^uz_2,i)ยท1{ฮด_1^vz_1,ik^v+ฮด_2^vz_2,i+ฮท_ki^(r)โฅ p_k}+1, โ gโ๐ข
ฯฬ_0(z_i) = 1/Rโ_r=1^R1/โ_k=1^Gexp(ฮฑ_k+ฮด_1^uz_1,ik^u+ฮด_2^uz_2,i)ยท1{ฮด_1^vz_1,ik^v+ฮด_2^vz_2,i+ฮท_ki^(r)โฅ p_k}+1.
The MSL estimator ฮธฬ maximizes L(ฮธ)=1/nโ_i=1^nโ_g=0^G1{g_i=g}logฯฬ_g(z_i).
To mitigate the numerical difficulties caused by the nonsmoothness
of the indicator functions, we replace 1{ฮด_1^vz_1,ig^v+ฮด_2^vz_2,i+ฮท_gi^(r)โฅ p_g}
with a smoothed A-R simulator <cit.> (1+exp(p_g-(ฮด_1^vz_1,ig^v+ฮด_2^vz_2,i+ฮท_gi^(r)))/ฮบ)^-1,
where ฮบ>0 is a scale factor specified as 0.05.[The smaller ฮบ is, the better the simulator approximates the
indicator function.] Table <ref> shows the estimation results.
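As an illustration of the construction above, the following is a minimal Python sketch of the smoothed simulated probabilities for a single individual. The names, shapes, and scalar treatment of the coefficients are simplifying assumptions made for exposition; they are not taken from the estimation code.

```python
import numpy as np

def smoothed_probs(z1u, z1v, z2, alpha, d1u, d2u, d1v, d2v, p, kappa=0.05, R=200, rng=None):
    """Smoothed simulated choice probabilities (phi_0, phi_1..G) for one individual.

    z1u, z1v : (G,) group-specific covariates; z2 : scalar covariate
    alpha, p : (G,) group intercepts and cutoffs; d1u, d2u, d1v, d2v : scalar coefficients
    """
    rng = np.random.default_rng() if rng is None else rng
    G = len(alpha)
    eta = rng.standard_normal((R, G))                   # simulation draws eta^(r)
    util = np.exp(alpha + d1u * z1u + d2u * z2)         # exp(alpha_g + delta^u 'z)
    v = d1v * z1v + d2v * z2 + eta                      # latent index, (R, G)
    smooth = 1.0 / (1.0 + np.exp((p - v) / kappa))      # smoothed A-R indicator
    num = util * smooth                                 # group-g numerators, (R, G)
    denom = num.sum(axis=1) + 1.0                       # outside option contributes the +1
    phi_g = (num / denom[:, None]).mean(axis=0)         # phi_g, g = 1,...,G
    phi_0 = (1.0 / denom).mean()                        # phi_0
    return phi_0, phi_g
```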
ยง LEMMAS FOR THE ASYMPTOTICS OF Pฬ AND ฮฬ
*Notation
Define ฯ_i^v(ฮด^v)=(ฯ_ig^v(ฮด_g^v),gโ๐ข).
Then ฯ_i^v=ฯ_i^v(ฮด_0^v) and ฯฬ_i^v=ฯ_i^v(ฮดฬ^v).
For hโ 1,g, define ฯ_h(ฮด^v)=(ฯ_h|g(ฯ_ig^v(ฮด_g^v)),ฯ_h|1(ฯ_i1^v(ฮด_1^v)))
and ฯฬ_h(ฮด^v)=(ฯฬ_h|g(ฯ_ig^v(ฮด_g^v)),
ฯฬ_h|1(ฯ_i1^v(ฮด_1^v))). For any gโ h,
define ฯ_h|g,i=ฯ_h|g(ฯ_ig^v) and ฯฬ_h|g,i=ฯฬ_h|g(ฯ_ig^v).
Under Assumption <ref>(ii), the inverse of ฯ_h|g(ยท)
exists and hence ฯ_ig^v=ฯ_h|g^-1(ฯ_h|g,i).[Under Assumption <ref>(ii), ฯ_h|g(ฯ) is
strictly monotone and thus its inverse function ฯ_h|g^-1(ฯ)
exists.] Denote the pdf of ฯ_h|g,i by f_ฯ_h|g. Let 0<C<โ
be a universal constant.
The term T_1n in equation (<ref>)
satisfies T_1n=n^-1โ_i=1^nฯ_g,h,1(z_i,g_i)+o_p(n^-1/2),
where ฯ_g,h,1(z_i,g_i) is defined in equation (<ref>).
We decompose T_1n as
T_1n = 1/n(n-1)โ_i=1^nโ_j=1,jโ i^n(h_gn(ฯฬ_i^v,ฯฬ_j^v;ฯฬ_h(ฮดฬ^v))-h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v)))
+1/n(n-1)โ_i=1^nโ_j=1,jโ i^n(h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))-h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v)))
=: T_1n^ฮด+T_1n^ฯ,
where T_1n^ฮด and T_1n^ฯ captures the estimation
error due to ฮดฬ^v and ฯฬ_h, respectively.
Step 1: we start with T_1n^ฯ in equation (<ref>).
Define the remainder term
R_n,ij = 1/ฮถ_2nK_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)-1/ฮถ_2nK_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)
-1/ฮถ_2n^2K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)((ฯฬ_h|g,i-ฯ_h|g,i)-(ฯฬ_h|1,j-ฯ_h|1,j))
= 1/2ฮถ_2n^-3Kโ_2(_n,ij)((ฯฬ_h|g,i-ฯ_h|g,i)-(ฯฬ_h|1,j-ฯ_h|1,j))^2,
where the last equality follows by Taylor expansion, with _n,ij
an intermediate value between ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n
and ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n. We can write
T_1n^ฯ = 1/n(n-1)โ_i=1^nโ_j=1,jโ i^n1/ฮถ_2n^2K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)(ฯ_ig^v-ฯ_j1^v)
ยท((ฯฬ_h|g,i-ฯ_h|g,i)-(ฯฬ_h|1,j-ฯ_h|1,j))
+1/n(n-1)โ_i=1^nโ_j=1,jโ i^nR_n,ij(ฯ_ig^v-ฯ_j1^v).
For any gโ h, write ฯฬ_h|g,i=bฬ_h|g(ฯ_ig^v)/bฬ_g(ฯ_ig^v),
where bฬ_h|g(ฯ_ig^v)=1/nฮถ_1nโ_k=1^n1{g_k=h}K_1(ฯ_ig^v-ฯ_kg^v/ฮถ_1n)
and bฬ_g(ฯ_ig^v)=1/nฮถ_1nโ_k=1^nK_1(ฯ_ig^v-ฯ_kg^v/ฮถ_1n).
Moreover, let b_g(ฯ_ig^v) denote the pdf of ฯ_ig^v,
and define b_h|g(ฯ_ig^v)=ฯ_h|g,ib_g(ฯ_ig^v).
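As a side note, the ratio ฯฬ_h|g,i=bฬ_h|g(ฯ_ig^v)/bฬ_g(ฯ_ig^v) is a standard Nadaraya-Watson estimate of P(g_k=h|ฯ_kg^v). A minimal sketch, with a Gaussian kernel standing in for K_1 and purely illustrative names:

```python
import numpy as np

def phi_hat(pi_eval, pi_sample, g_sample, h, zeta,
            kernel=lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2.0 * np.pi)):
    """Kernel estimate of phi_{h|g}(pi) = P(g_k = h | pi_kg^v = pi) at pi = pi_eval.

    pi_sample : (n,) sample of pi_kg^v;  g_sample : (n,) group labels g_k;  zeta : bandwidth.
    """
    k = kernel((pi_eval - pi_sample) / zeta)
    b_hg = np.mean((g_sample == h) * k) / zeta   # estimate of b_{h|g}
    b_g = np.mean(k) / zeta                      # estimate of b_g
    return b_hg / b_g
```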
We express ฯฬ_h|g,i-ฯ_h|g,i as a linear functional
of the kernel estimators bฬ_h|g and bฬ_g. Specifically,
we follow <cit.> and <cit.>[Lemma B.3 in <cit.> holds by Assumptions <ref>,
<ref>(ii)(iii), <ref>(i), <ref>(i),
and n^1/2ฮถ_1n/ln nโโ. The last condition
is implied by Assumption <ref>(ii). To see this, note
that the second condition of Assumption <ref>(ii) implies
that nฮถ_1n^3โโ, or ฮถ_1n=c_nn^-1/3
with c_nโโ. Therefore, n^1/2ฮถ_1n/ln n=c_nn^1/2/ln nโโ,
and Lemma B.3 in <cit.> holds under the assumptions
we impose here.] and derive
max_i|ฯฬ_h|g,i-ฯ_h|g,i-1/b_g(ฯ_ig^v)(bฬ_h|g(ฯ_ig^v)-bฬ_g(ฯ_ig^v)ฯ_h|g,i)|
โค max_i1/(|bฬ_g(ฯ_ig^v)|b_g(ฯ_ig^v))(1+ฯ_h|g,i)((bฬ_h|g(ฯ_ig^v)-b_h|g(ฯ_ig^v))^2+(bฬ_g(ฯ_ig^v)-b_g(ฯ_ig^v))^2)
โค O_p(1)sup_ฯ((bฬ_h|g(ฯ)-b_h|g(ฯ))^2+(bฬ_g(ฯ)-b_g(ฯ))^2)
= O_p(((ln n)^1/2(nฮถ_1n)^-1/2+ฮถ_1n^s_1)^2),
where the second inequality holds because b_g(ยท) is bounded
away from zero under Assumption <ref>(i) and bฬ_g(ยท)
is uniformly close to b_g(ยท). Under Assumption <ref>(i),
we can bound the linearization error as ฮถ_2n^-2max_i|ฯฬ_h|g,i-ฯ_h|g,i-1/b_g(ฯ_ig^v)(bฬ_h|g(ฯ_ig^v)-bฬ_g(ฯ_ig^v)ฯ_h|g,i)|=o_p(n^-1/2)
and similarly ฮถ_2n^-2max_i,j|ฯฬ_h|1,i-ฯ_h|1,i-1/b_1(ฯ_i1^v)(bฬ_h|1(ฯ_i1^v)-bฬ_1(ฯ_i1^v)ฯ_h|1,i)|=o_p(n^-1/2).
Applying the boundedness of K_2'(ยท) and ฯ_ig^v
(Assumptions <ref>(ii)(iii), <ref>(i), and <ref>(i))
we can see that the overall linearization error is o_p(n^-1/2).
Furthermore, observe that max_i|1/b_g(ฯ_ig^v)(bฬ_h|g(ฯ_ig^v)-bฬ_g(ฯ_ig^v)ฯ_h|g,i)|โค Csup_ฯ(|bฬ_h|g(ฯ)-b_h|g(ฯ)|+|bฬ_g(ฯ)-b_g(ฯ)|)=O_p((ln n)^1/2(nฮถ_1n)^-1/2+ฮถ_1n^s_1)
by <cit.>. Combining this with equation
(<ref>) yields max_i|ฯฬ_h|g,i-ฯ_h|g,i|=O_p((ln n)^1/2(nฮถ_1n)^-1/2+ฮถ_1n^s_1)
and similarly for ฯฬ_h|1,j-ฯ_h|1,j. Hence, by Assumptions
<ref>(ii)(iii), Assumption <ref>(i), <ref>(i),
<ref>(i), and <ref>(i), the remainder term
is negligible, that is, 1/n(n-1)โ_i=1^nโ_j=1,jโ i^n|R_n,ij(ฯ_ig^v-ฯ_j1^v)|=ฮถ_2n^-3O_p(((ln n)^1/2(nฮถ_1n)^-1/2+ฮถ_1n^s_1)^2)=o_p(n^-1/2).
Overall, we obtain
T_1n^ฯ = 1/n(n-1)โ_i=1^nโ_j=1,jโ i^n1/ฮถ_2n^2K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)(ฯ_ig^v-ฯ_j1^v)
ยท(bฬ_h|g(ฯ_ig^v)-bฬ_g(ฯ_ig^v)ฯ_h|g,i/b_g(ฯ_ig^v)-bฬ_h|1(ฯ_j1^v)-bฬ_1(ฯ_j1^v)ฯ_h|1,j/b_1(ฯ_j1^v))+o_p(n^-1/2).
Let ฯ_i=(ฯ_i^v,g_i). Plugging in the expressions
of the kernel estimators, we can write
T_1n^ฯ = 1/n^2(n-1)โ_i=1^nโ_j=1^nโ_k=1^n1{iโ j}q_n(ฯ_i,ฯ_j,ฯ_k)+o_p(n^-1/2)
= 1/n(n-1)(n-2)โ_i=1^nโ_j=1^nโ_k=1^n1{iโ jโ k}q_n(ฯ_i,ฯ_j,ฯ_k)+o_p(n^-1/2)
=: Q_n+o_p(n^-1/2),
where q_n(ฯ_i,ฯ_j,ฯ_k)
= 1/ฮถ_1nฮถ_2n^2K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)(ฯ_ig^v-ฯ_j1^v)
ยท(K_1(ฯ_ig^v-ฯ_kg^v/ฮถ_1n)1{g_k=h}-ฯ_h|g,i/b_g(ฯ_ig^v)-K_1(ฯ_j1^v-ฯ_k1^v/ฮถ_1n)1{g_k=h}-ฯ_h|1,j/b_1(ฯ_j1^v)).
The second equality in (<ref>) follows because the
terms with i=k or j=k are negligible.[The terms with i=k are negligible because 1/n^2(n-1)โ_i=1^nโ_j=1^n1{iโ j}q_n(ฯ_i,ฯ_j,ฯ_i)โค(nฮถ_1nฮถ_2n^2)^-1sup_t|K_1(t)|sup_t|K'_2(t)|1/n(n-1)โ_i=1^nโ_j=1^n1{iโ j}|ฯ_ig^v-ฯ_j1^v|(1/b_g(ฯ_ig^v)+1/b_1(ฯ_j1^v))=O_p((nฮถ_1nฮถ_2n^2)^-1)=o_p(n^-1/2)
by Assumptions <ref>(ii)(iii), <ref>(ii)(iii), <ref>(i),
<ref>(iii), <ref>(i), and <ref>(i).
A similar argument shows that the terms with j=k are negligible.] Q_n is a V-statistic of degree 3 with an asymmetric kernel
function.
We derive an asymptotically linear representation of Q_n using
the idea of Hoeffding projection <cit.>. Let
I=(i_1,i_2,i_3) with i_1โ i_2โ i_3 and ฯ_I=(ฯ_i:iโ I).
We can write Q_n=1/n(n-1)(n-2)โ_Iq_n(ฯ_I).
Define q_n^*(ฯ_i)=1/(n-1)(n-2)โ_I:iโ I๐ผ[q_n(ฯ_I)|ฯ_i]
and Q_n^*=1/nโ_i=1^nq_n^*(ฯ_i)-2๐ผ[q_n(ฯ_I)].
Observe that ๐ผ[q_n^*(ฯ_i)]=3๐ผ[q_n(ฯ_I)]
and thus ๐ผ[Q_n-Q_n^*]=0.[The sum over I with iโ I consists of 3(n-1)(n-2) terms.]
By construction, we obtain Cov(Q_n,Q_n^*)=n^-1โ_i=1^nCov(Q_n,q_n^*(ฯ_i))
and for each i,
Cov(Q_n,q_n^*(ฯ_i)) = 1/n(n-1)(n-2)โ_I:iโ ICov(q_n(ฯ_I),q_n^*(ฯ_i))
= 1/n(n-1)(n-2)โ_I:iโ ICov(๐ผ[q_n(ฯ_I)|ฯ_i],q_n^*(ฯ_i))
= 1/nVar(q_n^*(ฯ_i)),
where the first equality holds because for iโ I we have Cov(q_n(ฯ_I),q_n^*(ฯ_i))=0,
and the second equality holds by iterated expectations. It then follows
that Cov(Q_n,Q_n^*)=n^-2โ_i=1^nVar(q_n^*(ฯ_i))=Var(Q_n^*)
and hence ๐ผ[(Q_n-Q_n^*)^2]=Var(Q_n)-Var(Q_n^*).
By Markov inequality, we obtain Q_n=Q_n^*+o_p(n^-1/2)
if Var(Q_n)-Var(Q_n^*)=o(n^-1).
To show the last result, note that q_n(ฯ_I) and q_n(ฯ_J)
are independent for disjoint I and J. Therefore,
Var(Q_n) = 1/(n(n-1)(n-2))^2โ_(I,J):|Iโฉ J|=1Cov(q_n(ฯ_I),q_n(ฯ_J))
+1/(n(n-1)(n-2))^2โ_(I,J):|Iโฉ J|>1Cov(q_n(ฯ_I),q_n(ฯ_J)).
For comparison, because Var(Q_n^*)=n^-2โ_i=1^nVar(q_n^*(ฯ_i))
we can write
Var(Q_n^*)
= 1/(n(n-1)(n-2))^2โ_i=1^nโ_(I,J):{i}=Iโฉ JCov(๐ผ[q_n(ฯ_I)|ฯ_i],๐ผ[q_n(ฯ_J)|ฯ_i])
+1/(n(n-1)(n-2))^2โ_i=1^nโ_(I,J):{i}โ Iโฉ JCov(๐ผ[q_n(ฯ_I)|ฯ_i],๐ผ[q_n(ฯ_J)|ฯ_i]).
For I and J such that {i}=Iโฉ J, q_n(ฯ_I)
and q_n(ฯ_J) are independent conditional on ฯ_i
and thus Cov(q_n(ฯ_I),q_n(ฯ_J))=Cov(๐ผ[q_n(ฯ_I)|ฯ_i],๐ผ[q_n(ฯ_J)|ฯ_i]).
This implies that the first sum in Var(Q_n) is equal to
the first sum in Var(Q_n^*). Moreover, the second sums
in both Var(Q_n) and Var(Q_n^*) consist
of O(n^4) terms. For any I and J, by Assumptions <ref>(ii)(iii),
<ref>(ii)(iii), <ref>(i), <ref>(i),
and <ref>(i), we can bound both Cov(q_n(ฯ_I),q_n(ฯ_J))
and Cov(๐ผ[q_n(ฯ_I)|ฯ_i],๐ผ[q_n(ฯ_J)|ฯ_i])
uniformly by O(ฮถ_1n^-2ฮถ_2n^-4). Therefore, the
second sums in Var(Q_n) and Var(Q_n^*) are
both (n(n-1)(n-2))^-2ยท O(n^4)ยท O(ฮถ_1n^-2ฮถ_2n^-4)=O(n^-2ฮถ_1n^-2ฮถ_2n^-4)=o(n^-1)
by Assumption <ref>(iii). Combining the results yields
Var(Q_n)-Var(Q_n^*)=o(n^-1) and thus Q_n=Q_n^*+o_p(n^-1/2).
Now we calculate the influence function of Q_n^*. Because ฯ_i=(ฯ_i^v,g_i)
are i.i.d., to calculate q_n^*(ฯ_i) it is sufficient
to consider ๐ผ[q_n(ฯ_I)|ฯ_i] for the three
cases where i appears as the first, second, or third element of
I. Fix iโ jโ k. We start with the case where i appears
as the third element (i.e., ๐ผ[q_n(ฯ_j,ฯ_k,ฯ_i)|ฯ_i]).
Write q_n(ฯ_j,ฯ_k,ฯ_i)=q_gn(ฯ_j,ฯ_k,ฯ_i)-q_1n(ฯ_j,ฯ_k,ฯ_i).
Note that conditional on ฯ_i, ฯ_ig^v and ฯ_h|g,i=ฯ_h|g(ฯ_ig^v)
are constants. By the change of variables t=ฮถ_2n^-1(ฯ_h|g,j-ฯ_h|1,k)
and using ฯ_k1^v=ฯ_h|1^-1(ฯ_h|1,k), we obtain
๐ผ[q_gn(ฯ_j,ฯ_k,ฯ_i)|ฯ_i,ฯ_j] = 1/ฮถ_1nK_1(ฯ_jg^v-ฯ_ig^v/ฮถ_1n)1{g_i=h}-ฯ_h|g,j/b_g(ฯ_jg^v)
ยทโซ1/ฮถ_2nK'_2(t)(ฯ_h|1^-1(ฯ_h|g,j-tฮถ_2n)-ฯ_jg^v)f_ฯ_h|1(ฯ_h|g,j-tฮถ_2n)dt.
Moreover, by Assumptions <ref>(i)(iii), <ref>(ii)(iii),
<ref>(i), and <ref>(i),[For any gโ๐ข, by definition ฯ_h|g,i=ฯ_h|g(ฯ_ig^v)
and thus f_ฯ_h|g(ฯ_h|g,i)=b_g(ฯ_h|g^-1(ฯ_h|g,i))|(ฯ_h|g^-1)'(ฯ_h|g,i)|=b_g(ฯ_ig^v)|(ฯ_h|g^-1)'(ฯ_h|g,i)|.
By the chain rule, the (s_2+2)th order continuous differentiability
of ฯ_h|g(ฯ) (Assumption <ref>(iii)) implies
that ฯ_h|g^-1(ฯ) is (s_2+2)th continuously differentiable.
Because b_g is (s_2+1)th continuously differentiable (Assumption
<ref>(ii)), we derive that f_ฯ_h|g is (s_2+1)th
continuously differentiable. The boundedness of ฯ_ig^v (Assumptions
<ref>(i) and <ref>(i)) then implies that the
(s_2+1)th derivative of f_ฯ_h|g is bounded.] we obtain
โซ1/ฮถ_2nK'_2(t)(ฯ_h|1^-1(ฯ_h|g,j-tฮถ_2n)-ฯ_jg^v)f_ฯ_h|1(ฯ_h|g,j-tฮถ_2n)dt
= โซ K_2(t)โ((ฯ_jg^v-ฯ_h|1^-1(ฯ_h|g,j-tฮถ_2n))f_ฯ_h|1(ฯ_h|g,j-tฮถ_2n))/โฯ_h|g,jdt
= โ((ฯ_jg^v-ฯ_h|1^-1(ฯ_h|g,j))f_ฯ_h|1(ฯ_h|g,j))/โฯ_h|g,j+r_gn(ฯ_jg^v),
with max_i,j|r_gn(ฯ_jg^v)|โค Cฮถ_2n^s_2
and hence
๐ผ[q_gn(ฯ_j,ฯ_k,ฯ_i)|ฯ_i]
= ๐ผ[๐ผ[q_gn(ฯ_j,ฯ_k,ฯ_i)|ฯ_i,ฯ_j]|ฯ_i]
= โซ K_1(t)(1{g_i=h}-ฯ_h|g(ฯ_ig^v+tฮถ_1n))
ยท(โ(ฯ_ig^v+tฮถ_1n-ฯ_h|1^-1(ฯ_h|g(ฯ_ig^v+tฮถ_1n)))f_ฯ_h|1(ฯ_h|g(ฯ_ig^v+tฮถ_1n))/โฯ_h|g((ฯ_ig^v+tฮถ_1n))+r_gn(ฯ_ig^v+tฮถ_1n))dt
= (1{g_i=h}-ฯ_h|g,i)โ(ฯ_ig^v-ฯ_h|1^-1(ฯ_h|g,i))f_ฯ_h|1(ฯ_h|g,i)/โฯ_h|g,i+r_gn,i,
with max_i|r_gn,i|โค C(ฮถ_1n^s_1+ฮถ_2n^s_2).
The second equality follows from the change of variables t=ฮถ_1n^-1(ฯ_jg^v-ฯ_ig^v),[Recall that b_g(ฯ_jg^v) is the density of ฯ_jg^v,
so the two cancel out.] and the last equality follows from Taylor expansion and Assumptions
<ref>(i), <ref>(ii)(iii), <ref>(i),
and <ref>(i). The form of ๐ผ[q_1n(ฯ_j,ฯ_k,ฯ_i)|ฯ_i]
can be derived similarly. Overall, we obtain ๐ผ[q_n(ฯ_j,ฯ_k,ฯ_i)|ฯ_i]=ฯ_g,h^ฯ(ฯ_i)+r_n,i,
where
ฯ_g,h^ฯ(ฯ_i) = (1{g_i=h}-ฯ_h|g,i)โ(ฯ_ig^v-ฯ_h|1^-1(ฯ_h|g,i))f_ฯ_h|1(ฯ_h|g,i)/โฯ_h|g,i
-(1{g_i=h}-ฯ_h|1,i)โ(ฯ_i1^v-ฯ_h|g^-1(ฯ_h|1,i))f_ฯ_h|g(ฯ_h|1,i)/โฯ_h|g,i,
and r_n,i satisfies max_i|r_n,i|โค C(ฮถ_1n^s_1+ฮถ_2n^s_2).
Using similar arguments, we can show that both max_i|๐ผ[q_n(ฯ_i,ฯ_j,ฯ_k)|ฯ_i]|
and max_i|๐ผ[q_n(ฯ_j,ฯ_i,ฯ_k)|ฯ_i]|
are bounded by Cฮถ_1n^s_1. We obtain q_n^*(ฯ_i)=๐ผ[q_n(ฯ_j,ฯ_k,ฯ_i)|ฯ_i]+๐ผ[q_n(ฯ_i,ฯ_j,ฯ_k)|ฯ_i]+๐ผ[q_n(ฯ_j,ฯ_i,ฯ_k)|ฯ_i]=ฯ_g,h^ฯ(ฯ_i)+r_n,i^*,
where max_i|r_n,i^*|โค C(ฮถ_1n^s_1+ฮถ_2n^s_2).
Note that ๐ผ[1{g_i=h}-ฯ_h|g,i|ฯ_ig^v]=0,
so ๐ผ[ฯ_g,h^ฯ(ฯ_i)]=0. This together
with max_i|r_n,i^*|โค C(ฮถ_1n^s_1+ฮถ_2n^s_2)
and Assumption <ref>(iv)(v) yields ๐ผ[q_n^*(ฯ_i)]=o(n^-1/2).
It follows that T_1n^ฯ=Q_n^*+o_p(n^-1/2)=1/nโ_i=1^nฯ_g,h^ฯ(ฯ_i)+o_p(n^-1/2).
Step 2: now we examine T_1n^ฮด in equation (<ref>).
Under Assumption <ref>(ii) ฯฬ_h is twice differentiable
in ฮด^v, so by applying the Taylor expansion we obtain h_gn(ฯฬ_i^v,ฯฬ_j^v;ฯฬ_h(ฮดฬ^v))-h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))=โ h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))/โฮด^v'(ฮดฬ^v-ฮด_0^v)+O_p(ฮดฬ^v-ฮด_0^v^2).
Because ฮดฬ^v-ฮด_0^v=n^-1โ_i=1^nฯ^ฮด^v(z_i)+o_p(n^-1/2),
with ๐ผ[ฯ^ฮด^v(z_i)]=0 (Assumption <ref>(ii)),
we have
T_1n^ฮด = 1/nโ_k=1^n(1/n(n-1)โ_i=1^nโ_j=1,jโ i^nโ h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))/โฮด^v')ฯ^ฮด^v(z_k)+o_p(n^-1/2).
The double sum term is a V-statistic with a nested kernel estimator
ฯฬ_h(ฮด_0^v).
Observe that โ h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))/โฮด^v'=(โ h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))/โฮด_1^v',โฆ,โ h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))/โฮด_G^v'),
where
โ h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))/โฮด_1^v' = -1/ฮถ_2n^2K'_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)ฯฬ'_h|1(ฯ_j1^v)(ฯ_ig^v-ฯ_j1^v)z'_j-1/ฮถ_2nK_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)z'_j
โ h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))/โฮด_g^v' = 1/ฮถ_2n^2K'_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)ฯฬ'_h|g(ฯ_ig^v)(ฯ_ig^v-ฯ_j1^v)z'_i+1/ฮถ_2nK_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)z'_i,
and the remaining G-2 subvectors equal to zero. We focus on the
gth subvector, and the first subvector can be analyzed similarly.
Applying the mean-value theorem (with _n,ij an intermediate
value), we obtain the bound
ฮถ_2n^-1max_i,j|K_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)-K_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)| โค ฮถ_2n^-2max_i,j|K'_2(ฮ_n,ij)||ฯฬ_h|g,i-ฯ_h|g,i-(ฯฬ_h|1,j-ฯ_h|1,j)|
= ฮถ_2n^-2O_p((ln n)^1/2(nฮถ_1n)^-1/2+ฮถ_1n^s_1)=o_p(1)
by Assumptions <ref>(ii)(iii), <ref>(ii),
and <cit.>. Similarly, we have
ฮถ_2n^-2max_i,j|K'_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)ฯฬ'_h|g(ฯ_ig^v)-K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)ฯ'_h|g(ฯ_ig^v)|
โค ฮถ_2n^-2max_i,j|K'_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)|sup_ฯ|ฯฬ'_h|g(ฯ_ig^v)-ฯ'_h|g(ฯ_ig^v)|
+ฮถ_2n^-2max_i,j|K'_2(ฯฬ_h|g,i-ฯฬ_h|1,j/ฮถ_2n)-K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)|sup_ฯ|ฯ'_h|g(ฯ_ig^v)|
= ฮถ_2n^-2O_p((ln n)^1/2(nฮถ_1n^3)^-1/2+ฮถ_1n^s_1)+ฮถ_2n^-3O_p((ln n)^1/2(nฮถ_1n)^-1/2+ฮถ_1n^s_1)=o_p(1).
by Assumptions <ref>(ii)(iii), <ref>(i)(ii),
<ref>(ii), <ref>(i), <ref>(i),
<cit.> and <cit.>.
Therefore, by the boundedness of ฯ_ig^v and z_i (Assumptions
<ref>(i) and <ref>(i)), we have the approximation
โ1/n(n-1)โ_i=1^nโ_j=1,jโ i^n(โ h_gn(ฯ_i^v,ฯ_j^v;ฯฬ_h(ฮด_0^v))/โฮด_g^v'-โ h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))/โฮด_g^v')โ
โค o_p(1)1/n(n-1)โ_i=1^nโ_j=1,jโ i^n(||(ฯ_ig^v-ฯ_j1^v)z_i+||z_i)=o_p(1).
Following the law of large numbers for U-statistics <cit.>,
we show
1/n(n-1)โ_i=1^nโ_j=1,jโ i^nโ h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))/โฮด_g^v'=๐ผ[โ h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))/โฮด_g^v']+o_p(1).
Now we derive the form of ๐ผ[โ h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))/โฮด_g^v'].
Recall that
โ h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))/โฮด_g^v'=1/ฮถ_2n^2K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)ฯ'_h|g(ฯ_ig^v)(ฯ_ig^v-ฯ_j1^v)z'_i+1/ฮถ_2n^1K_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)z'_i.
By the change of variables t=ฮถ_2n^-1(ฯ_h|g,i-ฯ_h|1,j),
integration by parts, Taylor expansion, and Assumption <ref>(i),
<ref>(i)(iii), and <ref>(ii)(iii), <ref>(i),
and <ref>(i), we have
|โซฮถ_2n^-2K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)f_ฯ_h|1(ฯ_h|1,j)dฯ_h|1,j+f'_ฯ_h|1(ฯ_h|g,i))| โค Cฮถ_2n^s_2
|โซฮถ_2n^-2K'_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)ฯ_j1^vf_ฯ_h|1(ฯ_h|1,j)dฯ_h|1,j+(ฯ_h|1^-1(ฯ_h|g,i)f_ฯ_h|1(ฯ_h|g,i))')| โค Cฮถ_2n^s_2
|โซฮถ_2n^-1K_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)f_ฯ_h|1(ฯ_h|1,j)dฯ_h|1,j+f_ฯ_h|1(ฯ_h|g,i)| โค Cฮถ_2n^s_2.
Thus, from Assumption <ref>(v), ๐ผ[โ h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))/โฮด_g^v']=D_g,h^ฮด_g^v+o(n^-1/2),
where
D_g,h^ฮด_g^v=๐ผ[(((ฯ_h|1^-1(ฯ_h|g,i)f_ฯ_h|1(ฯ_h|g,i))'-f'_ฯ_h|1(ฯ_h|g,i)ฯ_ig^v)ฯ'_h|g(ฯ_ig^v)+f_ฯ_h|1(ฯ_h|g,i))z'_i].
Similarly we can derive ๐ผ[โ h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))/โฮด_1^v']=D_g,h^ฮด_1^v+o(n^-1/2),
where
D_g,h^ฮด_1^v=๐ผ[(((ฯ_h|g^-1(ฯ_h|1,i)f_ฯ_h|g(ฯ_h|1,i))'-f'_ฯ_h|g(ฯ_h|1,i)ฯ_i1^v)ฯ'_h|1(ฯ_i1^v)+f_ฯ_h|g(ฯ_h|1,i))z'_i].
Combining the results yields T_1n^ฮด=1/nโ_i=1^n(D_g,h^ฮด_g^vฯ^ฮด_g^v(z_i)-D_g,h^ฮด_1^vฯ^ฮด_1^v(z_i))+o_p(n^-1/2),
where ฯ^ฮด_1^v(z_i) and ฯ^ฮด_g^v(z_i)
are the influence functions for ฮดฬ_1^v and ฮดฬ_g^v,
respectively. Define
ฯ_g,h,1(z_i,g_i)=D_g,h^ฮด_g^vฯ^ฮด_g^v(z_i)-D_g,h^ฮด_1^vฯ^ฮด_1^v(z_i)+ฯ_g,h^ฯ(ฯ_i).
We thus have T_1n=1/nโ_i=1^nฯ_g,h,1(z_i,g_i)+o_p(n^-1/2).
Recall that T_2n is a V-statistic of degree 2 with an asymmetric
kernel function and ๐ผ[T_2n]=0. Similarly as in Lemma
<ref>, we use the idea of Hoeffding projection and derive
an asymptotically linear representation of T_2n. Let I=(i_1,i_2)
with i_1โ i_2 and ฯ_I^v=(ฯ_i^v:iโ I).
We can write T_2n=1/n(n-1)โ_I(h_gn(ฯ_I^v)-๐ผ[h_gn(ฯ_I^v)]).
Define h_gn^*(ฯ_i^v)=1/n-1โ_I:iโ I(๐ผ[h_gn(ฯ_I^v)|ฯ_i^v]-๐ผ[h_gn(ฯ_I^v)])
and T_2n^*=1/nโ_i=1^nh_gn^*(ฯ_i^v).
Observe that ๐ผ[T_2n^*]=0. Following Lemma <ref>,
we can show that Cov(T_2n,T_2n^*)=Var(T_2n^*)[This is because Cov(T_2n,T_2n^*)=1/nโ_i=1^nCov(T_2n,h_gn^*(ฯ_i^v))
and for each i, Cov(T_2n,h_gn^*(ฯ_i^v))=1/n(n-1)โ_I:iโ ICov(h_gn(ฯ_I^v),h_gn^*(ฯ_i^v))=1/nVar(h_gn^*(ฯ_i^v)).] and hence ๐ผ[(T_2n-T_2n^*)^2]=Var(T_2n)-Var(T_2n^*).
By Markov inequality, we obtain T_2n=T_2n^*+o_p(n^-1/2) if Var(T_2n)-Var(T_2n^*)=o(n^-1).
To show the last result, we express Var(T_2n) and Var(T_2n^*)
similarly as in equations (<ref>) and (<ref>).
Follow the argument that compares the two equations and note that
for any I and J, by Assumptions <ref>(ii)(iii), <ref>(i),
and <ref>(i), we can bound both Cov(h_gn(ฯ_I^v),h_gn(ฯ_J^v))
and Cov(๐ผ[h_gn(ฯ_I^v)|ฯ_i^v],๐ผ[h_gn(ฯ_J^v)|ฯ_i^v])
uniformly by O(ฮถ_2n^-2). Therefore, Var(T_2n)-Var(T_2n^*)=(n(n-1))^-2ยท O(n^2)ยท O(ฮถ_2n^-2)=O(n^-2ฮถ_2n^-2)=o(n^-1)
by Assumption <ref>(iii)[Assumption <ref>(iii) (i.e., n^1/2ฮถ_1nฮถ_2n^2โโ)
implies n^1/2ฮถ_2nโโ.] and then T_2n=T_2n^*+o_p(n^-1/2) follows.
Next we calculate the influence function of T_2n^*. Conditional
on ฯ_i^v, both ฯ_ig^v and ฯ_h|g,i=ฯ_h|g(ฯ_ig^v)
are constants. Under Assumption <ref>(i), we obtain
๐ผ[h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))|ฯ_i^v] = โซ1/ฮถ_2nK_2(ฯ_h|g,i-ฯ_h|1,j/ฮถ_2n)(ฯ_ig^v-ฯ_h|1^-1(ฯ_h|1,j))f_ฯ_h|1(ฯ_h|1,j)dฯ_h|1,j
= โซ K_2(t)(ฯ_ig^v-ฯ_h|1^-1(ฯ_h|g,i-tฮถ_2n))f_ฯ_h|1(ฯ_h|g,i-tฮถ_2n)dt
= (ฯ_ig^v-ฯ_h|1^-1(ฯ_h|g,i))f_ฯ_h|1(ฯ_h|g,i)+rฬ_n,i,
with max_i|rฬ_n,i|โค Cฮถ_2n^s_2. The second
equality follows from the change of variables, and the third equality
holds by Assumptions <ref>(i) and <ref>. Similarly,
we can derive |๐ผ[h_gn(ฯ_j^v,ฯ_i^v;ฯ_h(ฮด_0^v))|ฯ_i^v]-(ฯ_h|g^-1(ฯ_h|1,i)-ฯ_i1^v)f_ฯ_h|g(ฯ_h|1,i)|โค Cฮถ_2n^s_2.
By Assumption <ref>(v), O(ฮถ_2n^s_2)=o(n^-1/2).
All together, we obtain T_2n^*=1/nโ_i=1^nฯ_g,h,2(z_i)+o_p(n^-1/2),
where
ฯ_g,h,2(z_i)= (ฯ_ig^v-ฯ_h|1^-1(ฯ_h|g,i))f_ฯ_h|1(ฯ_h|g,i)+(ฯ_h|g^-1(ฯ_h|1,i)-ฯ_i1^v)f_ฯ_h|g(ฯ_h|1,i)
-๐ผ[(ฯ_ig^v-ฯ_h|1^-1(ฯ_h|g,i))f_ฯ_h|1(ฯ_h|g,i)+(ฯ_h|g^-1(ฯ_h|1,i)-ฯ_i1^v)f_ฯ_h|g(ฯ_h|1,i)].
The third term T_3n in equation (<ref>)
satisfies T_3n=o(n^-1/2).
Equation (<ref>) implies
๐ผ[h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))]=โซ(ฯ_h|g^-1(ฯ_h|g,i)-ฯ_h|1^-1(ฯ_h|g,i))f_ฯ_h|1(ฯ_h|g,i)f_ฯ_h|g(ฯ_h|g,i)dฯ_h|g,i+O(ฮถ_2n^s_2).
From the identification result in Section <ref>,
we can represent p_g^* as
p_g^*=๐ผ[1{ฯ_h|g,i=ฯ_h|1,j}(ฯ_ig^v-ฯ_j1^v)]=โซ[ฯ_h|g^-1(ฯ_h|g,i)-ฯ_h|1^-1(ฯ_h|g,i)]f_ฯ_h|g(ฯ_h|g,i)f_ฯ_h|1(ฯ_h|g,i)dฯ_h|g,i.
Hence, T_3n=๐ผ[h_gn(ฯ_i^v,ฯ_j^v;ฯ_h(ฮด_0^v))]-p_g^*=O(ฮถ_2n^s_2)=o(n^-1/2)
by Assumption <ref>(v).
ยง LEMMAS FOR THE ASYMPTOTICS OF ฮฬ
Notation
Let A=(a_ij)โโ^n^2 denote an nร n matrix
and x=(x_1,โฆ,x_n)'โโ^n denote an nร1
vector. Denote by A_โ=max_1โค iโค nโ_j=1^n|a_ij|
the maximum row sum norm and A_1=max_1โค jโค nโ_i=1^n|a_ij|
the maximum column sum norm. Note that A'_โ= A_1
and A'_1= A_โ.
Denote by ยท_โ and ยท_1 the l_โ
and l_1 norms. That is, A_โ=max_1โค i,jโค n|a_ij|,
A_1=โ_i,j=1^n|a_ij|, x_โ=max_1โค iโค n|x_i|,
and x_1=โ_i=1^n|x_i|. For nร n matrices
A and B and nร1 vectors x and y, we can show AB_โโค A_โB_โ,
AB_1โค A_1B_1, Ax_โโค A_โx_โ,
and Ax_1โค A_1x_1. Moreover,
|x'Ay|โค A_โx_โy_1โค n A_โx_โy_โ.[These results can be found in Horn and Johnson (1985, Section 5.6)
or proved similarly.] Let 0<C<โ denote a universal constant. For simplicity, we
write ฯ_i^s as ฯ_i.
ยง.ยง Consistency of ฮณฬ
Because y_i-X'_iฮณ=y_i-X'_iฮณ_0-X'_i(ฮณ-ฮณ_0)=ฯต_i-X'_i(ฮณ-ฮณ_0),
we have
mฬ_n(ฮณ,ฮผฬ^Z)-m_0(ฮณ,ฮผ_0^Z)
= 1/nโ_i=1^n(Z_i-ฮผฬ^Z(ฯฬ_i))(y_i-X'_iฮณ)-๐ผ[(Z_i-ฮผ_0^Z(ฯ_i))(y_i-X'_iฮณ)]
= -1/nโ_i=1^n(ฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))+1/nโ_i=1^n(ฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))X'_i(ฮณ-ฮณ_0)
+1/nโ_i=1^n((Z_i-ฮผ_0^Z(ฯ_i))ฯต_i-๐ผ[(Z_i-ฮผ_0^Z(ฯ_i))ฯต_i])
-1/nโ_i=1^n((Z_i-ฮผ_0^Z(ฯ_i))X'_i-๐ผ[(Z_i-ฮผ_0^Z(ฯ_i))X'_i])(ฮณ-ฮณ_0).
In the last expression, the first two average terms are o_p(1)
by Lemma <ref>, and the last two average terms
are o_p(1) by Lemma <ref>. By the compactness
of ฮ (Assumption <ref>(ii)) we have sup_ฮณโฮmฬ_n(ฮณ,ฮผฬ^Z)-m_0(ฮณ,ฮผ_0^Z)=o_p(1).
For t_i=(X'_i,ฯต_i)', we
have
Recall that X_i=(w_iy,w_ix,x'_i)'.
Because t_iโโ^2d_x+2 is finite dimensional, we
can prove equation (<ref>) for each component of t_i
separately. For r=1,โฆ,2d_x+2, let t_ir denote the rth
component of t_i, and t_r=(t_1r,โฆ,t_nr)'.
By construction, we have ฮผฬ^Z(ฯฬ_i)=ฮฒฬ^Z(ฯฬ)'b^K(ฯฬ_i)โโ^d_Z
and ฮผฬ^Z(ฯ_i)=ฮฒฬ^Z(ฯ)'b^K(ฯ_i)โโ^d_Z,
where ฮฒฬ^Z(ฯฬ)=(Bฬ_K'Bฬ_K)^-1Bฬ_K'Z
and ฮฒฬ^Z(ฯ)=(B_K'B_K)^-1B_K'Z,
with Bฬ_K=B_K(ฯฬ) and B_K=B_K(ฯ).
Denote ฮผ_0^Z=(ฮผ_0^Z(ฯ_1),โฆ,ฮผ_0^Z(ฯ_n))'.
The left-hand side of equation (<ref>) for component t_ir
satisfies
1/nโ_i=1^n(ฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))t_ir^2 = n^-2t'_r(Bฬ_Kฮฒฬ^Z(ฯฬ)-ฮผ_0^Z)(Bฬ_Kฮฒฬ^Z(ฯฬ)-ฮผ_0^Z)'t_r
โค n^-2Bฬ_Kฮฒฬ^Z(ฯฬ)-ฮผ_0^Z^2t'_rt_r,
where the inequality follows because we can bound the largest eigenvalue
of the matrix (Bฬ_Kฮฒฬ^Z(ฯฬ)-ฮผ_0^Z)(Bฬ_Kฮฒฬ^Z(ฯฬ)-ฮผ_0^Z)'
by Bฬ_Kฮฒฬ^Z(ฯฬ)-ฮผ_0^Z^2.
For t_ir that represents a component of w_ix
or x_i, we have max_i|t_ir|<โ (Assumptions <ref>(ii),
<ref>(i)). Therefore, n^-1t'_rt_r=n^-1โ_i=1^nt_ir^2โคmax_it_ir^2<โ.
For t_ir=ฯต_i, because ฯต_i is i.i.d., by
the law of large numbers and Assumption <ref>(i) n^-1t'_rt_r=n^-1โ_i=1^nฯต_i^2=๐ผ[ฯต_i^2]+o_p(1)=O_p(1).
For t_ir=w_iy, we have n^-1(wy)'wy=O_p(1)
by Lemma <ref>. We conclude that n^-1t'_rt_r=O_p(1).
By the triangle inequality and (a+b+c)^2โค3(a^2+b^2+c^2),
n^-1Bฬ_Kฮฒฬ^Z(ฯฬ)-ฮผ_0^Z^2
โค n^-1((Bฬ_K-B_K)ฮฒฬ^Z(ฯฬ)+B_K(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)+B_Kฮฒ^Z-ฮผ_0^Z)^2
โค 3n^-1(Bฬ_K-B_K^2ฮฒฬ^Z(ฯฬ)^2+B_K(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)^2+B_Kฮฒ^Z-ฮผ_0^Z^2).
It suffices to show that the last three terms are o_p(1).
By equation (<ref>), n^-1Bฬ_K-B_K^2=O_p(ฯฑ_1(K)^2/n).
Moreover,
ฮฒฬ^Z(ฯฬ)^2 =tr(Z'Bฬ_K(Bฬ'_KBฬ_K)^-2Bฬ'_KZ)
โค O_p(n^-1)tr(Z'Bฬ_K(Bฬ'_KBฬ_K)^-1Bฬ'_KZ)
โค O_p(n^-1)tr(Z'Z)=O_p(1).
The first inequality follows from Lemmas <ref> and <ref>.[By Lemmas <ref> and <ref>, the smallest eigenvalue
of Qฬ_K=Bฬ_K'Bฬ_K/n converges to one in probability
and hence (Bฬ_K'Bฬ_K/n)^-1โค CI_K with probability
approaching one.] The second inequality follows because Bฬ_K(Bฬ'_KBฬ_K)^-1Bฬ'_K
is idempotent and thus Bฬ_K(Bฬ'_KBฬ_K)^-1Bฬ'_Kโค I_K.
The last equality holds because n^-1tr(Z'Z)=n^-1โ_i=1^nZ_i^2<โ
by the boundedness of Z_i. We conclude that n^-1Bฬ_K-B_K^2ฮฒฬ^Z(ฯฬ)^2=o_p(1).
Observe that n^-1B_K(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)^2=n^-1B_K(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)^2=n^-1tr((ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)'B'_KB_K(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z))โค O_p(1)ฮฒฬ^Z(ฯฬ)-ฮฒ^Z^2,
where the inequality holds because by Lemma <ref> B'_KB_K/nโค CI_K
with probability approaching one. By the triangle inequality, ฮฒฬ^Z(ฯฬ)-ฮฒ^Zโคฮฒฬ^Z(ฯฬ)-ฮฒฬ^Z(ฯ)+ฮฒฬ^Z(ฯ)-ฮฒ^Z.
Lemma <ref> shows that ฮฒฬ^Z(ฯฬ)-ฮฒฬ^Z(ฯ)=O_p(ฯฑ_1(K)/โ(n)).
Moreover, by Lemma 15.3 in <cit.> for x_i and
z_i in Z_i and Lemma <ref> for w_ix
in Z_i, we have ฮฒฬ^Z(ฯ)-ฮฒ^Z=o_p(1).
Combining these results yields n^-1B_K(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)^2=o_p(1).
Finally, n^-1B_Kฮฒ^Z-ฮผ_0^Z^2=n^-1โ_i=1^nฮฒ^Z'b^K(ฯ_i)-ฮผ_0^Z(ฯ_i)^2โคsup_ฯฮฒ^Z'b^K(ฯ)-ฮผ_0^Z(ฯ)^2=O(K^-2a)
by Assumption <ref>(ii).
For t_i=(X'_i,ฯต_i)', we have
1/nโ_i=1^n((Z_i-ฮผ_0^Z(ฯ_i))t'_i-๐ผ[(Z_i-ฮผ_0^Z(ฯ_i))t'_i])=o_p(1).
Recall that Z_i=(w_ix,x'_i,z'_i)' and X_i=(w_iy,w_ix,x'_i)'.
Because both Z_i and t_i are finite dimensional, we can
prove equation (<ref>) component by component. For simplicity,
we assume that both z_i and x_i are scalars. Depending on
which components of Z_i and t_i under consideration, we
divide the proof into six cases.
Case (a): Since (x_i,z_i,ฯต_i) are i.i.d., the result
follows by the law of large numbers.
Case (b): Take x_i in t_i as an example and ฯต_i
in t_i can be proved similarly.
n^-1โ_i=1^n((w_ix-ฮผ_0^w_ix(ฯ_i))x_i-๐ผ[(w_ix-ฮผ_0^w_ix(ฯ_i))x_i])
= n^-1โ_i=1^n(w_ixx_i-๐ผ[w_ixx_i])-n^-1โ_i=1^n(ฮผ_0^w_ix(ฯ_i)x_i-๐ผ[ฮผ_0^w_ix(ฯ_i)x_i]).
Because ฮผ_0^w_ix(ฯ_i)x_i is independent
across i, the second term on the right-hand side is o_p(1)
by the law of large numbers. Write the first term on the right-hand
side as n^-1(x'wx-๐ผ[x'wx]).
Applying Lemma <ref> with a=b=x
and q=w yields n^-1(x'wx-๐ผ[x'wx])=o_p(1).
Equation (<ref>) thus holds for case (b).
Case (c): Without loss of generality we take z_i in Z_i
as an example. Denote zฬ_i=z_i-ฮผ_0^z(ฯ_i)
and zฬ=(zฬ_1,โฆ,zฬ_n)'.
Write n^-1โ_i=1^n((z_i-ฮผ_0^z(ฯ_i))w_ix-๐ผ[(z_i-ฮผ_0^z(ฯ_i))w_ix])=n^-1(zฬ'wx-๐ผ[zฬ'wx]).
Applying Lemma <ref> with a=zฬ,
b=x, and q=w,
we prove equation (<ref>) for case (c).
Case (d): Write
n^-1โ_i=1^n((w_ix-ฮผ_0^w_ix(ฯ_i))w_ix-๐ผ[(w_ix-ฮผ_0^w_ix(ฯ_i))w_ix])
= n^-1โ_i=1^n(w_ixw_ix-๐ผ[w_ixw_ix])-n^-1โ_i=1^n(ฮผ_0^w_ix(ฯ_i)w_ix-๐ผ[ฮผ_0^w_ix(ฯ_i)w_ix])
= n^-1(x'w'wx-๐ผ[x'w'wx])-n^-1(ฮผ_0^wx(ฯ)'wx-๐ผ[ฮผ_0^wx(ฯ)'wx]),
where ฮผ_0^wx(ฯ)=(ฮผ_0^w_1x(ฯ_1),โฆ,ฮผ_0^w_nx(ฯ_n))'.
Applying Lemma <ref> twice, one with a=b=x
and q=w'w, and the other
with a=ฮผ_0^wx(ฯ),
b=x and q=w,
we obtain that the last two terms are both o_p(1). Equation (<ref>)
holds for case (d).
Case (e): Recall that y=s(wxฮณ_2+xฮณ_3+ฯต),
where s=(I_n-ฮณ_1w)^-1, and
ฮป=ฮป(ฯ) is
short for ฮป^s(ฯ^s). Take
z_i in Z_i as an example. Write
n^-1โ_i=1^n((z_i-ฮผ_0^z(ฯ_i))w_iy-๐ผ[(z_i-ฮผ_0^z(ฯ_i))w_iy])
= n^-1(zฬ'wswx-๐ผ[zฬ'wswx])ฮณ_2+n^-1(zฬ'wsx-๐ผ[zฬ'wsx])ฮณ_3
+n^-1(zฬ'wsฯต-๐ผ[zฬ'wsฯต]).
Applying Lemma <ref> to each term in the last line
proves equation (<ref>).
Case (f): Write
n^-1โ_i=1^n((w_ix-ฮผ_0^w_ix(ฯ_i))w_iy-๐ผ[(w_ix-ฮผ_0^w_ix(ฯ_i))w_iy])
= n^-1โ_i=1^n(w_ixw_iy-๐ผ[w_ixw_iy])-n^-1โ_i=1^n(ฮผ_0^w_ix(ฯ_i)w_iy-๐ผ[ฮผ_0^w_ix(ฯ_i)w_iy])
= n^-1(x'w'wswx-๐ผ[x'w'wswx])ฮณ_2+n^-1(x'w'wsx-๐ผ[x'w'wsx])ฮณ_3
+n^-1(x'w'wsฯต-๐ผ[x'w'wsฯต])-n^-1(ฮผ_0^wx(ฯ)'wswx-๐ผ[ฮผ_0^wx(ฯ)'wswx])ฮณ_2
-n^-1(ฮผ_0^wx(ฯ)'wsx-๐ผ[ฮผ_0^wx(ฯ)'wsx])ฮณ_3-n^-1(ฮผ_0^wx(ฯ)'wsฯต-๐ผ[ฮผ_0^wx(ฯ)'wsฯต]).
Applying Lemma <ref> to each term in the last line
proves equation (<ref>).
n^-1(wy)'wy=O_p(1).
Let T=w^2xฮณ_2+wxฮณ_3
be an nร1 vector, and recall that wy=s(w^2xฮณ_2+wxฮณ_3+wฯต)=s(T+wฯต).
We can write
n^-1(wy)'wy = n^-1(T+wฯต)'s's(T+wฯต)
= n^-1T^'s'sT+2n^-1T^'s'swฯต+n^-1ฯต'w's'swฯต.
We show that w and s are uniformly
bounded in both row and column sums.[See <cit.> for similar results.]
In fact, under Assumption <ref>(i), we have w_โ=1
and thus s_โโคโ_r=0^โ|ฮณ_1|^rw_โ^r<โ.
For any rโฅ1, we can bound w^r_โโคw_โ^r-1w_โ=w_โ.
Therefore, w^r_1=max_jโ๐ฉโ_i=1^n|(w^r)_ij|โค nw^r_โโค nw_โ
and thus s_1โคโ_r=0^โ|ฮณ_1|^rw^r_1โคโ_r=0^โ|ฮณ_1|^rnw_โ=O_p(1).
By the boundedness of x_i and ฮณ, we can bound T_โโคw_โ^2xฮณ_2_โ+w_โxฮณ_3_โ<โ.
Therefore, the first term in the last line of (<ref>) is
n^-1T^'s'sTโคs's_โT_โ^2โคs_1s_โT_โ^2=O_p(1).
The second to last term in equation (<ref>) satisfies n^-1|T^'s'swฯต|โคs'sw_โT_โฯต/n_1=O_p(1),
because ฯต/n_1=n^-1โ|ฯต_i|=๐ผ[|ฯต_i|]+o_p(1)=O_p(1)
by the law of large numbers and Assumption <ref>(i) and
s'sw_โโคs_1s_โw_โ=O_p(1).
Finally, the last term in (<ref>) satisfies n^-1ฯต'w's'swฯตโค n^-1ฮป_max(w's'sw)ฯต'ฯต=O_p(1),
because n^-1ฯต'ฯต=n^-1โ_i=1^nฯต_i^2=๐ผ[ฯต_i^2]+o_p(1)=O_p(1)
and ฮป_max(w's'sw)โคw's'sw_โโคw_1s_1s_โw_โ=O_p(1).
Combining the three terms, we complete the proof.
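The row-sum bounds used in this proof are easy to verify numerically. The sketch below (illustrative only) compares the maximum row-sum norm of s=(I_n-ฮณ_1w)^-1 with the geometric-series bound 1/(1-|ฮณ_1|) implied by a row-normalized w.

```python
import numpy as np

def neumann_bound_check(w, gamma1):
    """Max row-sum norm of (I - gamma1*w)^{-1} versus the geometric-series bound.

    Assumes the rows of w sum to one and |gamma1| < 1.
    """
    n = w.shape[0]
    s = np.linalg.inv(np.eye(n) - gamma1 * w)
    row_norm = np.abs(s).sum(axis=1).max()   # max row-sum norm of s
    bound = 1.0 / (1.0 - abs(gamma1))        # sum_r |gamma1|^r times (row norm of w)^r
    return row_norm, bound

# Illustrative usage with a random row-normalized adjacency matrix.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(10, 10)).astype(float)
np.fill_diagonal(a, 0.0)
a[a.sum(axis=1) == 0, 0] = 1.0               # guard against empty rows before normalizing
w = a / a.sum(axis=1, keepdims=True)
print(neumann_bound_check(w, 0.4))           # the first value never exceeds the second
```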
The result follows from Lemma 15.2 in <cit.>.
Let Qฬ_K=Bฬ_K'Bฬ_K/n.
Then Qฬ_K-Q_K=O_p(ฯฑ_1(K)/โ(n)).
Because Bฬ_K'Bฬ_K-B_K'B_K=(Bฬ_K-B_K)^2+B_K'(Bฬ_K-B_K)+(Bฬ_K-B_K)'B_K,
we have Qฬ_K-Q_K=Bฬ_K'Bฬ_K-B_K'B_K/nโคBฬ_K-B_K^2/n+2(Bฬ_K-B_K)'B_K/n.
The โ(n)-consistency of ฮธฬ and boundedness of
z (Assumptions <ref>(i) and <ref>(ii))
imply that max_iฯฬ_i-ฯ_i=O_p(n^-1/2).
Therefore,
Bฬ_K-B_K=(โ_i=1^nb^K(ฯฬ_i)-b^K(ฯ_i)^2)^1/2โค n^1/2ฯฑ_1(K)max_iฯฬ_i-ฯ_i=O_p(ฯฑ_1(K)),
by the mean-value theorem and Assumption <ref>(iv). Moreover,
(Bฬ_K-B_K)'B_K/n = tr((Bฬ_K-B_K)'B_KB_K'(Bฬ_K-B_K))^1/2/n
โค O_p(1)tr((Bฬ_K-B_K)'B_K(B_K'B_K)^-1B_K'(Bฬ_K-B_K))^1/2/โ(n)
โค O_p(1)Bฬ_K-B_K/โ(n)=O_p(ฯฑ_1(K)/โ(n)).
The first inequality above holds because by Lemma <ref> I_Kโค C(B'_KB_K/n)^-1
with probability approaching one. The second inequality follows by
B_K(B_K'B_K)^-1B_K' idempotent. The last equality follows
from equation (<ref>). We conclude that Qฬ_K-Q_Kโค O_p(ฯฑ_1(K)^2/n)+O_p(ฯฑ_1(K)/โ(n))=O_p(ฯฑ_1(K)/โ(n)).
ฮฒฬ^Z(ฯฬ)-ฮฒฬ^Z(ฯ)=O_p(ฯฑ_1(K)/โ(n)).
Recall that ฮฒฬ^Z(ฯฬ)=Qฬ_K^-1Bฬ'_KZ/n
and ฮฒฬ^Z(ฯ)=Q_K^-1B'_KZ/n.
We have
ฮฒฬ^Z(ฯฬ)-ฮฒฬ^Z(ฯ) = tr(Z'(Qฬ_K^-1Bฬ'_K-Q_K^-1B'_K)'(Qฬ_K^-1Bฬ'_K-Q_K^-1B'_K)Z/n^2)^1/2
โค (Qฬ_K^-1Bฬ'_K-Q_K^-1B'_K)/โ(n)tr(Z'Z/n)^1/2.
By the boundedness of Z_i, tr(Z'Z/n)=n^-1โ_i=1^nZ_i^2<โ.
Moreover, (Qฬ_K^-1Bฬ'_K-Q_K^-1B'_K)/โ(n)โค(Qฬ_K^-1-Q_K^-1)Bฬ'_K/โ(n)+Q_K^-1(Bฬ_K-B_K)'/โ(n).
Observe
(Qฬ_K^-1-Q_K^-1)Bฬ'_K/โ(n) = tr((Qฬ_K^-1-Q_K^-1)Bฬ'_KBฬ_K(Qฬ_K^-1-Q_K^-1)/n)^1/2
= tr(Q_K^-1(Q_K-Qฬ_K)Qฬ_K^-1(Q_K-Qฬ_K)Q_K^-1)^1/2
โค O_p(1)tr((Q_K-Qฬ_K)Q_K^-2(Q_K-Qฬ_K))^1/2
โค O_p(1)Q_K-Qฬ_K=O_p(ฯฑ_1(K)/โ(n)),
where the inequalities follow from Lemmas <ref> and <ref>.[By Lemmas <ref> and <ref>, the smallest eigenvalue
of Qฬ_K converges to one in probability and hence the largest
eigenvalue of Qฬ_K^-1 is bounded with probability approaching
one. Similarly, by Lemma <ref>, the largest eigenvalue of
Q_K^-2 is bounded with probability approaching one.] The last equality follows from Lemma <ref>. As for the
second term, by equation (<ref>), we have Q_K^-1(Bฬ_K-B_K)'/โ(n)=tr((Bฬ_K-B_K)Q_K^-2(Bฬ_K-B_K)'/n)^1/2โค O_p(1)(Bฬ_K-B_K)/โ(n)=O_p(ฯฑ_1(K)/โ(n)).
ฮฒฬ^wx(ฯ)-ฮฒ^wx=o_p(1).
We can write ฮฒฬ^wx(ฯ)-ฮฒ^wx=Q_K^-1B'_K(wx-B_Kฮฒ^wx)/n.
Observe that
ฮฒฬ^wx(ฯ)-ฮฒ^wx = tr((wx-B_Kฮฒ^wx)'B_KQ_K^-2B'_K(wx-B_Kฮฒ^wx)/n^2)^1/2
โค O_p(1)B'_K(wx-B_Kฮฒ^wx)/n,
where we used that the largest eigenvalue of Q_K^-2 is bounded
with probability approaching one. Write wx-B_Kฮฒ^wx=(wx-ฮผ_0^wx)+(ฮผ_0^wx-B_Kฮฒ^wx).
We derive
B'_K(ฮผ_0^wx-B_Kฮฒ^wx)/n
= tr((ฮผ_0^wx-B_Kฮฒ^wx)'B_KB'_K(ฮผ_0^wx-B_Kฮฒ^wx)/n^2)^1/2
โค O_p(1)tr((ฮผ_0^wx-B_Kฮฒ^wx)'B_K(B'_KB_K)^-1B'_K(ฮผ_0^wx-B_Kฮฒ^wx)/n)^1/2
โค O_p(1)(ฮผ_0^wx-B_Kฮฒ^wx)/โ(n)=O_p(K^-a).
The first inequality holds by Lemma <ref>.[By Lemma <ref>, the largest eigenvalue of Q_K=B'_KB_K/n
converges to one in probability and hence CI_Kโค(B'_KB_K/n)^-1
with probability approaching one.] The second inequality follows by B_K(B'_KB_K)^-1B'_K
idempotent, and the last equality follows by Assumption <ref>(ii).[Under Assumption <ref>(ii), we have (ฮผ_0^wx-B_Kฮฒ^wx)/โ(n)=(n^-1โ_i=1^nฮผ_0^wx(ฯ_i)-ฮฒ^wx'b^K(ฯ_i)^2)^1/2โคsup_ฯฮผ_0^wx(ฯ)-ฮฒ^wx'b^K(ฯ)=O(K^-a).]
If we can show B'_K(wx-ฮผ_0^wx)/n=o_p(1),
then combining the results completes the proof. Because x_i is
finite dimensional, we can prove the equation for each component of
x_i separately. For simplicity, assume that x_i is a scalar.
Write B'_K(wx-ฮผ_0^wx)/n=n^-1โ_iโ_jb^K(ฯ_i)(w_ijx_j-๐ผ[w_ijx_j|ฯ_i])=n^-1โ_iโ_jr_ij,
where r_ij=b^K(ฯ_i)(w_ijx_j-๐ผ[w_ijx_j|ฯ_i]).
Since ๐ผ[r_ij|ฯ_i]=0, we have ๐ผ[r_ij]=0.
Then
๐ผB'_K(wx-ฮผ_0^wx)/n^2=n^-2โ_(i,j)โ_(k,l):{i,j}โฉ{k,l}โ โ
๐ผ[r'_ijr_kl]+n^-2โ_(i,j)โ_(k,l):{i,j}โฉ{k,l}=โ
๐ผ[r'_ijr_kl].
For any i,j,k,lโ๐ฉ, we have |๐ผ[r'_ijr_kl]|โค๐ผ|b^K(ฯ_i)'b^K(ฯ_k)(w_ijx_j-๐ผ[w_ijx_j|ฯ_i])(w_klx_l-๐ผ[w_klx_l|ฯ_k])|โค O(n^-2)(๐ผ[(b^K(ฯ_i)'b^K(ฯ_k))^2])^1/2=O(n^-2โ(K)).
The second inequality follows from Assumptions <ref>(ii)
and <ref>(ii).[By Cauchy-Schwarz inequality, (a+b)^4โค8(a^4+b^4), Jensen's
inequality, and iterated expectations, we have (๐ผ[(w_ijx_j-๐ผ[w_ijx_j|ฯ_i])^2(w_klx_l-๐ผ[w_klx_l|ฯ_k])^2])^1/2โค(๐ผ[(w_ijx_j-๐ผ[w_ijx_j|ฯ_i])^4])^1/4(๐ผ[(w_klx_l-๐ผ[w_klx_l|ฯ_k])^4])^1/4โค4(๐ผ[(w_ijx_j)^4])^1/4(๐ผ[(w_klx_l)^4])^1/4โค C๐ผ[w_โ^4]^1/2=O(n^-2).] The last equality follows because by Assumptions <ref>(i)
and <ref>(i), ๐ผ[(b^K(ฯ_i)'b^K(ฯ_k))^2]=๐ผ[b^K(ฯ_i)'b^K(ฯ_k)b^K(ฯ_k)'b^K(ฯ_i)]=๐ผ[tr(b^K(ฯ_i)b^K(ฯ_i)'b^K(ฯ_k)b^K(ฯ_k)')]=tr(๐ผ[b^K(ฯ_i)b^K(ฯ_i)']๐ผ[b^K(ฯ_k)b^K(ฯ_k)'])=tr(I_K)=K.
The sum over overlapping {i,j} and {k,l} contains O(n^3)
terms. Therefore, the first term in equation (<ref>) is
n^-2ยท O(n^3)ยท O(n^-2โ(K))=O(โ(K)/n).
Moreover, for disjoint {i,j} and {k,l}, we have
๐ผ[r'_ijr_kl|ฯ] = b^K(ฯ_i)'b^K(ฯ_k)๐ผ[(w_ijx_j-๐ผ[w_ijx_j|ฯ_i])(w_klx_l-๐ผ[w_klx_l|ฯ_k])|ฯ]
= b^K(ฯ_i)'b^K(ฯ_k)(Cov(w_ij,w_kl|ฯ)x_jx_l
+(๐ผ[w_ij|ฯ]x_j-๐ผ[w_ijx_j|ฯ_i])(๐ผ[w_kl|ฯ]x_l-๐ผ[w_klx_l|ฯ_k]))
= b^K(ฯ_i)'b^K(ฯ_k)(๐ผ[w_ij|ฯ_i,ฯ_j]x_j-๐ผ[w_ijx_j|ฯ_i])(๐ผ[w_kl|ฯ_k,ฯ_l]x_l-๐ผ[w_klx_l|ฯ_k])
+b^K(ฯ_i)'b^K(ฯ_k)e_ij,kl,
where e_ij,kl=Cov(w_ij,w_kl|ฯ)x_jx_l+(๐ผ[w_ij|ฯ]-๐ผ[w_ij|ฯ_i,ฯ_j])x_j(๐ผ[w_kl|ฯ]x_l-๐ผ[w_klx_l|ฯ_k])+(๐ผ[w_kl|ฯ]-๐ผ[w_kl|ฯ_k,ฯ_l])x_l(๐ผ[w_ij|ฯ_i,ฯ_j]x_j-๐ผ[w_ijx_j|ฯ_i]).
By Assumptions <ref>(ii), <ref>(ii)(v), and Cauchy-Schwarz
inequality, we derive ๐ผ[(e_ij,kl)^2]โค o(n^-4/K).
Observe that the terms b^K(ฯ_i)(๐ผ[w_ij|ฯ_i,ฯ_j]x_j-๐ผ[w_ijx_j|ฯ_i])
and b^K(ฯ_k)(๐ผ[w_kl|ฯ_k,ฯ_l]x_l-๐ผ[w_klx_l|ฯ_k])
are independent, both with mean zero. Therefore, we have the uniform
bound |๐ผ[r'_ijr_kl]|โค(๐ผ[(b^K(ฯ_i)'b^K(ฯ_k))^2])^1/2(๐ผ[(e_ij,kl)^2])^1/2=โ(K)ยท o(n^-2/โ(K))=o(n^-2).
The sum over disjoint {i,j} and {k,l} contains O(n^4)
terms. Hence, the second term in (<ref>) can be bounded
by n^-2ยท O(n^4)ยท o(n^-2)=o(1). Combining the results
we prove ๐ผB'_K(wx-ฮผ_0^wx)/n^2=o(1)
and thus B'_K(wx-ฮผ_0^wx)/n=o_p(1).
Suppose that a=(a_1,โฆ,a_n)'
and b=(b_1,โฆ,b_n)' are nร1 vectors
in such that (i) (a_i,b_i) is independent
across i; (ii) a is a function of ฯ
with <โ;
(iii) b is either a function of ฯ
with <โ or
independent of w conditional on ฯ
with max_iโ๐ฉ|๐ผ[b_i|ฯ_i]|<โ.
Let q be a matrix that takes the form of (a) w,
(b) w'w, (c) wsw^t,
t=0,1, or (d) w'wsw^t,
t=0,1, where s=(I_n-ฮณ_1w)^-1.
Then
By Markov's inequality, it suffices to show that the second moment
of n^-1(a'qb-๐ผ[a'qb])
is o(1). The second moment is
n^-2๐ผ(a'qb-๐ผ[a'qb])^2 = n^-2Cov(โ_i=1^nโ_j=1^na_ib_jq_ij,โ_i=1^nโ_j=1^na_ib_jq_ij)
= n^-2โ_(i,j,k,l):{i,j}โฉ{k,l}โ โ
Cov(a_ib_jq_ij,a_kb_lq_kl)
+n^-2โ_(i,j,k,l):{i,j}โฉ{k,l}=โ
Cov(a_ib_jq_ij,a_kb_lq_kl).
In the last expression, the first term sums over all the indices i,
j, k, and l such that {i,j} and {k,l} have at least
one common element, and the second term sums over all the indices
i, j, k, and l such that {i,j} and {k,l} do
not overlap.
Because a is a function of ฯ
and q is a function of w, if b
is independent of w conditional on ฯ,
we can write the covariance as
Cov(a_ib_jq_ij,a_kb_lq_kl)
= ๐ผ[a_ia_k๐ผ[b_j|ฯ_j]๐ผ[b_l|ฯ_l]๐ผ[q_ijq_kl|ฯ]]-๐ผ[a_i๐ผ[b_j|ฯ_j]๐ผ[q_ij|ฯ]]๐ผ[a_k๐ผ[b_l|ฯ_l]๐ผ[q_kl|ฯ]]
= ๐ผ[a_ia_k๐ผ[b_j|ฯ_j]๐ผ[b_l|ฯ_l]q_ijq_kl]-๐ผ[a_i๐ผ[b_j|ฯ_j]q_ij]๐ผ[a_k๐ผ[b_l|ฯ_l]q_kl]
= Cov(a_i๐ผ[b_j|ฯ_j]q_ij,a_k๐ผ[b_l|ฯ_l]q_kl),
where the first equality used ๐ผ[b_jb_l|ฯ]=๐ผ[b_j|ฯ_j]๐ผ[b_l|ฯ_l]
and ๐ผ[b_j|ฯ]=๐ผ[b_j|ฯ_j]
because (b_i,ฯ_i) is i.i.d.. Let h_ij=a_ib_j
(if b is a function of ฯ)
or h_ij=a_i๐ผ[b_j|ฯ_j] (if b
is independent of w conditional on ฯ).
Conditions (ii) and (iii) then imply max_i,jโ๐ฉ|h_ij|<โ.
The first sum in (<ref>) consists of O(n^3) terms.
By Lemma <ref>(i), each covariance term can be bounded by O(n^-2)
uniformly in i, j, k, and l. Hence, the first sum is n^-2ยท O(n^3)ยท O(n^-2)=o(1).
The second sum in (<ref>) consists of O(n^4) terms.
Applying Lemma <ref>(ii) yields max_i,j,k,lโ๐ฉ:{i,j}โฉ{k,l}=โ
|Cov(a_ib_jq_ij,a_kb_lq_kl)|=o(n^-2).
Hence, the last sum in equation (<ref>) is n^-2ยท O(n^4)ยท o(n^-2)=o(1).
Let q be a matrix that takes the form
of (a) w, (b) w'w, (c)
wsw^t, t=0,1, or (d)
w'wsw^t,
t=0,1, where s=(I_n-ฮณ_1w)^-1.
For h_ij=h(ฯ_i,ฯ_j)โโ such that max_i,jโ๐ฉ|h_ij|<โ, q satisfies (i) max_i,j,k,lโ๐ฉ:{i,j}โฉ{k,l}โ โ
 |Cov(h_ijq_ij,h_klq_kl)|=O(n^-2), and (ii) max_i,j,k,lโ๐ฉ:{i,j}โฉ{k,l}=โ
 |Cov(h_ijq_ij,h_klq_kl)|=o(n^-2).
Part (i). Write Cov(h_ijq_ij,h_klq_kl)=๐ผ[h_ijh_klq_ijq_kl]-๐ผ[h_ijq_ij]๐ผ[h_klq_kl].
By the boundedness of h_ij and Cauchy-Schwarz inequality, it
is sufficient to show that ๐ผ[q_โ^2]=O(n^-2).
For case (a) with q=w, the result follows
immediately from Assumption <ref>(ii). For case (b) with q=w'w,
we can bound q_โโคw_1w_โโค nw_โ^2.
Hence, ๐ผ[q_โ^2]โค n^2๐ผ[w_โ^4]=O(n^-2)
by Assumption <ref>(ii). For case (c), because s=(I_n-ฮณ_1w)^-1=โ_r=0^โฮณ_1^rw^r,
we have q=wsw^t=โ_r=t+1^โฮณ_1^r-(t+1)w^r
and (q_ij)^2=โ_r=t+1^โโ_rฬ=t+1^โฮณ_1^r+rฬ-2(t+1)(w^r)_ij(w^rฬ)_ij,
t=0,1. For any rโฅ1, we have w^r_โโคw_โw^r-1_โโคโฏโคw_โ^r-1w_โ=w_โ.
Therefore, ๐ผ[q_โ^2]โคโ_r=t+1^โโ_rฬ=t+1^โฮณ_1^r+rฬ-2(t+1)๐ผ[w_โ^2]=O(n^-2)
by Assumption <ref>(ii). Similarly as in cases (b)(c), we can
show that the result holds for case (d).
Part (ii). For case (a) with q=w and
case (b) with q=w'w, the
statement follows immediately from Assumption <ref>(iv). For
case (c), consider i,j,k,lโ๐ฉ such that {i,j}โฉ{k,l}=โ
.
We have
Cov(h_ijq_ij,h_klq_kl)=โ_r=t+1^โโ_rฬ=t+1^โฮณ_1^r+rฬ-2(t+1)Cov(h_ij(w^r)_ij,h_kl(w^rฬ)_kl).
By Assumption <ref>(iv), each term in the sum has an uniform
bound o(n^-2) that does not depend on i, j, k, l,
r, and rฬ. The statement is thus satisfied for case (c).
Case (d) can be proved similarly.
ยง.ยง Asymptotic Distribution of ฮณฬ
We have
1/โ(n)โ_i=1^nm(ฯ_i,ฮณ_0,ฮผฬ^Z(ฯฬ_i))=1/โ(n)โ_i=1^n((Z_i-ฮผ_0^Z(ฯ_i))ฮฝ_i+M_ฮธฯ_ฮธ(z_i,ฮธ_0))+o_p(1),
where M_ฮธ=-๐ผ[(๐ผ[Z_i|z_i]-ฮผ_0^Z(ฯ_i))โฮป_0(ฯ_i)/โฯโฯ(z_i,g_i,ฮธ_0)/โฮธ].
Consider the decomposition
1/โ(n)โ_i=1^nm(ฯ_i,ฮณ_0,ฮผฬ^Z(ฯฬ_i))
= 1/โ(n)โ_i=1^nm(ฯ_i,ฮณ_0,ฮผ_0^Z(ฯ_i))+โ(n)โซ D(ฯต_i,ฮผฬ^Z(ฯฬ_i)-ฮผ^Z(ฯฬ_i))dF(z_i,g_i,ฯต_i)
+โ(n)โซ D(ฯต_i,ฮผ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))dF(z_i,g_i,ฯต_i)
+1/โ(n)โ_i=1^n(D(ฯต_i,ฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))-โซ D(ฯต_i,ฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))dF(z_i,g_i,ฯต_i)),
where D(ฯต_i,ฮผ)=-ฮผฯต_i for any ฮผโโ^d_Z,
ฮผ^Z(ฯฬ_i)=๐ผ[Z_i|ฯ(z_i,g_i,ฮธฬ)],
and F(z_i,g_i,ฯต_i) denotes the cdf of (z_i,g_i,ฯต_i).
The first term is a leading term. The second term is to adjust for
the estimation of ฮผ_0^Z, and the third term is to adjust
for the estimation of ฮธ_0 <cit.>. Both terms
contribute to the asymptotic distribution of ฮณฬ. The
last term is o_p(1) by Lemma <ref>.
The second term in equation (<ref>) can be analyzed following
<cit.>. For an arbitrary mean square integrable
function ฮผ(ฯ(z_i,g_i,ฮธ))โโ^d_Z that
is continuously differentiable in ฯ, by iterated expectations
we have ๐ผ[D(ฯต_i,ฮผ(ฯ(z_i,g_i,ฮธ))]=-๐ผ[ฮผ(ฯ(z_i,g_i,ฮธ))ฮผ^ฯต(ฯ(z_i,g_i,ฮธ))],
where ฮผ^ฯต(ฯ(z_i,g_i,ฮธ))=๐ผ[ฯต_i|ฯ(z_i,g_i,ฮธ)].
Hence, the correction term in <cit.>
takes the form ฮฑ^Z(ฯ_i,ฯ(z_i,g_i,ฮธ))=-(Z_i-ฮผ^Z(ฯ(z_i,g_i,ฮธ)))ฮผ^ฯต(ฯ(z_i,g_i,ฮธ))
and thus โ(n)โซ D(ฯต_i,ฮผฬ^Z(ฯฬ_i)-ฮผ^Z(ฯฬ_i))dF(z_i,g_i,ฯต_i)=n^-1/2โ_i=1^nฮฑ_0^Z(ฯ_i,ฯฬ_i).
Also recall that ฯฬ_i=ฯ(z_i,g_i,ฮธฬ)
and ฯ_i=ฯ(z_i,g_i,ฮธ_0). Define ฮฑ_0^Z(ฯ_i,ฯ_i)=-(Z_i-ฮผ_0^Z(ฯ_i))ฮป_0(ฯ_i).
Under Assumption <ref>(ii), expanding ฮฑ^Z(ฯ_i,ฯฬ_i)
around ฮธ_0 yields ฮฑ^Z(ฯ_i,ฯฬ_i)=ฮฑ_0^Z(ฯ_i,ฯ_i)+โฮฑ^Z(ฯ_i,ฯ_i)/โฮธ'(ฮธฬ-ฮธ_0)+o_p(ฮธฬ-ฮธ_0).
By Lemma <ref> and Assumption <ref>(ii), n^-1โ_i=1^nโฮฑ^Z(ฯ_i,ฯ_i)/โฮธ'=o_p(1)
and โ(n)(ฮธฬ-ฮธ_0)=O_p(1). We thus have โ(n)โซ D(ฯต_i,ฮผฬ^Z(ฯฬ_i)-ฮผ^Z(ฯฬ_i))dF(z_i,g_i,ฯต_i)=n^-1/2โ_i=1^nฮฑ_0^Z(ฯ_i,ฯ_i)+o_p(1).
The third term in equation (<ref>) can be analyzed following
<cit.>. Observe that โ D(ฯต_i,ฮผ_0^Z(ฯ_i))/โฮผ^Z=-ฯต_i
and ๐ผ[โ D(ฯต_i,ฮผ_0^Z(ฯ_i))/โฮผ^Z|ฯ_i=ฯ]=-ฮป_0(ฯ).
The first term in <cit.> takes the form -๐ผ[(ฯต_i-ฮป_0(ฯ_i))โฮผ_0^Z(ฯ_i)/โฯโฯ(z_i,g_i,ฮธ_0)/โฮธ]=0,
where we used ฯต_i-ฮป_0(ฯ_i)=ฮฝ_i and ๐ผ[ฮฝ_i|z_i,g_i]=0.
Therefore, by <cit.>,
โ(n)โซ D(ฯต_i,ฮผ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))dF(z_i,g_i,ฯต_i)
= -๐ผ[(๐ผ[Z_i|z_i]-ฮผ_0^Z(ฯ_i))โฮป_0(ฯ_i)/โฯโฯ(z_i,g_i,ฮธ_0)/โฮธ]โ(n)(ฮธฬ-ฮธ_0)=M_ฮธโ(n)(ฮธฬ-ฮธ_0).
Because โ(n)(ฮธฬ-ฮธ_0)=1/โ(n)โ_i=1^nฯ_ฮธ(z_i,ฮธ_0)+o_p(1),
we can represent โ(n)โซ D(ฯต_i,ฮผ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))dF(z_i,g_i,ฯต_i)=n^-1/2โ_i=1^nM_ฮธฯ_ฮธ(z_i,ฮธ_0)+o_p(1).
Note that m(ฯ_i,ฮณ_0,ฮผ_0^Z(ฯ_i))+ฮฑ^Z(ฯ_i,ฯ_i)=(Z_i-ฮผ_0^Z(ฯ_i))ฮฝ_i.
Combining the results we obtain equation (<ref>).
Let ฮฆ_n=n^-1/2โ_i=1^n((Z_i-ฮผ_0^Z(ฯ_i))ฮฝ_i+M_ฮธฯ_ฮธ(z_i,ฮธ_0)).
Then ฮฉ_n^-1/2ฮฆ_ndโN(0,I_d_Z),
where ฯ_n(x_i,z_i,ฮฝ_i)โโ^d_Z is
defined in equation (<ref>), ฮฉ_n=n^-1โ_i=1^n๐ผ[ฯ_n(x_i,z_i,ฮฝ_i)ฯ_n(x_i,z_i,ฮฝ_i)'],
and I_d_Z is the d_Zร d_Z identity matrix.
Recall that Z_i=(w_ix,x'_i,z'_i)'. While x_i
and z_i are i.i.d., w_ix are correlated across
i. Lemma <ref> establishes the Hoeffding projection
n^-1/2โ_i=1^n(w_ix)'ฮฝ_i=n^-1/2โ_i=1^nh_n^*(x_i,ฮฝ_i)+o_p(1),
where h_n^*(x_i,ฮฝ_i)=โ_j(๐ผ[w_ijx_jฮฝ_i|x_i,ฮฝ_i]+๐ผ[w_jiฮฝ_jx_i|x_i,ฮฝ_i])โโ^d_x.
Define the function ฯ_n(x_i,z_i,ฮฝ_i)โโ^d_Z
by
ฯ_n(x_i,z_i,ฮฝ_i) = n^-1/2((h_n^*(x_i,ฮฝ_i)'-๐ผ[(w_ix)'|ฯ_i]ฮฝ_i,(x'_i-๐ผ[x'_i|ฯ_i])ฮฝ_i,(z'_i-๐ผ[z'_i|ฯ_i])ฮฝ_i)'
+M_ฮธฯ_ฮธ(z_i,ฮธ_0)).
Then ฮฆ_n=โ_i=1^nฯ_n(x_i,z_i,ฮฝ_i)+o_p(1).
Because ๐ผ[h_n^*(x_i,ฮฝ_i)]=0, ๐ผ[ฮฝ_i|ฯ]=0,
and ๐ผ[ฯ_ฮธ(z_i,ฮธ_0)]=0, we obtain ๐ผ[ฯ_n(x_i,z_i,ฮฝ_i)]=0.
Write ฯ_ni=ฯ_n(x_i,z_i,ฮฝ_i). Observe that
{ฯ_ni,i=1,โฆ,n} forms a triangular array. We apply
the Lindeberg-Feller CLT to derive the asymptotic distribution of
โ_i=1^nฯ_ni. By the Cramer-Wold device it suffices
to show that a^'โ_i=1^nฯ_ni satisfies the
Lindeberg condition for any d_Zร1 vector of constants aโโ^d_Z.
The Lindeberg condition is that for any ฮบ>0, lim_nโโโ_i=1^n๐ผ[(a^'ฯ_ni)^2/a^'ฮฉ_na1{|a^'ฯ_ni|โฅฮบโ(a^'ฮฉ_na)}]=0.
The sum is bounded by ๐ผ[โ_i(a^'ฯ_ni)^2/a^'ฮฉ_na1{max_i|a^'ฯ_ni|โฅฮบโ(a^'ฮฉ_na)}],
where the random variable โ_i(a^'ฯ_ni)^2/a^'ฮฉ_na
has a finite expectation and is therefore O_p(1). Moreover, we
can derive max_i|a^'ฯ_ni|=o_p(1).[Because x_i and z_i are bounded, ๐ผ[w_โ^4]=O(n^-4),
and ๐ผ[ฮฝ_i^4]<โ, we can bound ๐ผ[max_i(a^'ฯ_ni)^2]โคa^2๐ผ[max_iฯ_ni^2]โค O(n^-1)=o(1).] Therefore โ_i(a^'ฯ_ni)^2/a^'ฮฉ_na1{max_i|a^'ฯ_ni|โฅฮบโ(a^'ฮฉ_na)}=O_p(1)o_p(1)=o_p(1).
This random variable is bounded by โ_i(a^'ฯ_ni)^2/a^'ฮฉ_na
which has a finite expectation. By dominated convergence, the Lindeberg
condition is satisfied. By Lindeberg-Feller CLT, ฮฉ_n^-1/2ฮฆ_n=ฮฉ_n^-1/2โ_i=1^nฯ_n(x_i,z_i,ฮฝ_i)+o_p(1)dโN(0,I_d_Z).
Let W_n=n^-1/2โ_iโ_jw_ijx_jฮฝ_i.
Define W_n^*=n^-1/2โ_ih_n^*(x_i,ฮฝ_i), where
h_n^*(x_i,ฮฝ_i)=โ_j(๐ผ[w_ijx_jฮฝ_i|x_i,ฮฝ_i]+๐ผ[w_jix_iฮฝ_j|x_i,ฮฝ_i]).
Then W_n-W_n^*=o_p(1).
Our proof builds on <cit.> for weighted U-statistics.
Specifically, we generalize the Hoeffding projection to allow for
random weight w_ij that is correlated with x,
in contrast to <cit.>'s assumption of constant weights.
Let I={i_1,i_2} be an ordered 2-subset of ๐ฉ
and t_i=(x_i,ฮฝ_i). Define w_I=w_i_1i_2 and h(t_I)=h(t_i_1,t_i_2)=x_i_2ฮฝ_i_1.
We can write W_n=n^-1/2โ_Iw_Ih(t_I) and h_n^*(x_i,ฮฝ_i)=h_n^*(t_i)=โ_I:iโ I๐ผ[w_Ih(t_I)|t_i].
We have [w_Ih(t_I)]=[[w_I|ฯ][h(t_I)|ฯ]]=0
and [h_n^*(t_i)]=โ_I:iโ I๐ผ[w_Ih(t_I)]=0
under Assumption <ref>(iii). By Markov's inequality, it suffices
to show ๐ผW_n-W_n^*^2=o(1).
By definition, ๐ผ[W'_nW_n^*]=n^-1/2โ_i๐ผ[W'_nh_n^*(t_i)],
and for each i, ๐ผ[W'_nh_n^*(t_i)]=n^-1/2โ_I:iโ I๐ผ[w_Ih(t_I)'h_n^*(t_i)]=n^-1/2๐ผ[h_n^*(t_i)'h_n^*(t_i)],
where the first equality holds because for iโ I, ๐ผ[w_Ih(t_I)'h_n^*(t_i)]=๐ผ[๐ผ[w_I|ฯ]๐ผ[h(t_I)'h_n^*(t_i)|ฯ]]=0
under Assumption <ref>(iii), and the second equality follows
by iterated expectations. It then follows that ๐ผ[W'_nW_n^*]=n^-1โ_i๐ผ[h_n^*(t_i)'h_n^*(t_i)]=๐ผW_n^*^2
and thus ๐ผW_n-W_n^*^2=๐ผW_n^2-๐ผW_n^*^2.
It remains to show that ๐ผW_n^2-๐ผW_n^*^2=o(1).
To show the last result, note that for disjoint I and J, we
have ๐ผ[w_Iw_Jh(t_I)'h(t_J)]=๐ผ[๐ผ[w_Iw_J|ฯ]๐ผ[h(t_I)'h(t_J)|ฯ]]=0,
where the first equality holds by Assumption <ref>(iii), and
the last equality follows from ๐ผ[h(t_I)'h(t_J)|ฯ]=0
because (ฮฝ_i,ฯ_i) is i.i.d.. Hence,
๐ผW_n^2=n^-1โ_(I,J):|Iโฉ J|=1๐ผ[w_Iw_Jh(t_I)'h(t_J)]+n^-1โ_(I,J):|Iโฉ J|=2๐ผ[w_Iw_Jh(t_I)'h(t_J)].
For comparison, because ๐ผW_n^*^2=n^-1โ_i=1^n๐ผh_n^*(t_i)^2
we can write
๐ผW_n^*^2 = n^-1โ_i=1^nโ_(I,J):{i}=Iโฉ J๐ผ[๐ผ[w_Ih(t_I)'|t_i]๐ผ[w_Jh(t_J)|t_i]]
+n^-1โ_i=1^nโ_(I,J):{i}โ Iโฉ J๐ผ[๐ผ[w_Ih(t_I)'|t_i]๐ผ[w_Jh(t_J)|t_i]].
The first sums in ๐ผW_n^2 and ๐ผW_n^*^2
consist of the same number of terms. Consider I and J such that
|Iโฉ J|=1. Because w and ฮฝ
are independent conditional on ฯ,
๐ผ[w_Iw_Jh(t_I)'h(t_J)] = ๐ผ[๐ผ[w_Iw_J|ฯ,ฮฝ]h(t_I)'h(t_J)]
= ๐ผ[๐ผ[w_Iw_J|ฯ]h(t_I)'h(t_J)]
= ๐ผ[(Cov(w_I,w_J|ฯ)+๐ผ[w_I|ฯ]๐ผ[w_J|ฯ])h(t_I)'h(t_J)]
= ๐ผ[๐ผ[w_I|ฯ_I]๐ผ[w_J|ฯ_J]h(t_I)'h(t_J)]+o(n^-2),
where the o(n^-2) term does not depend on I and J. To see
the last equality, note that h(t_I)'h(t_J) is square integrable
and ๐ผ[ฮฝ_i^4]<โ under Assumption <ref>
and <ref>(i). The last equality then follows from Assumption
<ref>(ii)(v)(vi).[Note that Assumption <ref>(v) implies max_Iโ๐ฉ๐ผ[(๐ผ[w_I|ฯ]-๐ผ[w_I|ฯ_I])^4]=o(n^-4/K^2)โค o(n^-4).]
Similarly, we can derive
๐ผ[๐ผ[w_Ih(t_I)'|t_i]๐ผ[w_Jh(t_J)|t_i]] = ๐ผ(๐ผ[๐ผ[w_I|ฯ,ฮฝ]h(t_I)'|t_i]๐ผ[๐ผ[w_J|ฯ,ฮฝ]h(t_J)|t_i]]
= ๐ผ(๐ผ[๐ผ[w_I|ฯ]h(t_I)'|t_i]๐ผ[๐ผ[w_J|ฯ]h(t_J)|t_i]]
= ๐ผ[๐ผ[๐ผ[w_I|ฯ_I]h(t_I)'|t_i]๐ผ[๐ผ[w_J|ฯ_J]h(t_J)|t_i]]+o(n^-2)
= ๐ผ[๐ผ[w_I|ฯ_I]๐ผ[w_J|ฯ_J]h(t_I)'h(t_J)]+o(n^-2),
where the last equality follows because for I and J with {i}=Iโฉ J,
๐ผ[w_I|ฯ_I]h(t_I) and ๐ผ[w_J|ฯ_J]h(t_J)
are independent conditional on t_i. The covariance in equations
(<ref>) and (<ref>) differs by o(n^-2)
uniformly in I and J. Since the first sums in ๐ผW_n^2
and ๐ผW_n^*^2 consist of O(n^3) terms, they
differ by n^-1ยท O(n^3)ยท o(n^-2)=o(1).
The second sums in ๐ผW_n^2 and ๐ผW_n^*^2
consist of O(n^2) terms. For any I and J, both ๐ผ[w_Iw_Jh(t_I)'h(t_J)]
and ๐ผ[๐ผ[w_Ih(t_I)'|t_i]๐ผ[w_Jh(t_J)|t_i]]
can be uniformly bounded by O(n^-2) (Assumption <ref>(ii)).
Therefore, the second sums in ๐ผW_n^2 and ๐ผW_n^*^2
are both n^-1ยท O(n^2)ยท O(n^-2)=o(1). We conclude
that ๐ผW_n^2-๐ผW_n^*^2=o(1).
1/โ(n)โ_i=1^n(D(ฯต_i,ฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))-โซ D(ฯต_i,ฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))dF(z_i,g_i,ฯต_i))=o_p(1).
Let ฮผ=ฮผ(ฯ(z_i,g_i,ฮธ))โโ^d_Z be a
function of ฯ(z_i,g_i,ฮธ). Define the empirical process
๐พ_n(ฮผ)=1/โ(n)โ_i(D(ฯต_i,ฮผ)-๐ผ[D(ฯต_i,ฮผ)])
indexed by ฮผ. We can represent the left-hand side of the above
equation as ๐พ_n(ฮผฬ^Z(ฯฬ))-๐พ_n(ฮผ_0^Z(ฯ)).
Observe that D(ฯต_i,ฮผ)=-ฮผฯต_i is linear in
ฮผ. This together with the boundedness of Z_i and ๐ผ[ฯต_i^2]<โ
(Assumptions <ref>, <ref>(i), and <ref>(i))
implies that the empirical process ๐พ_n(ฮผ) is stochastically
equicontinuous under L_2 norm <cit.>.
It remains to show that โซฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))^2dF(z_i,g_i)=o_p(1),
where F(z_i,g_i) denotes the cdf of (z_i,g_i). We prove
it following <cit.>.
By the triangle inequality and (a+b+c)^2โค3(a^2+b^2+c^2),
we derive
โซฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))^2dF(z_i,g_i)
โค 3โซ(ฮฒฬ^Z(ฯฬ)'(b^K(ฯฬ_i)-b^K(ฯ_i))^2+(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)'b^K(ฯ_i)^2
+ฮฒ^Z'b^K(ฯ_i)-ฮผ_0^Z(ฯ_i)^2)dF(z_i,g_i).
Consider the three terms in the last equation. The first
term satisfies
โซฮฒฬ^Z(ฯฬ)'(b^K(ฯฬ_i)-b^K(ฯ_i))^2dF(z_i,g_i)โค O_p(ฯฑ_1(K)^2)โซmax_1โค iโค nฯฬ_i-ฯ_i^2dF(z_i,g_i)=O_p(ฯฑ_1(K)^2/n),
where the inequality holds by equation (<ref>),
the mean-value theorem and Assumption <ref>(iv), and the
equality holds because the โ(n)-consistency of ฮธฬ
and boundedness of z imply that max_1โค iโค nฯฬ_i-ฯ_i=O_p(n^-1/2).
As for the second term in (<ref>), by ๐ผ[b^K(ฯ_i)b^K'(ฯ_i)]=I_K
we obtain
โซ(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)'b^K(ฯ_i)^2dF(z_i,g_i)
= tr((ฮฒฬ^Z(ฯฬ)-ฮฒ^Z)'โซ b^K(ฯ_i)b^K'(ฯ_i)dF(z_i,g_i)(ฮฒฬ^Z(ฯฬ)-ฮฒ^Z))
= ฮฒฬ^Z(ฯฬ)-ฮฒ^Z^2=O_p(ฯฑ_1(K)^2/n)+o_p(1),
where the last equality follows from ฮฒฬ^Z(ฯฬ)-ฮฒ^Z^2โค2(ฮฒฬ^Z(ฯฬ)-ฮฒฬ^Z(ฯ)^2+ฮฒฬ^Z(ฯ)-ฮฒ^Z^2),
Lemmas <ref> and <ref>, and <cit.>.
The third term in (<ref>) has the bound โซฮฒ^Z'b^K(ฯ_i)-ฮผ_0^Z(ฯ_i)^2dF(z_i,g_i)โคsup_ฯฮฒ^Z'b^K(ฯ)-ฮผ_0^Z(ฯ)=O(K^-2a)
by Assumption <ref>(ii). Combining the results yields โซฮผฬ^Z(ฯฬ_i)-ฮผ_0^Z(ฯ_i))^2dF(z_i,g_i)=o_p(1)
and ๐พ_n(ฮผฬ^Z(ฯฬ))-๐พ_n(ฮผ_0^Z(ฯ))=o_p(1).
1/nโ_i=1^nโฮฑ^Z(ฯ_i,ฯ_i)/โฮธ'=o_p(1).
Recall that ฮฑ^Z(ฯ_i,ฯ(z_i,g_i,ฮธ))=-(Z_i-ฮผ^Z(ฯ(z_i,g_i,ฮธ)))ฮผ^ฯต(ฯ(z_i,g_i,ฮธ)),
where ฮผ^Z(ฯ(z_i,g_i,ฮธ))=๐ผ[Z_i|ฯ(z_i,g_i,ฮธ)]
and ฮผ^ฯต(ฯ(z_i,g_i,ฮธ))=๐ผ[ฯต_i|ฯ(z_i,g_i,ฮธ)].
By the law of iterated expectations we have ๐ผ[ฮฑ^Z(ฯ_i,ฯ(z_i,g_i,ฮธ))]=0,
so ๐ผ[โฮฑ^Z(ฯ_i,ฯ_i)/โฮธ']=โ๐ผ[ฮฑ^Z(ฯ_i,ฯ(z_i,g_i,ฮธ))]/โฮธ'=0.
Differentiating ฮฑ^Z(ฯ_i,ฯ(z_i,g_i,ฮธ))
with respect to ฮธ at ฮธ_0 yields
โฮฑ^Z(ฯ_i,ฯ_i)/โฮธ'=(โฮผ^Z(ฯ_i)/โฯ_iฮผ^ฯต(ฯ_i)-(Z_i-ฮผ_0^Z(ฯ_i))โฮผ^ฯต(ฯ_i)/โฯ_i)โฯ(z_i,g_i,ฮธ_0)/โฮธ'.
Because ฯ_i=ฯ(z_i,g_i,ฮธ_0) is bounded and ฮผ^Z(ฯ_i)
and ฮผ^ฯต(ฯ_i) are continuously differentiable in
ฯ_i (Assumptions <ref>(i), <ref>(i),
and <ref>(ii)), ฮผ^Z(ฯ_i), ฮผ^ฯต(ฯ_i),
โฮผ^Z(ฯ_i)/โฯ_i, and โฮผ^ฯต(ฯ_i)/โฯ_i
are bounded. Observe that (z_i,ฯ_i) is i.i.d.. By the law
of large numbers, we have
1/nโ_i=1^n(โฮผ^Z(ฯ_i)/โฯ_iฮผ^ฯต(ฯ_i)โฯ(z_i,g_i,ฮธ_0)/โฮธ'-๐ผ[โฮผ^Z(ฯ_i)/โฯ_iฮผ^ฯต(ฯ_i)โฯ(z_i,g_i,ฮธ_0)/โฮธ'])=o_p(1).
Moreover, following Lemma <ref> we can show that
1/nโ_i=1^n((Z_i-ฮผ_0^Z(ฯ_i))โฮผ^ฯต(ฯ_i)/โฯ_iโฯ(z_i,g_i,ฮธ_0)/โฮธ'-๐ผ[(Z_i-ฮผ_0^Z(ฯ_i))โฮผ^ฯต(ฯ_i)/โฯ_iโฯ(z_i,g_i,ฮธ_0)/โฮธ'])=o_p(1).
Combining the above two equations proves the lemma.
Critical Multi-Cubic Lattices: A Novel Implication Algebra for Infinite Systems of Qudit Gates
Morrison Turnansky
July 31, 2023
===============================================================================================
We introduce a new structure, the critical multi-cubic lattice. Notably the critical multi-cubic lattice is the first true generalization of the cubic lattice to higher dimensional spaces. We then introduce the notion of a homomorphism in the category of critical multi-cubic lattices, compute its automorphism group, and construct a Hilbert space over which we represent the group. With this unitary representation, we re-derive the generalized Pauli matrices common in quantum computation while also defining an algebraic framework for an infinite system of qudits. We also briefly explore the critical multi-cubic lattice as a novel implication algebra serving as a logical framework for qudit gates.
Keywords and Phrases: Hilbert Lattice, Infinite Tensor Product, Symmetry Group Representation, Propositional Logic
MSC Subject Classifications: 06B15, 47A80, 46L40, 46L60, 03G12
ยง ACKNOWLEDGEMENTS
I would like to thank Professor B. Hayes for the many discussions about this paper and the thesis on which it is based. Also, I am grateful to Dr. J. S. Oliveira for introducing me to the topic of multi-cubic lattices.
ยง INTRODUCTION
The multi-cubic lattice <cit.> has long been seen as a successor to the cubic lattice <cit.> because it maintains many of the qualities of the cubic lattice. However, the multi-cubic lattice is not truly a generalization, as there is no choice of parameters for which a multi-cubic lattice is a cubic lattice. In section 2), we introduce a proper generalization of the cubic lattice, namely the critical multi-cubic lattice. Our intuitive inspiration for the construction is as follows. Just as the cubic lattice can be viewed as the lattice of the faces of an n-dimensional cube with an operator representing antipodal symmetry, the critical multi-cubic lattice can be viewed as the lattice of the faces of an n-dimensional square grid with operators preserving translational symmetry. As a result of this formulation, we are able to generalize the results of <cit.>. The cubic lattice and its automorphism group were seen to be similar to the spin-1/2 system of an arbitrary cardinality of qubits, so it is reasonable to consider higher-order spin systems as we generalize the results of the cubic lattice.
Section 3) begins by defining the morphisms in the category of critical multi-cubic lattices. In the sense that the critical multi-cubic lattice is the generalization of the antipodal symmetry of a cube, an action of Aut(_3) ≅ _2, we have that Aut(_2k+1) enforces this new symmetry.
[Theorem <ref>]
Let M be an |I|-critical multi-cubic lattice over โค_2k+1, and โ denote an unrestricted wreath product. We have that Per_Aut(_2k+1)(C_M) ≅ C_S_2k(Aut(_2k+1)) โ S_I. Let Aut(_2k+1) be generated by {ฯ_i}_i=1^k; then C_S_2k(Aut(_2k+1)) = โฉ_i=1^k C_S_2k(ฯ_i), and C_S_2k(ฯ_i) ≅ ฮ _j=1^l_i (_j_iโ S_N_j_i).
We highlight that this result reduces to the case of the cubic lattice when 2k+1 = 3, where the group reduces to an infinite version of the Coxeter group B_n ≅ _2 โ S_n. As C_S_2k(Aut(_2k+1)) is cyclic if and only if 2k+1 is prime, we are able to simplify our results significantly in the prime case.
[Corollary <ref>]
If 2k+1 is prime, _2kโ S_I ≅ Aut(M) ≅ Per_Aut(_2k+1)(C_M).
In <cit.>, we embedded a cubic lattice into a specifically constructed Hilbert space, namely a potentially infinite tensor product of ^2. Section 4) answers the question: are there other similar lattice embeddings into a Hilbert space?
[Theorem <ref>]
Let H be a Hilbert space constructed as an infinite tensor product of vector spaces of dimension 2k, k โ, over an index set I. For the Hilbert lattice HL, there exists a critical multi-cubic lattice M such that M โ HL, and the atoms of M are projections onto subspaces forming an orthonormal basis of H.
It is worth noting that to our knowledge, no attempt of any analytic structure has been attempted on the multi-cubic lattice, much less something as strong as an embedding into a Hilbert space. As is standard in quantum logic, we now have projections to serve as propositions. The remainder of Section 4) investigates this.
[Remark <ref>]
A critical multi-cubic lattice M, together with its join, meet, ฮ, and implication operations, is an implication algebra, with a weak form of complementation and propositions faithfully embedded as projections onto a Hilbert space.
In section 5), we consider unitary actions on the critical multi-cubic lattice. With proper representation, the automorphisms of the critical multi-cubic lattice can be seen as set of well known quantum gates acting on an infinite set of qudits, which we demonstrate with this example.
[Example <ref>]
As noted if M is an |1| critical multi-cubic lattice over _3, then U_2k is the Hadamard matrix up to a normalization constant. If M is an |1| critical multi-cubic lattice over _5, then
U_2k = 1/2[ 1 1 1 1; 1 i -1 -i; 1 -1 1 -1; 1 -i -1 i; ],
X_2k = [ 0 1 0 0; 0 0 1 0; 0 0 0 1; 1 0 0 0; ], and
D_2k = [ 1 0 0 0; 0 i 0 0; 0 0 -1 0; 0 0 0 -i; ]
.
In general up to a normalization constant, [U_2k]_ij = ฯ_j^i-1 where ฯ_j is jth element of the 2k roots of unity with counterclockwise ordering starting at 1. We recognize U_2k as the quantum Fourier transform, X_2k as the shift matrix, and D_2k as the clock matrix.
Originally introduced by <cit.>, it is well known that these matrices form the framework of quantum mechanical dynamics in finite dimensions. Furthermore, X_2k and D_2k are known as Generalized Pauli matrices, which are a representation of the respective Heisenberg-Weyl group <cit.>.
As further evidence of the utility of this generalization, we are able to generalize that the Pauli matrices form an orthonormal basis of M_2() to infinite systems of qudits.
[Theorem <ref>]
Let M be an |I| critical multi-cubic lattice over _2k+1 where 2k+1 is prime, C be the coatoms of M, and H be constructed as in Theorem <ref>.
Then B(H) = W^โ({U_Hp_cU^โ_H}_c โ C, {p_c}_c โ C) = W^โ({ฯ_i(C_S_2k(Aut(_2k+1)))}_i โ I, {p_c}_c โ C) if and only if 2k+1 is prime.
ยง DEFINING THE CRITICAL MULTI-CUBIC LATTICE AND FOUNDATIONAL RESULTS
For the sake of completeness, we first introduce the multi-cubic lattice as it exists in the literature. We then proceed to define the critical multi-cubic lattice as a quotient of the multi-cubic lattice in Definition <ref>. Lastly, we conclude with some general properties such as atomisticity, Proposition <ref>, and coatomisticity, Proposition <ref>, that will be used in later sections and validate the utility of our newly introduced structure.
Let ฮฉ = {1,2,โฆ, n}. For each i โฮฉ, let M_i = {-n_i, โฆ, 0, โฆ, n_i} be a finite -module of odd size. We define ๐ฑ = ฮ _i=1^n M_i. Elements of ๐ฑ will be denoted by vโ with the i^th component (vโ)_i. Note also that M_i being of odd size is equivalent to the property 2m = 0 implies m = 0. <cit.>
We have now described the set ๐ฑ over which our lattice will be defined. We begin with defining the poset on ๐ฑ.
For A โฮฉ define X_A = โจ e_i | i โ A โฉ, the submodule generated by the standard basis indexed by the set A. We now have a poset P = ({aโ + X_A | aโโ๐ฑ, A โฮฉ}, โ), henceforth P will be known as a multi-cube. <cit.>
To be clear, when we say the ordering is determined by โ, we mean as the poset of the submodule, aโ + X_A โคbโ + X_B if and only if A โ B and bโ - aโโ X_B.
We now make P into a join lattice and meet semi lattice. We define the partial meet as follows:
Let P be a multi-cube and define the partial meet (aโ + X_A) ∧ (bโ + X_B) = cโ + X_A โฉ B if there exists c โ (aโ + X_A )โฉ (bโ + X_B), and undefined otherwise. Now let us define ฮด_bโaโ = {i โฮฉ | b_i ≠ a_i}; this is analogous to a support function, as ฮด_aโ0โ = {i โฮฉ | a_i ≠ 0} is precisely the set of non-zero coordinates of aโ. We can now turn P into a join lattice by defining join as (aโ + X_A) ∨ (bโ + X_B) = aโ + X_A โช B โชฮด_aโbโ. For a given indexing set, I, and odd k โ, we say that M is an I-multi-cubic lattice over โค_k when M is the lattice constructed from the multi-cube P for ๐ฑ = ฮ _iโ I_k. <cit.>
Hence M when considered as a lattice is only complete under the join operation but not meet. In order to move to a critical mulit-cubic lattice, we first need to define a notion of a critical element.
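For concreteness, the following short Python sketch (an illustration added here, not code from the paper) encodes an element aโ + X_A of the multi-cube as a tuple of residues together with the set of free coordinates, and implements the join and partial meet above; the modulus m = 5, the dimension n = 2, the canonical zero representative on free coordinates, and the helper names are all choices made for this example.

```python
# A minimal sketch of the multi-cube poset over Z_m^n: an element a + X_A is
# stored as (a, A), with a a tuple of residues mod m (0 on free coordinates
# as a canonical representative) and A the frozenset of free coordinates.

m, n = 5, 2

def join(x, y):
    """(a + X_A) v (b + X_B) = a + X_{A u B u delta(a, b)}."""
    (a, A), (b, B) = x, y
    delta = {i for i in range(n) if a[i] != b[i]}
    C = frozenset(A) | frozenset(B) | delta
    return (tuple(0 if i in C else a[i] for i in range(n)), C)

def meet(x, y):
    """Partial meet: defined only when the two cosets intersect."""
    (a, A), (b, B) = x, y
    if any(a[i] != b[i] for i in range(n) if i not in A and i not in B):
        return None                      # the cosets are disjoint
    C = frozenset(A) & frozenset(B)
    c = [b[i] if i in A else a[i] for i in range(n)]
    return (tuple(0 if i in C else c[i] for i in range(n)), C)

x = ((1, 2), frozenset())                # two points differing in coordinate 0
y = ((3, 2), frozenset())
print(join(x, y))                        # ((0, 2), frozenset({0})): a "line"
print(meet(x, y))                        # None: distinct points have no meet
```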
Let M be a multi-cubic lattice. A vector xโโaโ + X_A is critical if x_i ≠ 0 implies that e_i โ X_A. Let the critical elements of M be denoted by ฮ(M).
There exists a well-defined function ฮ: M → ฮ(M) given by ฮ(xโ) = inf{m โฮ(M): xโโค m } <cit.>.
In order to have the multi-cubic lattice be a true generalization of the cubic lattice, we will need to make some slight modifications.
We generalize the notion of a signed set from a three-valued set {-1, X, 1} to a (2n+1)-valued set {-n, -(n-1), โฆ, -1, X, 1, 2, โฆ, n-1, n}. With this mild change of definition, the 0 of the original multi-cubic lattice is now referred to as X. The importance of this will become clear as we move from a Cartesian product of modules to a tensor product of modules. The algebraic structure in terms of scalar multiplication, addition, and multiplication will of course be different, but, more importantly, it is consistent with an embedding into a von Neumann algebra of similar structure to <cit.>.
With the above discussion, we remove the ambiguity of non-critical elements.
We define a critical multi-cubic lattice, M, as a multi-cubic lattice, M, modulo the equivalence relation where for any m,n โM, we have m โผ n if and only if ฮ(m) - ฮ(n) โ X_ฯ(m) and X_ฯ(m) = X_ฯ(n). From this point forward, we also allow a critical multi-cubic lattice to be a potentially infinite direct product of algebras. If M is a critical multi-cubic lattice indexed by a set I over a ring _2k+1, we say that M is an |I| critical multi-cubic lattice over _2k+1.
For a critical multi-cubic lattice, M, if we let n = ฮ(m) + X_ฯ(m), we see that with this new notion of equality, all elements are critical. In addition, we have the property that for m,n โ M with m ≠ n, ฮ(m) - ฮ(n) ≠ 0.
One can view a critical multi-cubic lattice as a mixture between a direct product of finite algebras, and the lattice of submodules defined by the closure of {{ e_i}_i โ I, 0} under meets and joins. By reducing to the equivalence class, we lose addition of the product ring, preserve to some degree both ring multiplication and scalar multiplication, and gain a structure of non-zero valued outcomes with an additional indeterminate value seen in the cubic lattice.
Before going forward we show that our preserved operations are well defined. The essential concept is that two elements in a multi-cubic lattice map to the same equivalence class in a critical multi-cubic lattice if their value of the indices which are neither 0 nor X are equal.
Let M be a multi-cubic lattice over _2k+1. Then the operations of scalar multiplication by units in the base ring and ring multiplication by units are independent of equivalence class representative in the corresponding critical multi-cubic lattice M.
Let P be the natural projection map from the multi-cubic lattice onto the critical multi-cubic lattice M, and let m,n be elements of the multi-cubic lattice such that P(m) = P(n). Since P(m) = P(n), m_i = n_i for all m_i, n_i ≠ 0. For all m_i = 0, P_i(m_i) = X, and similarly for all n_j = 0, P_j(n_j) = X. As we assumed that all non-zero entries of m, n are identical, we now have that P(m) and P(n) have identical nonzero entries and all other values equal to X.
For nonzero divisor scalar multiplication, c โ Aut(_2k+1). We have c ยท m = c ยทฮ _i โ I - ฯ(m) m_i + X_ฯ(m) = ฮ _i โ I - ฯ(m) cm_i + X_ฯ(m), m_i โ_2k+1, and c ยท n = c ยทฮ _i โ I - ฯ(n) n_i + X_ฯ(n) = ฮ _i โ I - ฯ(n) cn_i + X_ฯ(n), n_i โ_2k+1. Then cm_i = cn_i for all m_i, n_i 0, so c P(m) = P(cm) = P(cn) = c P(n) as nonzero divisor scalar multiplication does not change which indices had 0 values.
Similarly we show that multiplication by units is well defined. Let ฮผโM, and suppose that all nonzero indices of m and ฮผ are units in _2k+1. As m_i = n_i for all nonzero entries, we have m_iฮผ_i + X_ฯ(m) โชฯ(ฮผ) = n_iฮผ_i + X_ฯ(n) โชฯ(ฮผ). Then P(mฮผ) = P(nฮผ) as they agree on all nonzero entries.
The atoms and coatoms will play a significant role in our results, so we offer some intuition.
Let M be a 2 critical multi-cubic lattice over โค_5. The atoms of M are of the form {a, b} where a,b โ{-2,-1,1,2}. This generalizes the two-dimensional cubic lattice case, where atoms are of the form (A^-,A^+) with |A^- โช A^+| = |S| = 2. We assign each index in S to either the set A^- or the set A^+. We can equivalently assign each index in S to either the value -1 or the value 1, and see that this is the reduction to the 2 critical multi-cubic lattice over _3. In general, an |I| critical multi-cubic lattice has (2k)^|I| atoms of the form ฮ _i โ I a_i, a_i โ_2k+1 - {0}.
Let M be a 2 critical multi-cubic lattice over โค_5. The coatoms of M are of the form {a, b} where exactly one of a, b lies in {-2,-1,1,2} and the other is assigned X. This generalizes the two-dimensional cubic lattice case, where coatoms are of the form (A^-,A^+) with |A^- โช A^+| = 1. We assign exactly one index in S to either the set A^- or the set A^+. We can equivalently assign exactly one index in S to either the value -1 or the value 1, and see that this is the reduction to the 2 critical multi-cubic lattice over _3. In general, an |I| critical multi-cubic lattice has 2k|I| coatoms of the form ฮ _iโ I V, where V = X for all but exactly one i โ I and V โ_2k+1- {0} otherwise. One may find it helpful to think of X as the submodule _2k+1 on each index.
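The counts in these two examples can be reproduced with a small enumeration; the following Python snippet (added for illustration, with 'X' standing for the indeterminate value) is one way to do so.

```python
# A small enumeration of the atoms and coatoms of the |I| = 2 critical
# multi-cubic lattice over Z_5: elements are tuples whose entries are either
# a nonzero value of Z_5 (written -2,...,2) or the indeterminate 'X'.
from itertools import product

I, nonzero = range(2), [-2, -1, 1, 2]                  # |I| = 2, 2k = 4

atoms   = list(product(nonzero, repeat=len(I)))        # no X entries
coatoms = [tuple(v if j == i else 'X' for j in I)      # exactly one non-X entry
           for i in I for v in nonzero]

print(len(atoms))    # 16 = (2k)^|I|
print(len(coatoms))  # 8  = 2k * |I|
```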
The fact that the atoms and coatoms in the 2 critical multi-cubic lattice of _3 are the same as the 2 dimensional cubic lattice is not a coincidence. We show that the case of the cubic lattice is just a specific case of the critical multi-cubic lattice with an automorphism.
Let M be a |S|-critical multi-cubic lattice over โค_3. Then M is a cubic lattice.
We need only show that M ≅ L(S) as lattices. For each m โ M, we have that m = ฮ _i โ I - ฯ(m) a_i + X_ฯ(m), a_i โ_3^* = {-1, 1}, is mapped to an element a = (A^+, A^-) โ L(S) defined by i โ A^+ if a_i = 1 and i โ A^- if a_i = -1, so that A^+ โฉ A^- = ∅. We denote this mapping by f: M → L(S), and one can see that f is a bijection.
It remains to show that we have an order homomorphism. If m โค n, f(m) = (A^+ , A^-), and f(n) = (B^+ ,B^-), then m - n โ X_ฯ(n), which implies that B^+ โ A^+ and B^- โ A^-. Therefore, f(n) = (B^+ , B^-) โ (A^+ , A^-) = f(m). As order homomorphisms induce lattice homomorphisms, we have that f is also a lattice homomorphism.
We have shown many similarities between cubic lattices and multi-cubic lattices, we now highlight that they are in fact a weaker object as general critical multi-cubic lattices do not meet the axiomatic definition of cubic lattices.
Let M be a critical multi-cubic lattice over _2k+1, k โฅ 2. Then M does not meet axiom 2 of Definition <cit.>.
Let ฯ be any order automorphism of M. Then for any coatoms a,b โ M, a ∧ b = 0 implies a, b โ{a_je_i: a_j โ_2k+1} for some fixed i โ I. Since ฯ is an order automorphism, ฯ(b) is a coatom as well, and a ∨ ฯ(b) < 1 implies ฯ(b) = a because the join of any two distinct coatoms is equal to 1.
Then for a fixed a, ฯ(b) = a for all b โ_2k+1 - {a, 0}, so ฯ is not injective.
In summation, the failure of a general multi-cubic lattice comes down to the fact that |_2k+1 - {a, 0}| > 1 for k โฅ 2. However, other axiomatic properties of the cubic lattice hold.
Let M be a critical multi-cubic lattice. Then M is atomistic.
Let M be an |I| critical multi-cubic lattice over _2k+1, and ฮ _i โ I - ฯ(m) a_i + X_ฯ(m) = m โ M, a_i โ_2k+1 - {0}. Then m = ∨_j โฯ(m) m_j, where m_j_i = m_i for all i โ I -ฯ(m), m_j_j = -1, and m_j_i = 1 for all i โฯ(m) - {j}. We have that each m_j is an atom of M, so the result follows.
Let M be a critical multi-cubic lattice. Then M is coatomistic.
As M is a critical multi-cubic lattice, for all m โ M, m_i โ_2k+1 - {0} or m_i = _2k+1. Therefore, for any m โ M, we have that m = ฮ _i โ I- ฯ(m) a_i + X_ฯ(m) where a_i โ_2k+1 - {0}, so m = ∧_i โ I c_i(a) where c_i(a) denotes the coatom in the ith index equal to a.
ยง AUTOMORPHISMS OF THE CRITICAL MULTI-CUBIC LATTICE
As is often the case with the introduction of a new structure, one often asks what are the important properties that should be preserved by homomorphisms? This leads to another natural question: given a characterization of homomorphisms, can we classify the respective automorphism group up to isomorphism? These two basic questions will be the focus of this section, and they will allow us to study their unitary representations in later sections.
Informally, a critical multi-cubic homomorphism should preserve any of its well defined operations of which there are four, namely , , scalar multiplication by units of the module, and multiplication of units in the ring. Since order homomorphisms are lattice homomorphisms, and our lattice is coatomistic, it is necessary that a critical multi-cubic lattice homomorphism maps coatoms to coatoms. In addition, any homomorphism of a critical multi-cubic lattice should also preserve scalar and ring multiplication. We formalize the above in the following definition.
Let kโ, and let I_M, I_N be indexing sets, M be an |I_M| critical multi-cubic lattice over _2k+1 with coatoms C_M, and N be an |I_N| critical multi-cubic lattice over _2k + 1 with coatoms C_N. A map ฯ: M → N is a critical multi-cubic homomorphism if ฯ is a scalar multiplication homomorphism over the units of the base ring such that for all c โ C_M either ฯ(c) โ C_N or ฯ(c) = 0. In particular, if N = M and ฯ is a module automorphism, then ฯโ Per(C_M).
Sufficiency of the conditions follows as we are guaranteed to keep the linear structure, and as a function of coatoms between coatomistic lattices, we have an order homomorphism, which is necessarily a lattice homomorphism as well. In fact, ring multiplication is preserved by any permutation of indices, so we do not include in the definition. We will show that this notion is a generalization of a cubic homomorphism.
Let k โ and M be a critical multi-cubic lattice over _2k+1. We denote Aut(M) as the automorphism group of M.
We show by example that the homomorphism conditions are not equivalent.
Let M be a 1 critical multi-cubic lattice over _3 i.e. a cubic lattice. Now we demonstrate that there are module automorphisms that are not order automorphisms.
A = [ 1 1; 0 1; ].
A[0,1]^t = [1, 1]^t ∉ C_M, so A is a linear transformation that is not even a permutation of coatoms and therefore not an order homomorphism.
We will show that there are order automorphisms that are not module automorphisms. Let N be a 2 critical multi-cubic lattice over _3. The atoms of N are the standard basis. Let P_ij denote the projection onto the sum of standard basis vectors e_i, e_j, so that C_N = {P_12, P_13, P_13, P_34}, and with this ordering let ฯ = (1234) โ S_4 ≅ Per(C_N). We claim that ฯ is not representable as an invertible linear transformation. Recall that permutations act on the lattice of projections by inner automorphism. Then ฯ(P_12)ฯ^-1 = P_13, and ฯ(P_34)ฯ^-1 = P_12, but ฯ(P_12 + P_34) ฯ^-1 = ฯ I ฯ^-1 = I ≠ P_13 + P_12 = ฯ P_12ฯ^-1 + ฯ P_34ฯ^-1.
With this view in mind we have a new and equivalent notion of the poset of a critical multi-cubic lattice.
Let ฮ _i be the coordinate-wise projection onto the ith index and ฮ _J be the projection onto the J โ I coordinates. Implicitly, if m = ฮ(m) + X_ฯ(m), we define ฮ _J as ฮ _J^m, J^m โ I - ฯ(m) where J^m = J - ฯ(m).
Let M be a critical multi-cubic lattice and m, n โ M such that m โค n. Then there exists ฮ _J, J โ I - ฯ(n) such that ฮ _J(m) = n.
Recall that m โค n, then m - n โ X_ฯ(n), so let J = ฯ(n) - ฯ(m).
Because we fix a generating set of the algebra, we can view critical multi-cubic homomorphisms as an automorphism composed with a projection as defined above. With the importance of the critical multi-cubic automorphisms now highlighted, we proceed to classify them as a generalization of <cit.>.
Let Per_Aut(_2k+1)(C_M) be the permutations of coatoms of M that commute with the action of Aut(_2k+1) defined by c โฆ ac for c โ C_M and a โ Aut(_2k+1).
Let M be an |I| critical multi-cubic lattice over โค_2k+1. Then Aut(M) ≅ Per_Aut(_2k+1)(C_M).
By definition, we have an injective group homomorphism i : Aut(M) → Per(C_M), where i is the inclusion map. We want to show that the range of i is contained in Per_Aut(_2k+1)(C_M). Any ฯโ Aut(M) is a module homomorphism of _2k+1^I, so we have that for any a โ Aut(_2k+1), a induces an automorphism on โค_2k+1 by multiplication. Now, by the property of module homomorphisms over commutative rings, for any m โ M, ฯ(a) โฯ(m) = a ฯ(m) = ฯ(am) = ฯ(ma) = ฯ(m) โฯ(a). Thus, Aut(M) โค Per_Aut(_2k+1)(C_M).
For the reverse direction, we show that any ฯโ Per_Aut(_2k+1)(C_M) defines a critical multi-cubic automorphism. Firstly, it is a permutation of coatoms of M by definition, and thus bijective. Secondly, for the linearity condition, let ฯ_a be multiplication by a โ Aut(_2k+1), and m โ M. Then aฯ(m) = ฯ_a โฯ(m) = ฯโฯ_a(m)= ฯ(am).
Unlike the case of the cubic lattice, the permutations that commute with Aut(_2k+1) are in general a larger group than just the automorphism group, so we must consider some additional factors. As we will show, it is for this reason that the automorphism groups of the cubic lattice and critical multi-cubic lattice become more similar when 2k+1 is prime.
As Aut(_2k+1) fixes the identity, by relabeling, we can consider the action on {a:1 โค a โค 2k} by left multiplication. With this action, let C_S_2k(Aut(_2k+1)) denote the centralizer of Aut(_2k+1) in S_2k.
Let M be an |I| critical multi-cubic lattice over โค_2k+1, and โ denote an unrestricted wreath product. Then Per_Aut(_2k+1)(C_M) ≅ C_S_2k(Aut(_2k+1)) โ S_I. Let Aut(_2k+1) be generated by {ฯ_i}_i=1^k; then C_S_2k(Aut(_2k+1)) = โฉ_i=1^k C_S_2k(ฯ_i), and C_S_2k(ฯ_i) ≅ ฮ _j=1^l_i (_j_iโ S_N_j_i).
Any ฯโ Per_Aut(_2k+1)(C_M) is determined by the image of its coatoms. Note that for any two distinct coatoms their meet is empty if and only if they are of the form ae_i and be_i for distinct a,b โ_2k+1 - {0} and some i โ S. By Lemma <ref>, ฯ defines a critical multi-cubic automorphism, so we have that for any i โ S and distinct a, b โ_2k+1- {0}, ∅ = ฯ(ae_i ∧ be_i) = ฯ(ae_i) ∧ ฯ(be_i). Hence, for all ฮฑโ_2k+1 -{0}, ฯ(ฮฑ e_i) = ฮฒ_ฮฑ e_j for some ฮฒ_ฮฑโ_2k+1- {0} and a fixed j โ S.
We now need only consider ฯ for each individual index. Aut(_2k+1) acts on the coatoms of fixed index, C_i = {a โ_2k+1- {0}} by left multiplication, so by relabeling, we consider the action on {a:1 โค a โค 2k}. Let Per_Aut(_2k+1)(C_i) denote the permutations of C_i commuting with the action of Aut(_2k+1). We observe that Per_Aut(_2k+1)(C_i) is isomorphic to the centralizer of Aut(_2k+1) in S_2k. Thus, ฯโ C_S_2k(Aut(_2k+1)) โ S_I.
For a given ฯโ S_m, with a standard cycle decomposition consisting of N_j cycles of length j, we use that C_S_m(ฯ) ≅ ฮ _j=1^l (_jโ S_N_j). As the centralizer of a subgroup is the intersection of the centralizers of its generators, we conclude our result.
It is known that the hyperoctahedral group can be represented as the signed permutation group. We have used this terminology to inspire our generalization.
We define the Aut(_2k+1)-value permutation group of degree I as a generalization of the elements of the permutation group in M_I(_2k+1) where each nonzero element can take any value of Aut(_2k+1). Note that M_I(ยท) denotes the (ยท) valued matrices indexed by I.
Let M be an |I| critical multi-cubic lattice over _2k+1 for k โ. Then there exists a group representation ฯ: Aut(_2k+1) โ S_I → M_I(_2k+1) as the Aut(_2k+1)-value permutation group of degree I. Furthermore, when considered within the base ring, Aut(_2k+1) โ Z(ฯ( Aut(_2k+1) โ S_I)).
We have that the standard basis vectors {e_i}_i โ I form a generating set of _2k+1^I as a module. Now Aut(_2k+1) โ S_I can be represented as the Aut(_2k+1)-value permutation group by considering the action e_i โฆ a_ie_ฯ(i) for a given (ร_i โ I a_i, ฯ) โ Aut(_2k+1) โ S_I, which we define as ฯ. As a commutative base ring, _2k+1^โโ Z(M_I(_2k+1)), which are in the Aut(_2k+1)-value permutation group by definition and in Z(Aut(_2k+1) โ S_I) because ฯ is a multiplicative group representation.
Let M be an |I| critical multi-cubic lattice over โค_2k+1, and ฯ be defined by Proposition <ref>. Then ฯ(Aut(_2k+1) โ S_I) โค Aut(M).
We claim that ฯ(Aut(_2k+1) โ S_I) โค Aut(M). We have a bijection f: {ae_i:a โ_2k+1 - {0}, i โ I} → C, where e_i denotes the standard basis vector, C is the set of coatoms of the form specified in Example <ref>, and f is defined by ae_i โฆฮ _j โ I V_j, V_i = a, and V_j = X for all j ≠ i. Then we have that ฯ(Aut(_2k+1) โ S_I) defines a permutation of the coatoms by its action on {ae_i:a โ_2k+1 - {0}, i โ I}. We have already shown in Proposition <ref> that ฯ(Aut(_2k+1) โ S_I) commutes with the action of scalar multiplication by units on {ae_i:a โ_2k+1 - {0}, i โ I}, so we conclude our result.
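As a rough numerical sanity check of this picture (my own sketch, not part of the original text), one can verify that unit-valued monomial matrices over _5 are closed under products and send each coatom ae_i to another coatom, in line with the lemma above; the helper name monomial, the degree d = 3, and the random seed are assumptions of the example.

```python
import numpy as np

m, d, units = 5, 3, [1, 2, 3, 4]
rng = np.random.default_rng(0)

def monomial():
    """A random element: e_i -> a_i * e_{sigma(i)} with a_i a unit mod 5."""
    sigma = rng.permutation(d)
    M = np.zeros((d, d), dtype=int)
    for i in range(d):
        M[sigma[i], i] = rng.choice(units)
    return M

A, B = monomial(), monomial()
P = (A @ B) % m
# closure: the product is again a unit-valued permutation matrix
assert all(np.count_nonzero(P[:, j]) == 1 for j in range(d))
assert all(int(x) in units for x in P[P != 0])
# the action permutes the coatoms a*e_i: each image is again a unit multiple
# of a standard basis vector
e = np.eye(d, dtype=int)
for a in units:
    for i in range(d):
        v = (A @ (a * e[:, i])) % m
        assert np.count_nonzero(v) == 1 and int(v[v != 0][0]) in units
```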
Let M be an |I| critical multi-cubic lattice over โค_2k+1. Then Aut(_2k+1) โ S_I โค Aut(M) ≅ Per_Aut(_2k+1)(C_M).
As a result of Lemma <ref> and Lemma <ref>, we have Aut(_2k+1) โ S_I is isomorphic to a subgroup of Per_Aut(_2k+1)(C_M).
We have now rederived the group theoretic results of <cit.> and shown it in much more generality. We highlight the cyclic case below as the operator algebraic structure will be most similar to the results of Section 5, and identical if k = 1.
If 2k+1 is prime, _2kโ S_I ≅ Aut(M) ≅ Per_Aut(_2k+1)(C_M).
By Theorem <ref>, Per_Aut(_2k+1)(C_M) ≅ C_S_2k(Aut(_2k+1)) โ S_I. If 2k+1 is an odd prime, then Aut(_2k+1) ≅ _2k, and C_S_2k(_2k) ≅ (e ร e) ร (_2kร e) ≅ _2k, where e denotes the group consisting of only the identity.
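For the smallest prime case 2k+1 = 5, the centralizer computation in this corollary can be checked by brute force; the following snippet (an added illustration, not from the paper) enumerates S_4 and confirms that the centralizer of the Aut(_5)-action on _5 - {0} consists exactly of the 2k multiplication maps x -> ax.

```python
# Brute-force check for 2k+1 = 5: the centralizer in S_4 of the action of
# Aut(Z_5) on Z_5 \ {0} is cyclic of order 4, given by the maps x -> a*x mod 5.
from itertools import permutations

pts = (1, 2, 3, 4)
gen = {x: (2 * x) % 5 for x in pts}          # Aut(Z_5) is generated by x -> 2x

def commutes(p):
    s = dict(zip(pts, p))                    # a permutation of {1,2,3,4}
    return all(s[gen[x]] == gen[s[x]] for x in pts)

centralizer = [p for p in permutations(pts) if commutes(p)]
mult_maps = [tuple((a * x) % 5 for x in pts) for a in pts]
print(len(centralizer))                          # 4 = 2k
print(sorted(centralizer) == sorted(mult_maps))  # True: exactly the maps x -> ax
```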
ยง LOGIC OF THE CRITICAL MULTI-CUBIC LATTICE
Up to this point, we have explored the algebraic structure of the critical multi-cubic lattice. In order to consider a propositional logic, we require a set of propositions and logical connectives. Since we are starting with a lattice, we begin with the connectives ∨ and ∧, and our set of propositions.
In order to embed our propositions as projections onto a Hilbert space as is standard with quantum logic, we proceed to generalize <cit.> to the critical multi-cubic lattice. In the following theorem, we are using the infinite tensor product detailed in <cit.>.
Let H be a Hilbert space constructed as an infinite tensor product of vector spaces of dimension 2k, k โ, over an index set I. For the Hilbert lattice HL, there exists a critical multi-cubic lattice M such that M โ HL, and the atoms of M are projections onto subspaces forming an orthonormal basis of H.
We see that each simple tensor โ_i โ I a_i, a_i โ_2k+1 - {0}, is a C-sequence <cit.>, and we represent it by a respective projection operator. We use the notation ฮ _i โ I a_i to define an element m forming the atoms of M an |I| critical multi-cubic lattice over _2k+1.
Let ∨_M denote join in M and i: M → HL. We need only show that for all a, b โ M, i(a ∨_M b) โ HL. Let a = ฮ _i โ I a_i and b = ฮ _iโ I b_i; then a ∨_M b = ฮ _i โ J a_i + X_I - J, where J = {i โ I: a_i = b_i}, and i(a ∨_M b) is the projection V = โ_i โ I V_i, with V_i = a_i for i โ J and V_i = C^2k otherwise. By atomisticity of Proposition <ref>, i is an order homomorphism and therefore a lattice homomorphism.
The simple tensors are all rank 1 and therefore atoms in HL. The atoms of M form an orthonormal system in H. For any distinct atoms a, b โ M, we have that there exists i โ I such that a_i ≠ b_i, so โจ a_i , b_i โฉ_H_ฮฑ = 0, which implies that โจ a ,b โฉ_H = 0. Furthermore, these vectors span ฮ 'โ_ฮฑโ I H_ฮฑ, and therefore are dense in H.
To relate to our previous notation, we can see that for all a โ M, a can be considered as an element p โ HL by the tensor coordinatization p_i = I_2k for i โฯ(a) and p_i = p_a_i, the projection onto the a_i subspace of C^2k, otherwise.
We want to highlight that the lattice homomorphism defined in Theorem <ref> is defined on the lattice M considered as a subset of HL and not a a lattice homomorphism from M to the lattice HL.
We have seen that we can embed a critical multi-cubic lattice into a Hilbert lattice in much the same way as the cubic lattice. From an analytic perspective these objects have been shown to share many of the same qualities. However, this is where the similarities largely stop. By the arguments of the previous section, we have removed the cardinality constraints of <cit.>, and defined and abstracted to critical multi-cubic automorphisms.
Now that we have projections acting as propositions, we can explore an additional connective, complement. One of the interesting characteristics of the cubic lattice is that we had an order preserving complementation, ฮ. In fact, the action of ฮ on the coatoms of the cubic lattice embedded in the Hilbert lattice exactly matched the action of negation on the same coatoms. However, we lose this in the general case.
The action of ^โฅ on a critical multi-cubic lattice M over _2k+1, k > 2, embedded in the Hilbert lattice HL in the manner of Theorem <ref> does not define a permutation of C, the coatoms of M.
We can show that ^โฅ does not map coatoms to coatoms. Fix a coatom ae_i = c โ C; then c^โฅ = ∨_b โ_2k+1- {a,0} be_i, which is not even in M.
Although the action โฅ does not act nicely on the critical multi-cubic lattice, we still have an order preserving map that is nearly a complement. We are now in a position to readdress the definition of ฮ in the context of a critical multi-cubic lattice. We first recall the definition of ฮ on the multi-cubic lattice.
Let M be a multi-cubic lattice and a, b โ M with a โค b. We define ฯ(a) to be the subset of ฮฉ with i โฯ(a) if and only if a_i ≠ 0, and ฮ: MรMโM such that ฮ(b,a) = 2ฮ(b) - ฮ(a) + X_ฯ(a) <cit.>.
Equivalently, we can define ฮ purely by its action on the multi-cubic lattice.
Let M be a multi-cubic lattice and a,b โM, a โค b. Then we define ฮ: MรMโM by
ฮ(b,a)_i = -a_i if a_i โ X_ฯ(b) -ฯ(a), and ฮ(b,a)_i = a_i otherwise, for all i โ I.
As a quick justification, we note that both actions flip the sign of a_i if and only if ฮ(a)_i is nonzero and ฮ(a)_i โ X_ฯ(b). Another phrasing is that if a and b are critical with a โค b, then the coordinate, i, will change sign exactly when i โ{B - A}. However, this action defined on product modules will be defined as just another module homomorphism when b = 1.
By the definition of ฮ, one can directly see that it utilizes the module structure of the MCL. However, we can abstract ฮ by only focusing on its action and define it in the context of a multivalued signed set of the MCL. This will allow us to generalize the ideas proven for the cubic lattice.
Let M be a critical multi-cubic lattice over _2k+1. For all n โ M, we define ฮ_M(n, ยท) as multiplication by 2k on the respective ideal (n).
Let M be a critical multi-cubic lattice, n โ M. Then ฮ_M(n, ยท) always exists and is well defined.
For any odd 2k+1 โฅ 3, ฮ_M is just scalar multiplication by 2k, which exists by construction. As gcd(2k, 2k+1) = 1, it is well defined by Proposition <ref>.
The fact that such a map factors through the projection map should not be surprising as the original definition operated on on the equivalence class representatives.
Let M be a critical multi-cubic lattice such that M ≅ L(S) for an indexing set S, with the lattice isomorphism f of Theorem <ref>. The action of ฮ_M(n,m) on M is equal to the action of ฮ(f(n), f(m)) on L(S).
As the cubic lattice is atomistic, we need only show that the claim holds for the atoms of the respective ideal. By the proof of Lemma <ref>, we have a well defined ฮ_M(n, m) that is equivalent to multiplication by -1 on the ideal (n).
When considering the action of the pushforward f โฮ_M(n,m) on L(S), we swap all nonzero indices of m โฯ(n) that are not equal to 1 to -1 or vice versa. If we let (B^-, B^+) = f(n) and (A^-, A^+) = f(m), then the pushforward is equal to (B^- โช{A^+ - B^+}, B^+ โช{A^- - B^-}) = ฮ(f(m), f(n)).
While the interplay of ฮ and โฅ is lost, ฮ has some of the properties of an order preserving complement.
Let M be a critical multi-cubic lattice, a, b โ M. Then the following hold.
* ฮ(a,a) = a
* if a โค b, then ฮ(b,a) โค b
* if a โค b, then ฮ(b,a) = 2b - a
* if a โค b then ฮ(b, ฮ(b,a)) = a
* if a โค b โค c then ฮ(c,a) โคฮ(c,b)
* if a โค b, then ฮ(b,a) = b if and only if a = b
* if a < b, then ฮ(b,a) = b or ฮ(b,a) b = โ
The result follows from <cit.> and the fact that ฮ factors through the quotient map.
We can see that ฮ is order preserving (2,4,5), and it acts as the identity on the diagonal elements (1) of the product M ร M. In addition, it retains a notion of a local complement (6,7).
Lastly, we look at the logical connective of implication.
Let M be a multi-cubic lattice, a, bโM. Define implication by b → a = ฮ(a) + X_ฯ(a) โชฯ(b)^c <cit.>.
Multi-cubic lattices form implication algebras <cit.>.
Using the above result, we only need to show that implication is well defined on the quotient in order to form an implication algebra.
Implication is well defined when considered on the critical multi-cubic lattice.
As the definition of implication is in terms of the critical element of a, we have that b → a = b →ฮ(a). Therefore it is sufficient to show b → a = ฮ(b) → a.
We note that for all c โ M such that ฮ(c) = ฮ(b), we have that X_ฯ(c) = X_ฯ(b), and equivalently X_ฯ(c)^c = X_ฯ(b)^c. Therefore the equality follows.
Critical multi-cubic lattices are implication algebras with implication defined by b → a = a + X_ฯ(a) โชฯ(b)^c.
This follows directly from Proposition <ref> and Lemma <ref>.
In summation,
A critical multi-cubic lattice M, together with its join, meet, ฮ, and implication operations, is an implication algebra, with a weak form of complementation and propositions faithfully embedded as projections onto a Hilbert space.
We believe that this is the beginnings of a propositional quantum logic for higher spin qudits. The relation to qudits will be made clear in the following section.
ยง OPERATOR ALGEBRAS OF A CRITICAL MULTI-CUBIC LATTICE
Now that we have an embedding of the critical multi-cubic lattice into a Hilbert space, we can consider the representation of its automorphism group. From this, we obtain the generalized Pauli matrices and the quantum Fourier transform. To our knowledge, this is the first time that these operators have acted on an infinite system of qudits.
Let M be a critical multi-cubic lattice such that M โ HL as in Theorem <ref>, C be the coatoms of M, and A be the atoms of M. We denote the projection operator onto c โ C as p_c, and the projection operator onto a โ A as p_a.
Let M be an |I| critical multi-cubic lattice over _2k+1. Then there exists a unitary representation ฯ: Aut(M) → B(H), where H is constructed in Theorem <ref>.
For any ฯโ Aut(M), we equivalently consider ฯโ Per_Aut(_2k+1) C by Lemma <ref>. We use coatomicity of the critical multi-cubic lattice to define an action on the atoms of M. As the atoms of M form an orthonormal basis of H, we can then consider the linear extension to obtain ฯ(ฯ). Since ฯ(ฯ) sends an orthonormal basis to an orthonormal basis, we have that it must be unitary.
Ultimately, we have made a choice in our construction of ฯ. This is because we have an issue of ambiguity when translating actions on projection operators to a given subspace. As an example, ยฑ I both define exactly the same action on the atoms of M when considered as unitary similarity transformations of projection operators. We now discuss an equivalent way of defining ฯ that highlights this. Recall that the operators p_a are rank one projections onto vectors forming an orthonormal basis of H. From another perspective, every atom is an infinite tensor product of exactly one element a_i โ H_i of our original choice of orthonormal basis of H_i for each i โ I. In order to reduce ambiguity, for each p_a and a given permutation ฯ: {p_a} → {p_a}, we choose the vector a = โ_i โ I a_i as an equivalence class representative of the vectors {h โ H: p_a = h h} = {z a: z โโ B_1(0) โ}, so ฯ(ฯ): H → H is defined by the map a โฆฯ(a).
As a consequence of this choice, we have the following.
Let M โ HL as in Theorem <ref> and U โ W^โ(ฯ(Aut(_2k+1)))' be unitary. There exists a unitary V โฯ(Aut(M)) such that Ad_U = Ad_V : M → M and U = VS for S โ W^โ({p_c}_c โ C) โฉ W^โ(ฯ(Aut(_2k+1)))'.
If U โ W^โ(ฯ(Aut(_2k+1))', then Ad_U โฯ(Aut(M)) by Lemma <ref>. Now let V = ฯ(Ad_U) โ W^โ(U_)'. Then Ad_V^โ = Ad_V^-1, so Ad_UV^โ|_M = Ad_I|_M. As the action of inner automorphism stabilizes M, UV^โโ W^โ({p_c}_c โ C)', so there exists S โ W^โ({p_c}_c โ C)' = W^โ({p_c}_c โ C) <cit.> such that U = VS. Furthermore, S = UV^โ, and we conclude that S โ W^โ(ฯ(Aut(_2k+1)))' as well.
W^โ(Aut(_2k+1))' = W^โ(ฯ(Per_Aut(_2k+1) C), W^โ({p_c}_c โ C) โฉ W^โ(Aut(_2k+1))').
This proof is a direct application Proposition <ref>.
As an interesting side note, when reduced to the finite case, the following corollary can be viewed as generalization of the group theoretic result that Z(B_n) is equal to the identity and the antipodal map.
Let M be a critical multi-cubic lattice over _2k+1. Then Z(ฯ(Aut(M))) = ฯ(Aut(_2k+1)).
We only show one containment in Proposition <ref>. The reverse containment follows by Proposition <ref> and that ฯ(Aut(M)) โฉ W^โ({p_c}_c โ C) = I.
We now reconstruct the relevant matrix unit structure of B(H) in terms of critical multi-cubic lattice automorphisms.
Let M be an |I| critical multi-cubic lattice over _2k+1, C_ฮฑ be the coatoms of M for a fixed index ฮฑโ I, and H be constructed in the manner of Theorem <ref>. Then B(H) ≅ M_2k(B), where B is the mutual commutant of a set of matrix units.
Let (ยท) denote the respective element in the permutation group contained in M_2k(_2) represented in the standard basis, and C_ฮฑ denote the coatoms for a fixed index ฮฑโ I. Now we claim the following matrix units form matrix units of B(H). For i โ C_ฮฑ:
e_ii = p_c
e_ij = e_ii (ij)
e_ji = (ij) e_ii
We can directly compute that โ_i โ C_ฮฑ e_ii = I, e_ij = e_ji^โ as the permutation group is a subgroup of the unitary group, and e_ije_kl = ฮด_jke_il. Therefore, B(H) ≅ M_2k(B), where B is the commutant of the matrix units; see Lemma 4.27 of <cit.>.
Let H be constructed as in Theorem <ref>. We define the unitary representation ฯ_i: C_S_2k(Aut(_2k+1)) → B(H) as elements of the permutation group in M_2k(_2) acting on the ith index of the tensor product defining H.
With the conditions of Lemma <ref>, W^โ(ฯ_i(C_S_2k(Aut(_2k+1))), {p_C_i}_l=1^2k) โ W^โ({e_ij}_i,j =1^2k).
We claim W^โ(ฯ_i(C_S_2k(Aut(_2k+1))), {p_C_i}_l=1^2k) โ W^โ({e_ij}_i,j =1^2k).
The projections onto the appropriate coatoms are the diagonal elements by construction. In place of enforcing the conditions to be a cubic lattice, we have that the module conditions of the critical multi-cubic lattice must be preserved.
We identify ฯโ C_S_2k(Aut(_2k+1))) with ฯ_ฯโ S_2kโ M_2k(_2) represented by the permutation group. Thus, any element of ฯ_i(C_S_2k(Aut(_2k+1))) is a linear combination of matrix units.
Note that our previous results of <cit.> only fully generalize to a particular subset of critical multi-cubic lattices.
With the conditions of Lemma <ref>, B = W^โ(ฯ_i(C_S_2k(Aut(_2k+1))), {p_C_i}_l=1^2k)' if and only if 2k+1 is prime.
It is sufficient to consider when W^โ({e_ij}_i,j =1^2k) โ W^โ(ฯ_i(Aut(_2k+1)), {p_C_i}_l=1^2k), which is equivalent to the case when the action of ฯ_i(C_S_2k(Aut(_2k+1))) on {e_ii}_i = 1^2k generates {e_ij}_i,j = 1^2k. This occurs exactly when C_S_2k(Aut(_2k+1)) acts transitively on _2k+1- {0} with our relabeling of Theorem <ref>.
Let e ≠ ฯโ Aut(_2k+1). Consider C_S_2k(ฯ) ≅ ฮ _j=1^l (_jโ S_N_j). The action of ฮ _j=1^l (_jโ S_N_j) on ฯ must preserve the cycle type of ฯ. Let 1 โค i < j โค 2k; then the action of ฮ _j=1^l (_jโ S_N_j) can map i to j only if i and j are both in cycles of the same length. Thus, if ฮ _j=1^l (_jโ S_N_j) acts transitively, ฯ must have a cycle decomposition of m cycles of length 2k/m where m divides 2k. On the other hand, these cycles are the orbits of the action of โจฯโฉ on _2k+1, disregarding the orbit of e โ_2k+1. For a โ_2k+1, if |a| = 2k+1, then |orb(a)| = ฯ(a), where ฯ denotes the Euler totient function. If |a| < 2k+1, then |orb(a)| = ฯ(2k+1)/ฯ(d) for some 1 < d | (2k+1). As 2k+1 is odd, d > 2 and ฯ(d) > 1. Therefore, ฯ(2k+1)/ฯ(d) ≠ ฯ(2k+1), so 2k+1 must be prime.
Conversely, if 2k+1 is prime, recall that C_S_2k(Aut(_2k+1)) = C_S_2k(ฯ) ≅ _2k, whose action as a 2k cycle in S_2k is transitive on {a: 1 โค a โค 2k}.
We generalize the Hadamard matrix for a critical multi-cubic lattice. Let M be an |1| critical multi-cubic lattice over _2k+1 where 2k+1 is prime. Let _2k ≅ ฯ(C_S_2k(Aut(_2k+1))) be generated by the unitary X_2k. As X_2k is unitary, there exists a unitary U_2k such that U_2k^โ X_2kU_2k = D_2k, where D_2k is the diagonal matrix of the 2k-th roots of unity in counterclockwise order. In the case where 2k+1 = 3, X_2k = X and U_2k = H, where X is the Pauli gate and H is the normalized Hadamard gate.
Let M be an |I| critical multi-cubic lattice over _2k+1 where 2k+1 is prime. Define U_H = โ_i โ I U_2k,
As noted if M is an |1| critical multi-cubic lattice over _3, then U_2k is the Hadamard matrix up to a normalization constant. If M is an |1| critical multi-cubic lattice over _5, then
U_2k = 1/2[ 1 1 1 1; 1 i -1 -i; 1 -1 1 -1; 1 -i -1 i; ],
X_2k = [ 0 1 0 0; 0 0 1 0; 0 0 0 1; 1 0 0 0; ], and
D_2k = [ 1 0 0 0; 0 i 0 0; 0 0 -1 0; 0 0 0 -i; ]
.
In general up to a normalization constant, [U_2k]_ij = ฯ_j^i-1 where ฯ_j is jth element of the 2k roots of unity with counterclockwise ordering starting at 1. We recognize U_2k as the quantum Fourier transform, X_2k as the shift matrix, and D_2k as the clock matrix.
Originally introduced by <cit.>, it is well known that these matrices form the framework of quantum mechanical dynamics in finite dimensions. Furthermore, X_2k and D_2k are known as Generalized Pauli matrices, which are a representation of the respective Heisenberg-Weyl group <cit.>.
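The identity U_2k^โ X_2k U_2k = D_2k can be verified numerically; the short script below (an added illustration using standard conventions, not code from the paper) does so for the 2k = 4 case displayed above.

```python
# Numerical check for 2k = 4 (the Z_5 case): the quantum Fourier transform U
# diagonalizes the shift matrix X, yielding the clock matrix D, i.e. U^* X U = D;
# for 2k = 2 the same formula reduces to the normalized Hadamard matrix.
import numpy as np

N = 4                                        # 2k
omega = np.exp(2j * np.pi / N)               # primitive 2k-th root of unity (= i)
U = np.array([[omega ** (i * j) for j in range(N)]
              for i in range(N)]) / np.sqrt(N)
X = np.roll(np.eye(N), 1, axis=1)            # shift (generalized Pauli X)
D = np.diag(omega ** np.arange(N))           # clock (generalized Pauli Z)

assert np.allclose(U.conj().T @ U, np.eye(N))    # U is unitary
assert np.allclose(U.conj().T @ X @ U, D)        # U^* X U = D
```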
Let X_(2k)_i = โ_j โ I A_j, where A_i = X_2k and A_j = I_2k for all j i. Similarly, let D_(2k)_i = โ_j โ I A_j, where A_i = D_2k and A_j = I_2k for all j i.
Let M be an |I| critical multi-cubic lattice over _2k+1 where 2k+1 is prime and C be the coatoms of M. Then W^โ(ฯ(C_S_2k(Aut(_2k+1)))) โ W^โ({X_(2k)_i}_iโ I) โ W^โ({U_Hp_c U^โ_H}_c โ C).
We first show W^โ(ฯ(C_S_2k(Aut(_2k+1)))) โ W^โ({X_(2k)_i}_iโ I). Since 2k+1 is prime, C_S_2k(Aut(_2k+1)) = Aut(_2k+1) ≅ _2k, so we reduce to the case of the representation of the cyclic group ฯ(_2k), which is a subset of the standard representation of the permutation group for our chosen basis of Theorem <ref>. As ฯ(_2k) = โ_iโ I X_(2k)_i by construction, we have shown the first containment.
Now we show W^โ({X_(2k)_i}_iโ I) โ W^โ({U_Hp_c U^โ_H}_c โ C). We have D_(2k)_i = โ_j=1^2kฯ_j p_j where {ฯ_j} are the 2k cyclic roots of unity and p_j are the rank one projections onto the basis used to construct H_i in Theorem <ref>, so that D_(2k)_i is diagonal in our choice of basis. As all p_j are atoms of M, D_(2k)_iโ W^โ({p_c}_c โ C) by coatomisticity. Lastly, we conclude that X_(2k)_i = U_H D_(2k)_i U_H^โโ W^โ({U_Hp_cU_H^โ}_c โ C).
Let M be an |I| critical multi-cubic lattice over _2k+1 where 2k+1 is prime and C be the coatoms of M. Then
W^โ({U_Hp_c U^โ_H}_c โ C) โ W^โ({ฯ_i(C_S_2k(Aut(_2k+1)))}_iโ I).
It is sufficient to reduce to an arbitrary i โ I. By Proposition <ref>, we need only show that W^โ(X_(2k)_i) = W^โ({U_H p_ij U_H^โ}_j = 1^2k), where p_ij is the projection onto the jth atom forming an orthonormal basis of H_i used to construct H in Theorem <ref>.
We have already shown one containment, so it is sufficient to show that the vector spaces have equal rank. This follows as rank(W^โ(X_(2k)_i)) = 2k: we have a matrix with distinct eigenvalues, so X_(2k)_i has minimal polynomial equal to its characteristic polynomial, x^2k - 1. rank(W^โ({U_H p_ij U_H^โ}_j = 1^2k)) = 2k as well, since it is just a unitary transformation of 2k orthogonal projections.
Let M be an |I| critical multi-cubic lattice over _2k+1 where 2k+1 is prime and C be the coatoms of M. Then
W^โ({ฯ_i(C_S_2k(Aut(_2k+1)))}_i โ I) = W^โ({U_Hp_c U^โ_H}_c โ C) if and only if 2k+1 is prime.
If 2k+1 is prime, the result follows by Proposition <ref> and Proposition <ref>.
Assume that 2k+1 is not prime and W^โ({ฯ_i(C_S_2k(Aut(_2k+1)))}_i โ I) = W^โ({U_Hp_c U^โ_H}_c โ C). Specifically, this implies that C_S_2k(Aut(_2k+1)) contains a 2k cycle. In addition, C_S_2k(Aut(_2k+1)) is an abelian centralizer and therefore a maximal abelian subgroup of S_2k. Similarly, 2k cycles are abelian subgroups equal to their centralizer in S_2k, so they are maximal abelian subgroups. Then C_S_2k(Aut(_2k+1)) contains a 2k cycle if and only if C_S_2k(Aut(_2k+1)) is equal to a 2k cycle. This contradicts that Aut(_2k+1) โค C_S_2k(Aut(_2k+1)), as either Aut(_2k+1) = {1} or Aut(_2k+1) = C_S_2k(Aut(_2k+1)). The former is false as |Aut(_2k+1)| = ฯ(2k+1) > 1 for all k โฅ 1, and the latter holds if and only if 2k+1 is prime.
Our final theorem can be viewed as a generalization of the result that generalized Pauli matrices form an orthonormal basis.
Let M be an |I| critical multi-cubic lattice over _2k+1 where 2k+1 is prime, C be the coatoms of M, and H be constructed as in Theorem <ref>.
Then B(H) = W^โ({U_Hp_cU^โ_H}_c โ C, {p_c}_c โ C) = W^โ({ฯ_i(C_S_2k(Aut(_2k+1)))}_i โ I, {p_c}_c โ C) if and only if 2k+ 1 is prime.
By Lemma <ref>, the second equality holds if and only if 2k+1 is prime.
Now assume that 2k+1 is prime, and consider coatoms p_i โ{U_Hp_cU_H^โ}_c โ C and q_i โ{p_c}_cโ C for some fixed index i โ I. Then p_i ∧ q_i = lim_n→∞ (p_iq_ip_i)^n = lim_n→∞ (1/2kp_i)^n = 0. By construction, any atom a โ W^โ({U_Hp_cU_H^โ}_c โ C) is bounded by a coatom for each i โ I, so we assume without loss of generality that a โค p_i, and by symmetry we assume b โค q_i. Then a ∧ b โค p_i ∧ q_i = 0. Therefore the atomistic Boolean lattices of projections associated with {U_Hp_cU_H^โ}_c โ C and {p_c}_c โ C have distinct sets of atoms.
By atomisticity, W^โ({p_c}_cโ C) and W^โ({U_Hp_cU_H^โ}_cโ C) are abelian von Neumann algebras whose only common projections are 0 and I, so their intersection is I by <cit.>.
Lastly, we have a result about the possible transitions of the critical multi-cubic lattice using only gates associated with the automorphism group.
Let M be an |I| critical multi-cubic lattice over _2k+1. The action of Aut(M) acts transitively on the atoms if and only if 2k+1 is prime.
As Aut(M) ≅ C_S_2k(Aut(_2k + 1)) โ S_I, we need only show that C_S_2k(Aut(_2k + 1)) acts transitively on the coatoms on a fixed index i โ I. As the action is just standard modular multiplication, we use that C_S_2k(Aut(_2k + 1)) acts transitively on _2k+1 - {0} if and only if 2k+1 is prime. By coatomisticity of the critical multi-cubic lattice, we conclude the result.
Our original interest in the critical multi-cubic lattice began as a generalization of the cubic lattice. Fortuitously, as in <cit.>, the appropriate representation of the critical multi-cubic lattice created a framework for infinite systems of quantum gates, in this case qudits. In order to do so, we have lost the Boolean-adjacent properties of the cubic lattice namely axiom 2, and therefore a nice characterization of the logic that followed. An interesting future pursuit would be to more fully discuss the implicative structure of the critical multi-cubic lattice. Another pursuit is to discover an intermediate structure between the generality of the critical multi-cubic lattice and the Boolean-like logic of the cubic lattice.
In order to highlight the utility and importance of moving between a logical formulation and algebraic formulation, we conclude with a quote by J. Sylvester: The application of Algebra to Logic is now an old tale-the application of Logic to Algebra marks a far more advanced stadium of the human intellect, <cit.>.
|
http://arxiv.org/abs/2306.03363v1
|
20230606023010
|
Robust inference for the treatment effect variance in experiments using machine learning
|
[
"Alejandro Sanchez-Becerra"
] |
econ.EM
|
[
"econ.EM"
] |
Robust inference for the treatment effect variance in experiments using machine learning
Alejandro Sanchez-Becerra
July 31, 2023
=========================================================================================
Experimenters often collect baseline data to study heterogeneity. I propose the first valid confidence intervals for the VCATE, the treatment effect variance explained by observables. Conventional approaches yield incorrect coverage when the VCATE is zero. As a result, practitioners could be prone to detect heterogeneity even when none exists. The reason why coverage worsens at the boundary is that all efficient estimators have a locally-degenerate influence function and may not be asymptotically normal. I solve the problem for a broad class of multistep estimators with a predictive first stage. My confidence intervals account for higher-order terms in the limiting distribution and are fast to compute. I also find new connections between the VCATE and the problem of deciding whom to treat. The gains of targeting treatment are (sharply) bounded by half the square root of the VCATE. Finally, I document excellent performance in simulation and reanalyze an experiment from Malawi.
Keywords: Debiased machine learning, treatment effect heterogeneity, experiments, non-standard inference, variance decomposition
JEL Codes C14, C21, C55, C90
ยง INTRODUCTION
In recent years, there has been a rapid expansion of experiments to evaluate public policy programs and corporate initiatives. There is also more evidence that the effectiveness of a program can vary across individuals. For instance, <cit.> studies a population of low-income parents in Malawi with large misperceptions about their children's school performance. She finds that a simple intervention can bridge these gaps and that the Conditional Average Treatment Effect (CATE) varies by children's initial scores. In practice, even though researchers collect baseline surveys with many characteristics, the CATE is typically estimated via regressions with one or two interactions, thus underutilizing the full set of variables. The promise of leveraging the vast and readily available baseline data has sparked more applications of supervised machine learning <cit.>. Methods such as LASSO, neural networks, random forests, or boosting are data-driven and allow for more variables and flexibility.
In this paper, I focus on the unconditional variance of the CATE, the VCATE, which measures the dispersion of treatment effects predicted by a set of baseline characteristics. The VCATE has a clear interpretation, even if the CATE is nonlinear or depends on many characteristics. <cit.> and <cit.> separately propose estimators with a misspecification robust interpretation, whereas <cit.> propose an efficient estimator. Despite these recent advances, there are currently no valid confidence intervals for the VCATE. In fact, <cit.> study the performance of confidence intervals based on the efficient influence function, i.e., the conventional way. They find that coverage degrades near the boundary, reaching a low of 32% in simulations. They speculate that poor coverage is due to the degeneracy of the efficient influence function when the VCATE is zero. As a result, conventional guarantees for โ(n)-asymptotic normality do not apply to this part of the parameter space. Moreover, a VCATE close to zero is economically meaningful because it could reflect null effects, low effect heterogeneity, or irrelevant covariates. For such situations, which are common in practice, conventional approaches to inference could be misleading.
This paper provides fresh insights regarding the VCATE and proposes a solution for inference in experiments. I propose (a) novel ways to interpret the VCATE for decision-making by deriving sharp bounds on the population gains of personalized treatment assignment, (b) novel estimators of the VCATE that are both misspecification-robust and efficient, combining the best features of previous approaches, and (c) novel confidence intervals that are shape-adaptive and fast to compute.
I break down why conventional confidence intervals have incorrect size. I show that the boundary inference problem can manifest even when the CATE is linear and univariate. I solve the problem for the linear case by proposing adaptive confidence intervals that meet the high-level conditions outlined in <cit.>. I then show that these can be readily extended to the class of nonlinear models in <cit.> that combine regression adjustments and a machine learning first stage. In addition to providing new confidence intervals, my paper has novel implications for estimation by showing that only a subset of these multi-step estimators is efficient. Adaptive inference and regression adjustments work well for various predictive models under weak assumptions.[While I specialize my results to the VCATE, my inference approach relies on more general principles: I use knowledge of the limiting distribution function, conditional on the cross-fitted estimates. Correct coverage follows from verifying high-level assumptions that can be satisfied by a wide array of machine learning method used in the first step. In principle, my approach could extend to other non-standard inference problems.] In the fully nonparametric case, I use a conservative procedure with valid coverage over multiple sample splits.[This adjustment is in the spirit of <cit.>, who propose robust t-tests assuming a conditionally normal distribution. Their results are not directly applicable here due to the boundary inference problem. However, I apply the principles behind their โmedian-parameterโ confidence intervals to my adaptive intervals.] I derive the local power curve for the associated tests of homogeneity and their relationship to the tests in <cit.> and <cit.>. I also propose confidence intervals for settings with cluster dependence.
I document excellent root mean square error (RMSE) performance and coverage in simulations using LASSO, even in high dimensions. I benchmark my multi-step approach against a two-step debiased machine learning estimator. As predicted by theory, all approaches are asymptotically normal, efficient, and have good coverage in highly heterogeneous designs. However, when the VCATE is zero or close to zero, coverage of two-step alternatives can be as low as 45%. By contrast, my adaptive intervals produce coverage at the intended 95% level and better RMSE at all regions of the parameter space. I study the robustness of the multi-step approach in both the theory and simulations. I consider situations where the predictive component is misspecified or slow to converge. I also discuss issues related to uniform vs. pointwise coverage.
I apply my approach to data from <cit.>, an information experiment with low-income parents in Malawi who had at least two school-age children. The intervention redesigned the way in which parents received information about their children's school performance. The endline survey measured parental beliefs about student grades and asked parents to allocate tickets to a scholarship between their children. <cit.> presented graphs with a non-parametric CATE by baseline test scores, which had an approximately linear shape, and separately tested for significance using a regression with an interaction. I use LASSO to compute the VCATE for the two outcomes (parental beliefs and lottery allocations) by different characteristics of students, parents, and households. To make the results interpretable, I focus on the standard deviation of the CATE, i.e., โ(VCATE), and normalize it by the standard deviation of the outcome in the control group.
My approach allows us to quantify the magnitude of effect heterogeneity. I find that the treatment effect heterogeneity explained by test scores is equivalent to 40% of the standard deviation (SD) of the beliefs of the control group, and 16% of the SD of the control group lottery allocation. I also find that the effect heterogeneity collectively explained by other student variables (grade, age, gender, attendance, and educational expenditures) is comparable to 11% of the SD of beliefs in the control group. The VCATE of beliefs by student variables is significant at the 5% level, but the VCATE of lottery outcomes by student variables is not. The combined VCATE associated with student scores and 12 other key characteristics has a similar value to the VCATE with only scores. Despite being conservative, the intervals for the VCATE are short in length in this empirical example. Using my new welfare bounds (-|ATE| + โ(VCATE + ATE^2))/2, I predict that targeted interventions using the baseline covariates have a maximum added benefit of 7.9% SD and 7.4% SD (standard deviations of the outcome for the control group) on beliefs and lottery allocations, respectively.
Researchers should focus on the VCATE because it is a model-free quantity with good properties: it is well-defined even if the CATE is continuous or discrete, and it weakly increases when researchers add more covariates to their analysis. Researchers can test for homogeneity by evaluating whether confidence intervals for the VCATE include zero. In addition to testing, by quantifying the VCATE, researchers can compare the magnitude of heterogeneity relative to a benchmark, such as the variance at baseline, the VCATE for different covariates, or experiments in other sites.
ยง.ยง Contribution
My first main contribution is to show that the VCATE provides a bound for the welfare gains of policy targeting. A policymaker might decide to use the information from the CATE to design personalized treatment recommendations <cit.>. One can measure utilitarian welfare by computing the expected outcome under different policies. Such policies can be further constrained to a class that respects budget limits, incentive compatibility, or fairness considerations <cit.>. I show that the difference in mean outcomes between a targeted policy and a non-targeted policy using only the average treatment effect (ATE) is bounded by √(VCATE)/2. For instance, under homogeneity (VCATE = 0), there are no gains from targeting. I show that this bound holds in the population regardless of the choice of policy class and the underlying distribution. Furthermore, the bound is sharp in the sense that it holds exactly for at least one policy and distribution.
The proposed bound on utilitarian welfare communicates information to practitioners about whether a targeting exercise is even worth pursuing, without needing to solve the targeting problem itself. The VCATE can be a supplemental quantity reported in regression analyses, or a benchmark for analysts choosing the optimal policy. If the VCATE is very low, practitioners may consider expanding the set of covariates in the analysis. To derive the bound, I use a constructive approach to solve for the most adversarial distribution. I also prove a more general bound, (-|ATE| + √(VCATE + ATE^2))/2, and show that the distribution that leads to a maximum welfare gain is one where the CATE has binary support and mean zero. The gains from targeting diminish quickly when the ATE is large relative to the VCATE.
My second contribution is related to efficient estimation and robust inference. New theory is required here because of a unique feature of the VCATE: the efficient influence function is degenerate when the CATE is homogeneous <cit.>. Classical results by <cit.> show that any regular, efficient estimator can be decomposed as (1/n)∑_i=1^n ψ_i + R_n, where n is the sample size, {ψ_i}_i=1^n are i.i.d. mean-zero influence functions, and R_n is a residual with higher-order terms that are o_p(n^-1/2).[Many standard estimators can achieve this property, e.g., the "debiased machine learning" estimator <cit.> or the targeted maximum likelihood estimator in <cit.>.] Conventional approaches assume that 𝕍(ψ_i) > 0, and in this case the estimation error converges at √(n)-rate to 𝒩(0, 𝕍(ψ_i)) by the CLT. However, when the VCATE is zero, 𝕍(ψ_i) = 0 as well. Hence, the limiting distribution is dominated by the higher-order terms in R_n, which may not be asymptotically normal. Therefore, while √(n)-estimation is still possible, t-tests that plug in an estimate of 𝕍(ψ_i) may have incorrect coverage. By contrast, other common quantities such as the average treatment effect (ATE) or the local average treatment effect (LATE) do not have this problem because they satisfy 𝕍(ψ_i) > 0 uniformly <cit.>.
I start by analyzing a simple two-step estimator, assuming that the CATE is linear in covariates and can be estimated from a regression. I show that the limiting distribution of the VCATE estimator can be written as a linear combination of a chi-square term that converges at n-rate and a normal term that converges at √(n)-rate. The weights are determined by the value of the VCATE, which means that the shape of the distribution changes depending on the region of the parameter space. At the boundary, it behaves like a rescaled chi-square, is O_p(n^-1), and confidence intervals with normal critical values will have incorrect coverage. For values of the VCATE bounded away from zero, the distribution is asymptotically normal as in the classical results.
In the linear case, I construct adaptive confidence intervals that account for the higher terms of the distribution. I apply the framework of <cit.> to show that this produces uniform, exact coverage when the linear model is correctly specified.[This type of strategy has proven effective to deal with other non-standard problems where the shape of the limiting distribution depends on an unknown parameter, such as the AR coefficient in a time series, the effect parameter under weak instruments, or the quasi-likelihood ratio test for nonlinear regression <cit.>.] The intervals are fast to compute because the expressions are all analytic. When there is a single covariate, I also show that a homogeneity test that evaluates whether zero is contained in the confidence intervals is algebraically identical to (i) a test of whether the interaction in the regression model is equal to zero, and (ii) the single-covariate homogeneity test of <cit.>. The test is also asymptotically equivalent to <cit.>. However, these other tests only apply to series estimators and are not nested with mine in the multivariate and non-parametric cases.
I extend my results to the class of nonlinear models proposed by <cit.>. <cit.> showed that in experiments with known assignment probabilities, their models produce a meaningful pseudo-VCATE even if the functional form is misspecified. The pseudo-VCATE is non-negative, weakly lower than the VCATE, and converges to the true value under mild conditions on the estimated CATE.[This monotonicity property means that in experiments the pseudo-VCATE will not falsely detect heterogeneity, even if the machine learning stage is misspecified.] <cit.> argue that the pseudo-VCATE might be of independent interest as a measure of model fit.[ <cit.> also define a similar pseudo-VCATE based on randomization inference.] They describe a three-step estimator with a machine learning/prediction first-stage, a regression second-stage, and a sample-variance third stage. Related multi-step estimators have also been considered in other work <cit.>.
To the best of my knowledge, there are no existing asymptotic results for the multi-step VCATE estimator proposed in <cit.>. I fill in that gap by proving two sets of results. First, I show that all the estimators in their class converge to the true VCATE at least at √(n)-rate, are o_p(n^-1/2) at the boundary (as in the simple linear model), and have the convenient property that they are always non-negative. This builds on the asymptotic expansion for the linear case introduced above. Second, I prove that only a subset of the <cit.> estimators are efficient, i.e., converge at √(n)-rate to an average of i.i.d. efficient influence functions. The key ingredient is a novel finite-sample equivalence result. I find that the first-order conditions of the regression step and the bias-correction component of the VCATE influence function are in fact identical, given a particular decomposition of the nuisance functions. The asymptotic results follow from fairly standard assumptions on convergence rates <cit.>. To get the limiting distribution, the only meaningful extra assumption is that the estimated CATE has bounded kurtosis (thin tails).
I show that extending the adaptive confidence intervals (CIs) to multi-step estimators is straightforward. The procedure randomly splits the data into subsets or folds and estimates the nuisance functions and the VCATE on different folds. To compute the confidence intervals for a particular fold, the researcher can treat the second-stage regression as if the variables were given, and then construct the CIs as in the simple case. I construct median confidence intervals (CIs) to aggregate information across multiple folds. I show that the single fold procedure produces uniform, exact coverage for the pseudo-VCATE and point-wise, exact coverage for the VCATE for all points in the parameter space, at a nominal level 1-ฮฑ. The multifold CIs have pointwise conservative coverage.
Furthermore, the probability that the true VCATE is below the confidence interval bounds is uniformly bounded by α in large samples. This result applies to the single-fold and multifold CIs and does not require the first-stage estimates to converge. Instead, it relies on the fact that in experiments the pseudo-VCATE is weakly lower than the true VCATE. Tests for homogeneity (whether zero is contained in the CI) belong to this broader class of tests. Having uniform size control for this class of one-sided tests means that my tests of homogeneity are robust.
This paper is also related to a growing literature on debiased-machine learning <cit.>, semiparametric efficiency <cit.>, uniform inference for non-standard problems <cit.>, and tests of treatment effect homogeneity <cit.>. My approach combines results from these literatures by addressing a boundary inference problem with a machine learning stage, and applying techniques of uniform inference. A related literature also focuses on confidence intervals around point-predictions of the CATE <cit.>, rather than overall measures of dispersion.
Section <ref> provides key definitions, introduces the welfare bound, and presents a version of the adaptive confidence intervals for the univariate regression case. Section <ref> frames the inference problem in a more general setting, and extends the adaptive confidence intervals for VCATE estimation with a machine learning first stage. Section <ref> presents the large sample theory. Section <ref> introduces the simulations. Section <ref> applies my approach to an empirical example from Malawi. Section <ref> concludes.
ยง OVERVIEW OF FRAMEWORK
Consider a program evaluation setting in which an individual is assigned to either a treatment (D=1) or a control group (D= 0). The outcome of interest Y depends on the treatment status. I denote the potential outcome under treatment and control status as Y_1 and Y_0, respectively, and the treatment effect as Y_1 - Y_0. The conditional average treatment effect (CATE) given covariates X is defined as
τ(x) := 𝔼[Y_1 - Y_0 | X = x],
and the average treatment effect (ATE) is defined as τ_av := 𝔼[Y_1 - Y_0]. This paper proposes an estimator of the variance of the CATE (VCATE), defined as
V_τ := 𝕍(τ(X)).
The variance V_τ measures the dispersion of treatment effects that can be attributed to observable characteristics X. The value of V_τ depends on the choice of covariates. To understand how different covariates might impact the VCATE, let V_τ' = 𝕍(𝔼[Y_1 - Y_0 | X']) be the VCATE for a different set of covariates X'.
If X is X'-measurable, then V_τ ≤ V_τ' ≤ 𝕍(Y_1 - Y_0).
Lemma <ref> shows that the VCATE has the following monotonicity property: if the researcher adds more covariates to the analysis, or breaks down an existing covariate into more categories, then the VCATE will be weakly larger.
The propensity score, p(x), is defined as follows
p(x) := ℙ(D = 1 | X = x).
I restrict attention to experimental settings where p(x) is known. The CATE can be identified under further assumptions.
(i) Stable unit treatment value assumption (SUTVA): Y = Y_1 D + (1-D) Y_0; (ii) Strong overlap: there is a constant δ ∈ (0,1/2) such that ℙ(δ < p(X) < 1-δ) = 1; (iii) Selection on observables: (Y_1, Y_0) ⊥ D | X.
Assumption <ref>.(i) formalizes the idea that the researcher can only observe either Y_1 or Y_0, but not both, for any particular individual. Assumption (ii) holds in randomized controlled trials with treatment probabilities bounded away from {0,1}. Assumption (iii) states that an individual's treatment probability depends on X but not on their potential outcomes. Let μ_d(x) be the conditional mean of Y given X and a fixed value of d ∈ {0,1},
μ_d(x) := 𝔼[Y | D = d, X = x].
Under Assumption <ref>, 𝔼[Y_d | X = x] = μ_d(x), and hence τ(x) = μ_1(x) - μ_0(x). This means that the VCATE is identified, with V_τ = 𝕍(μ_1(X) - μ_0(X)).
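To make the identification result concrete, the following minimal sketch (my own illustration, with a hypothetical data-generating process) simulates a randomized experiment and checks that the sample analog of 𝕍(μ_1(X) - μ_0(X)) matches the known VCATE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical design: X ~ N(0,1), p(x) = 0.5, mu_0(x) = x, tau(x) = 1 + 0.5 x,
# so the true VCATE is 0.5^2 * Var(X) = 0.25.
X = rng.normal(size=n)
D = rng.binomial(1, 0.5, size=n)
tau = 1.0 + 0.5 * X
Y0 = X + rng.normal(size=n)
Y1 = Y0 + tau
Y = D * Y1 + (1 - D) * Y0        # SUTVA: only one potential outcome is observed

# Identification check: V_tau = Var(mu_1(X) - mu_0(X)) = Var(tau(X))
print(round(np.var(tau), 3))      # approximately 0.25
```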
ยง.ยง The VCATE and policy targeting
Practitioners can use estimates of τ(X) to decide whom to treat in future interventions <cit.>. Program managers can target the treatment recipients based on their initial covariates. However, whether targeting can substantially improve average outcomes depends on the dispersion of τ(x). I show that a simple function of the VCATE bounds the marginal gains of targeting.
Let γ denote the joint distribution of (X, Y_1, Y_0), 𝒳 a set containing the support of X, and τ_γ(x) := 𝔼_γ[Y_1 - Y_0 | X = x] the CATE given γ. A function which maps x to a probability of treatment π(x) is known as a statistical allocation rule <cit.>. Furthermore, I denote the set of all possible allocation rules by Π, which contains all functions {π : 𝒳 → [0,1]}. The set Π includes many well-known assignment rules. For instance, it includes the "non-targeted" policy which assigns everyone to treatment if 𝔼_γ[Y_1] > 𝔼_γ[Y_0] and to the control group otherwise. Moreover, the average outcome under rule π is 𝔼_γ[π(X)Y_1 + (1-π(X))Y_0], and the marginal benefit compared to the non-targeted policy is defined as 𝒰_γ(π) := 𝔼_γ[π(X)Y_1 + (1-π(X))Y_0] - max{𝔼_γ[Y_1], 𝔼_γ[Y_0]}.
Let Γ denote the set of distributions such that 𝕍_γ[τ_γ(X)] = V_τ. For all γ ∈ Γ and π ∈ Π,
𝒰_γ(π) ≤ sup_π ∈ Π 𝔼_γ[π(X)Y_1 + (1-π(X))Y_0] - max{𝔼_γ[Y_1], 𝔼_γ[Y_0]} ≤ (1/2)√(V_τ), where the first term after the inequality is welfare under optimal targeting and the second is welfare without targeting.
The bound is sharp in the sense that 𝒰_γ(π) = (1/2)√(V_τ) for at least one γ ∈ Γ and π ∈ Π.
Consider distributions where 𝔼_γ[τ_γ(X)] = τ_av and 𝕍_γ[τ_γ(X)] = V_τ; then 𝒰_γ(π) ≤ (1/2)(-|τ_av| + √(V_τ + τ_av^2)). This bound is sharp over this subset of distributions.
Theorem <ref> shows that the VCATE provides a welfare bound over the largest possible policy class. Furthermore, any restriction on Π, such as budget constraints or incentive compatibility, yields utilitarian welfare gains that are weakly lower than (1/2)√(V_τ). The bound in Theorem <ref> and the generalization in Theorem <ref> provide simple bounds on the prospective gains of targeting, without needing to solve for π(x). The bounds are most informative when V_τ is low. For instance, when V_τ = 0 there is no heterogeneity explained by the observables X and therefore there are no gains from targeting. However, the fact that the bound is sharp does not imply that it is always achievable for every γ ∈ Γ, and when V_τ is high it may still be necessary to optimize π(x) to determine whether personalized offers are worthwhile.
Finding the bound in Theorem <ref> relies on two important insights. On one hand, the optimal policy in Π treats an individual if and only if τ(x) ≥ 0 <cit.>. Substituting the optimal policy, sup_π ∈ Π 𝒰_γ(π) is equal to 𝔼_γ[max{τ_γ(X), 0}] - max{𝔼_γ[τ_γ(X)], 0}. On the other hand, to avoid optimizing over all γ ∈ Γ, I break the problem down into equivalence classes based on the moments of the negative, zero, and positive components of the CATE. I use a constructive approach to derive the most "adversarial" distribution. The upper bound is achieved when the CATE has binary support, which is partly why the bounds in Theorems <ref> and <ref> have simple closed forms.
Let κ_1, κ_2 ∈ ℝ and define a new outcome Ỹ = κ_1 + κ_2 Y. The maximum welfare gain for the transformed outcome is (|κ_2|/2)(-|τ_av| + √(V_τ + τ_av^2)).
Corollary <ref> shows that the welfare bound is invariant to location shifts in the outcome and grows linearly with scale shifts. This result implies that transformations that change the sign, e.g., κ_2 = -1, do not change the value of the welfare bound. Consequently, the bound applies regardless of whether the welfare objective is to increase a desirable outcome or to decrease an undesirable one.
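The closed-form bound is easy to report alongside an estimated ATE and VCATE. The helper below (the function name is my own) is a minimal sketch of the formula (-|ATE| + √(VCATE + ATE^2))/2, which reduces to √(VCATE)/2 when the ATE is zero.

```python
import numpy as np

def targeting_gain_bound(ate: float, vcate: float) -> float:
    """Upper bound on the welfare gain of optimal targeting over the best
    non-targeted policy: (-|ATE| + sqrt(VCATE + ATE^2)) / 2."""
    return 0.5 * (-abs(ate) + np.sqrt(vcate + ate ** 2))

# A large ATE relative to the VCATE shrinks the potential gain from targeting.
print(targeting_gain_bound(ate=0.0, vcate=0.04))   # 0.10
print(targeting_gain_bound(ate=0.5, vcate=0.04))   # roughly 0.019
```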
ยง.ยง Inference using regressions
Consider a simple situation where X is real valued, the treatment D is experimentally assigned with constant probability, and U is a mean zero error term. The researcher runs the following linear regression,
Y = c_1 + c_2 X + β_1 D + β_2 D X + U, 𝔼[(1, X, D, DX)'U] = 0.
Define the auxiliary quantities τ^*(x) := β_1 + β_2 x and V_x := 𝕍(X). The pseudo-VCATE is defined as
V_τ^* := 𝕍(τ^*(X)) = β_2^2 V_x.
The pseudo-VCATE has a close connection to the VCATE. If the linear model describes the conditional mean μ_d(x), then τ^*(x) = τ(x) and V_τ = V_τ^*. For instance, in models with binary X, the functional form is correctly specified and V_τ = V_τ^*. For now, assume that the pseudo-VCATE and the VCATE coincide. In later sections, I analyze models that allow for misspecification.
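As a minimal sketch of the estimator analyzed in this subsection (the simulated design and function name are my own), the interacted regression can be fit by ordinary least squares and the pseudo-VCATE computed as the squared interaction coefficient times the sample variance of X.

```python
import numpy as np

def linear_vcate(y, x, d):
    """Pseudo-VCATE from the regression Y = c1 + c2 X + b1 D + b2 D X + U,
    estimated as beta2_hat^2 * Var_hat(X)."""
    W = np.column_stack([np.ones_like(x), x, d, d * x])
    coef, *_ = np.linalg.lstsq(W, y, rcond=None)
    return coef[3] ** 2 * x.var()

# Hypothetical experiment with CATE = 0.3 + 0.4 X, so VCATE = 0.16 * Var(X).
rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
d = rng.binomial(1, 0.5, size=n)
y = x + d * (0.3 + 0.4 * x) + rng.normal(size=n)
print(round(linear_vcate(y, x, d), 3))   # close to 0.16
```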
Consider a sequence of distributions {γ_n}_n=1^∞ ⊂ Γ^∞. I index the regression coefficients and model variances by the sample size n as β_2n and (V_xn, V_τn), respectively. Define an estimator of the VCATE as V̂_τn = β̂_2n^2 V̂_xn, where β̂_2n is the least squares estimator of (<ref>) and V̂_xn := (1/n)∑_i X_i^2 - [(1/n)∑_i X_i]^2. With some algebraic manipulation the estimation error can be decomposed as
V̂_τn - V_τn^* = V_xn(β̂_2n - β_2n)^2 (V̂_xn/V_xn) + 2 β_2n V_xn (β̂_2n - β_2n)(V̂_xn/V_xn) + β_2n^2 V_xn (V̂_xn/V_xn - 1).
To derive the asymptotic distribution we can apply the central limit theorem to individual components. For generality, I state joint convergence to a normal distribution as an assumption. This holds as a special case if the observations are i.i.d. and key moments of the distribution are bounded, but may also hold under other forms of dependence. I defer stating primitive conditions until Section <ref>.
There is a sequence of distributions {γ_n}_n=1^∞ ⊂ Γ^∞ with associated quantities {V_xn, V_τn^*, β_2n, Ω_n}_n=1^∞, which are related by the identity V_τn^* = β_2n^2 V_xn and satisfy the following properties: (i) V_xn > 0, (ii) V_τn^* is contained in a bounded subset of [0,∞), and (iii) Ω_n is a positive definite matrix with eigenvalues bounded away from zero and a finite upper bound. There is a sequence of estimators {V̂_xn, V̂_τn, β̂_2n, Ω̂_n}_n=1^∞ which satisfy V̂_τn = β̂_2n^2 V̂_xn. As n → ∞, Ω̂_n →^p Ω_n, and
Ω_n^-1/2 √(n) [ √(V_xn)(β̂_2n - β_2n); V̂_xn/V_xn - 1 ] →^d Z_n ∼ 𝒩(0, I_2×2).
The normalization by V_xn is intended to align with the decomposition in (<ref>). The 2 × 2 matrix Ω̂_n is an estimator of the covariance matrix. I present Assumption <ref> as a triangular array because it makes it easier to formalize discussions of uniform coverage over the parameter space. Assumption <ref> allows for cases where V_τn^* is arbitrarily close to or equal to zero. Let Ω^1/2 denote the Cholesky decomposition of a matrix Ω. The estimation error of the VCATE converges to the empirical process G, defined as
G(n, V_τ^*, Ω, z, ζ) := (e_1'Ω^1/2 z)^2/n + 2ζ√(V_τ^*/n)(e_1'Ω^1/2 z) + (V_τ^*/√(n))(e_2'Ω^1/2 z),
where z ∈ ℝ^2, ζ ∈ {-1,1}, e_1 = [1,0]', and e_2 = [0,1]'.
Suppose that Assumption <ref> holds. Then V̂_τn - V_τn^* = O_p(max{1/n, √(V_τn^*/n)}), and there exists a sequence ζ_n ∈ {-1,1} such that
V̂_τn - V_τn^* = G(n, V_τn^*, Ω_n, Z_n, ζ_n) + o_p(1/n) + o_p(√(V_τn^*/n)) + o_p(V_τn^*/√(n)).
Lemma <ref> shows that the limiting distribution of V_ฯ n is a linear combination of a Chi-square and a normal, whose weights depend on the value of V_ฯ n^*. The relative magnitude of V_ฯ n^* determines the fit of the normal approximation. In the heterogeneous case, V_ฯ n^* โฅฮด > 0, โ(n)G converges to a normal as n โโ because the first term in (<ref>) is asymptotically negligible. However, when V_ฯ n^* = 0, only the first term remains and nG converges to a non-central Chi-Square distribution, which is asymmetric. Using normal critical values here (even if everything else was known) would produce distorted coverage. Furthermore, when V_ฯ n^* = 0 the rate of convergence is n, which is faster than โ(n), and hence the estimator is โsuper consistentโ near the boundary. The error is dominated by the first stage sampling uncertainty in estimating the nuisance parameter ฮฒ_2n^2, which converges at n rate.
In practice, all three components in (<ref>) contribute to the limiting distribution, and this information can be used for inference. I propose an analytic approach based on the quantiles of the empirical process that can deliver exact coverage. Let F_n,V_τ^*,Ω,ζ(v) be the conditional CDF of the empirical process, defined as
F_n,V_τ^*,Ω,ζ(v) = ℙ(G(n, V_τ^*, Ω, Z, ζ) ≤ v), Z ∼ 𝒩(0, I_2×2), v ∈ ℝ.
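The conditional CDF F has no simple closed form in general, but it can be approximated by simulating the empirical process over standard normal draws. The sketch below is my own Monte Carlo implementation of the display above; in practice, quantiles of the generalized chi-square distribution can be used instead.

```python
import numpy as np

def G(n, v_star, omega_chol, z, zeta):
    """Empirical process G(n, V*, Omega, z, zeta): a chi-square term of order 1/n,
    a cross term of order sqrt(V*/n), and a normal term of order V*/sqrt(n)."""
    a = omega_chol[0] @ z                     # e_1' Omega^{1/2} z
    b = omega_chol[1] @ z                     # e_2' Omega^{1/2} z
    return a ** 2 / n + 2 * zeta * np.sqrt(v_star / n) * a + v_star / np.sqrt(n) * b

def F_cdf(v, n, v_star, omega, zeta, draws=200_000, seed=0):
    """Monte Carlo approximation of P(G <= v) given (n, V*, Omega, zeta)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(omega)
    Z = rng.normal(size=(2, draws))
    return np.mean(G(n, v_star, L, Z, zeta) <= v)

# At V* = 0 the process is a rescaled chi-square, so F(0) is essentially zero
# and F(2/n) is close to the chi-square(1) CDF at 2 (about 0.84).
print(F_cdf(0.0, n=1_000, v_star=0.0, omega=np.eye(2), zeta=1))
print(F_cdf(0.002, n=1_000, v_star=0.0, omega=np.eye(2), zeta=1))
```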
Based on this CDF we can construct a test statistic,
F_n,V_τ^*,Ω̂_n,ζ(V̂_τn - V_τ^*),
indexed by unknown values of (V_τ^*, ζ) and substituting the estimated covariance matrix Ω̂_n.
By construction, the test statistic is contained in [0,1]. Similarly, I construct critical values as functions of the parameters for a nominal level α, as follows:
q_α/2(n, V_τ^*, Ω, ζ) := min{α/2, F_n,V_τ^*,Ω,ζ(0)},
q_1-α/2(n, V_τ^*, Ω, ζ) := 1 - α + min{α/2, F_n,V_τ^*,Ω,ζ(0)}.
The difference between the critical values is (1-α), which delivers the desired coverage. The lower critical value is the minimum of α/2 and the CDF evaluated at zero. This adjustment is meant to increase the power of tests of homogeneity (see Remark <ref>). I propose an adaptive confidence interval by substituting Ω̂_n, n, and V̂_τn into the following formula:
CI_αn = { V_τ^* ∈ ℝ_+, ζ ∈ {-1,1} :
F_n,V_τ^*,Ω̂_n,ζ(V̂_τn - V_τ^*) ∈ [ q_α/2(n, V_τ^*, Ω̂_n, ζ), q_1-α/2(n, V_τ^*, Ω̂_n, ζ) ] }.
The set CI_αn can be constructed via a grid search between 0 and an arbitrarily high value, testing whether each candidate V_τ^* satisfies the inequality constraints. The procedure achieves correct asymptotic size because the test statistic converges to a uniform random variable on [0,1] for each value of V_τ^*. In general, the distribution in (<ref>) depends on the value of ζ, and I obtain a conservative interval in (<ref>) by taking the union of the intervals with ζ = -1 and ζ = 1. Moreover, if the off-diagonal element of Ω_n is zero, then the distribution of the empirical process in (<ref>) does not depend on the value of ζ. This property is plausible, and I introduce primitive conditions that imply it in Section <ref>. Under those conditions the confidence interval has exact asymptotic coverage.
The procedure is fast because at each point in the grid the researcher evaluates the condition in (<ref>) using the same estimate of (V̂_τn, Ω̂_n). The critical values can be computed numerically from the quantiles of the generalized chi-square distribution F, which are available in most statistical software packages.
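A grid-search construction of the adaptive interval might look as follows. This sketch reuses the F_cdf helper from the previous code block; the grid endpoints and the function name are my own choices rather than prescriptions from the paper.

```python
import numpy as np

def adaptive_ci(v_hat, omega_hat, n, alpha=0.05, grid_max=None, grid_size=1_000):
    """Keep every candidate V* whose test statistic F(v_hat - V*) falls between
    the adjusted critical values [min(alpha/2, F(0)), 1 - alpha + min(alpha/2, F(0))]
    for at least one zeta in {-1, 1}."""
    if grid_max is None:
        grid_max = max(4 * v_hat, 10 / n)      # crude upper end for the search
    grid = np.linspace(0.0, grid_max, grid_size)
    kept = []
    for v_star in grid:
        for zeta in (-1, 1):
            f0 = F_cdf(0.0, n, v_star, omega_hat, zeta)
            lo = min(alpha / 2, f0)
            hi = 1 - alpha + lo
            stat = F_cdf(v_hat - v_star, n, v_star, omega_hat, zeta)
            if lo <= stat <= hi:
                kept.append(v_star)
                break
    return (min(kept), max(kept)) if kept else (0.0, 0.0)
```

A test of homogeneity then simply checks whether zero belongs to the returned interval.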
Researchers can test for homogeneity by evaluating whether 0 โCI_ฮฑ n. By definition, e_1'ฮฉ_n^1/2Z = โ(ฮฉ_n,11)Z_1, where ฮฉ_n,11 is the upper-left entry. Under the null, the test statistic is F_n,0,ฮฉ_n,ฮถ(v) = โ(ฮฉ_n,11Z_1^2/n โค v ), which is the CDF of a rescaled Chi-square distribution with one degree of freedom. Furthermore, the critical values are {0,1-ฮฑ}, given that F_n,0,ฮฉ_n,ฮถ(0) = 0. Neither quantity depends on the choice of ฮถ. Because of the normalization in (<ref>), we can choose ฮฉ_n,11 = V_xn๐(ฮฒ_2n), where ๐(ฮฒ_2n) is an estimate of the asymptotic variance of ฮฒ_2n such as the robust sandwich estimator. Therefore, evaluating F_n,0,ฮฉ_n,ฮถ(V_ฯ n) โ [0,1-ฮฑ] is algebraically equivalent to a test of whether n(ฮฒ_2n^2V_xn/(V_x n๐(ฮฒ_2n)) = nฮฒ_2n^2/๐(ฮฒ_2n) exceeds the 1-ฮฑ quantile of a Chi-square with one degree of freedom. This is identical to a test of ฮฒ_2 = 0 in the regression in (<ref>).
The critical value q_ฮฑ/2(V_ฯ^*,ฮฉ,n,ฮถ) in (<ref>) is constructed to guarantee that V_ฯ nโCI_ฮฑ n. In this case, V_ฯ n belongs to the CI if and only if F_n,V_ฯ n,ฮฉ_n,ฮถ(0) is contained in the critical region for some ฮถโ{-1,1}. The unadjusted CI with critical values {ฮฑ/2, 1-ฮฑ/2} is not guaranteed to contain the test statistic.[For example, suppose that V_ฯ n = 0. Then the empirical process has a Chi-square distribution for V_ฯ = 0. Since the unadjusted critical value is bounded away from zero, V_ฯ n would not be contained in the unadjusted CI.] Another rationale for doing the adjustment in (<ref>), is to increase the power of the test of homogeneity, 0 โCI_ฮฑ n, relative to a test based on the unadjusted CI. The unadjusted test has correct size but the rejection region is discontinuous: it rejects when the test statistics is very close to zero or when it exceeds a threshold. Instead, the adjusted test shifts the critical region left and has the form of a Chi-squared test. It only rejects the null if the test statistic is larger than 1-ฮฑ, which is a threshold that is smaller than 1-ฮฑ/2 for the unadjusted CI.
<cit.> suggest estimating ฮผ_d(x) by a series estimator with K terms, for subsamples D = d โ{0,1}. They propose a bias-corrected Wald statistic, which takes the form T_n^series := {[ฮพ_1 - ฮพ_0 ]'[๐(ฮพ_1-ฮพ_0)]^-1[ฮพ_1 - ฮพ_0]-(K-1)}/โ(2(K-1)), where (ฮพ_1,ฮพ_0) are non-intercept coefficients associated with ฮผ_1(x) and ฮผ_0(x), respectively and K is the number of covariates. For regressions with univariate X as in (<ref>), K=2 and T_n^series = (nฮฒ_2n^2/๐(ฮฒ_2n) - 1) / โ(2). Essentially this is just a transformation of the test statistic proposed above, which will produce the same acceptance/rejection result for significance level ฮฑ (using the critical values in their equation 3.11). <cit.> study a framework with a fixed population where the only source of randomness is the experimental assignment of offers. They propose a similar Wald estimator, but replace estimates of (ฮพ_1,ฮพ_0) and the asymptotic variance with randomization inference counterparts. In samples with large n, this leads to very similar test statistics, but may produce slightly different results in small samples.
The approach that I introduce in the following section differs substantially in the way that I handle multivariate cases. For K > 2, the approaches are non-nested because I use sample splitting and consider a wider range of methods to estimate ฮผ_d(x) than series estimators.
ยง INFERENCE FOR NONPARAMETRIC CATE
In this section I provide an overview of the inference problems associated with efficient estimators of the VCATE and how to solve them in the nonlinear/high-dimensional case. Let {Y_i, D_i, X_i} be i.i.d. As shown in <cit.>, efficient estimators can be decomposed as
√(n)(V̂_τn - V_τn) = (1/√(n))∑_i=1^n ψ_i + Residual_n, with Residual_n = o_p(1),
where ψ_i is an i.i.d. realization of the efficient influence function with mean zero, and the residual becomes asymptotically negligible as n → ∞. The semiparametric lower bound is 𝕍(ψ_i). Let η(·) be the set of nuisance functions defined as
η(x) := (τ(x), μ_0(x), p(x), τ_av).
<cit.> showed that the efficient influence function for the VCATE is equal to ψ_i = φ(Y_i, D_i, X_i, η) - V_τn, where φ is defined as
φ(y, d, x, η) := (τ(x) - τ_av)^2 + 2(τ(x) - τ_av)[ d(y - μ_0(x) - τ(x))/p(x) - (1-d)(y - μ_0(x))/(1 - p(x)) ].
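For implementation purposes, the influence function above translates directly into code once estimates of the nuisance functions are available. The sketch below (argument names are mine) evaluates φ observation by observation; a plug-in estimator of the VCATE averages these values.

```python
import numpy as np

def phi(y, d, tau_x, mu0_x, p_x, tau_av):
    """Efficient influence function for the VCATE (before centering by V_tau):
    (tau - tau_av)^2 + 2 (tau - tau_av) [ d (y - mu0 - tau)/p - (1-d)(y - mu0)/(1-p) ]."""
    dev = tau_x - tau_av
    correction = d * (y - mu0_x - tau_x) / p_x - (1 - d) * (y - mu0_x) / (1 - p_x)
    return dev ** 2 + 2 * dev * correction

# Plug-in estimator given nuisance estimates tau_hat, mu0_hat and known p:
# v_hat = np.mean(phi(Y, D, tau_hat, mu0_hat, p, tau_hat.mean()))
```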
By (<ref>), all efficient estimators, regardless of their form, are √(n)-asymptotically equivalent to (1/n)∑_i=1^n φ_i. Let ψ_i = φ(Y_i, D_i, X_i, η(X_i)) - V_τn be a realization of the efficient influence function. If 𝕍(ψ_i) > 0, then
𝕍(ψ_i)^-1/2 × √(n)(V̂_τn - V_τn) = 𝕍(ψ_i)^-1/2 [ (1/√(n))∑_i=1^n ψ_i ] + o_p(1) →^d 𝒩(0,1).
In this case, any confidence interval based on normal critical values and a consistent estimator of 𝕍(ψ_i) produces valid coverage. For common functionals such as the ATE, 𝕍(ψ_i) > 0. However, this cannot be guaranteed for the VCATE.
Let σ_d^2(x) := 𝕍(Y | D = d, X = x).
𝕍(ψ_i) = 𝕍((τ(X) - τ_av)^2) + 4𝔼[(τ(X) - τ_av)^2 (σ_1^2(X)/p(X) + σ_0^2(X)/(1 - p(X)))].
Both of the inner terms in (<ref>) are multiplied by (τ(x) - τ_av). When the VCATE is zero, τ(x) = τ_av almost surely, and the influence function is degenerate.
The condition that ๐(ฯ_i) > 0 does not hold uniformly over all V_ฯ in the parameter space. In this case, the distribution of โ(n)(V_ฯ n-V_ฯ n) is dominated by the higher order terms of the residual (<ref>), and the CLT cannot be applied to guarantee normality near the boundary. The linear estimator discussed in the previous section is just one example. Moreover, if the tails of ฯ(x) are thin, then the value of ๐(ฯ_i) is also small near the boundary.
If 𝔼[(τ(X) - τ_av)^4] ≤ κ^2 V_τ^2 for some κ ∈ ℝ_+, then
𝕍(ψ_i) ≤ κ^2 V_τ^2 + 4κ V_τ √(𝔼[(σ_1^2(X)/p(X) + σ_0^2(X)/(1 - p(X)))^2]).
Corollary <ref> shows that the variance of the efficient influence function is bounded by a quantity that scales up or down proportional to the value of V_ฯ. Consequently, when ๐(ฯ_i) is relatively small, the higher order terms in the residual may still dominate.
ยง.ยง Pseudo-VCATE, regressions, and efficiency
A robust way to introduce nonlinearity is to consider a regression with real-valued basis functions M(x) and S(x). For now, I will leave these unspecified but in the next section I will show how they can be estimated non-parametrically in a first stage.
Y = c_0 + M(X)c_1 + [D - p(X)]β_1 + [D - p(X)]S(X)β_2 + U, where the regression function is written compactly as W(X,D)'θ,
with weights λ(X) := [p(X)(1 - p(X))]^-1, regressors W(X,D), and parameters θ = [c_0, c_1, β_1, β_2]. This specification accommodates experiments with heterogeneous assignment probabilities.[When p(x) = 1/2 this produces exactly the same coefficients as a regression of Y on (1, M(X), D, D × S(X)), but differs when the probabilities are heterogeneous. If M(X) = S(X) = X as well, this reduces to (<ref>).] Consider the following set of minimizers:
Θ^* := argmin_θ ∈ ℝ^4 𝔼[λ(X)(Y - W(X,D)'θ)^2].
<cit.> showed that if 𝕍(S(X)) > 0, 𝔼[S(X)] = 0, and the vector (β_1, β_2) is part of a solution to (<ref>), then (β_1, β_2) are also the intercept and slope of the best linear projection of τ(X) on S(X).[When 𝕍(S(X)) = 0, β_2 does not have a unique solution in (<ref>), but V_τ^* = β_2^2 𝕍(S(X)) = 0 ≤ V_τ is still the best linear projection, regardless of the value of β_2.] Hence the pseudo-VCATE has an upper bound, V_τ^* = β_2^2 𝕍(S(X)) ≤ V_τ. Because of this bound, if V_τ = 0, then the pseudo-VCATE (V_τ^*) will not falsely detect heterogeneity even if S(X) is misspecified. If anything, poor choices of S(X) will understate the amount of heterogeneity. When τ(X) is spanned by S(X), V_τ^* = V_τ and the two notions coincide.
To obtain a feasible estimator we define S(x) := S(x) - 1/nโ_i=1^n S(X_i), and compute W(x,d) by substituting S(x) in (<ref>). Now consider a value of ฮธ_n that minimizes 1/nโ_i=1^n ฮป(X_i)(Y_i-W(X_i,D_i)'ฮธ)^2, by solving the first order condition
๐ฌ(ฮธ_n) := 1/nโ_i=1^n ฮป(X_i)(Y-W(X_i,D_i)'ฮธ_n) W(X_i,D_i)'.
The regression parameters can be used to construct the CATE and other nuisance functions. For a given ฮธโโ^4,
ฮทฬ_ฮธ(x) := [ ฯฬ_ฮธ(x); ฮผฬ_0,ฮธ(x); pฬ_ฮธ(x); ฯฬ_av,ฮธ ] = [ (W(x,1)-W(x,0))'ฮธ; W(x,0)'ฮธ; p(x); 1/nโ_i=1^n ฯฬ_ฮธ(X_i) ].
Lemma <ref> shows that the sample variance of the estimated CATE can be interpreted as an estimator that plugs-in (<ref>) to the efficient influence function in (<ref>).
Define V_xn = 1/nโ_i=1^nS(X_i)^2. Let ฯ, ฮทฬ_ฮธ and ๐ฌ(ฮธ) be defined as in (<ref>),
(<ref>), and (<ref>), respectively. If ฮธ_n = (c_1n,c_2n,ฮฒ_1n,ฮฒ_2n) solves ๐ฌ(ฮธ_n) = 0, then V_ฯ n := ฮฒ_2n^2V_xn = 1/nโ_i=1^n ฯ(Y_i,D_i,X_i,ฮทฬ_ฮธ_n).
Mechanically, the influence function can be decomposed into primary and bias-correction components. As an intermediate step for Lemma <ref>, I show that the bias-correction terms and the fourth component of (<ref>) are proportional to each other. The optimal ฮธ_n implicitly sets the average bias correction to zero. Intuitively, the linear model minimizes the covariate imbalances between the treatment and control group in-sample. Lemma <ref> suggests that V_ฯ n could be asymptotically efficient if ฮทฬ_ฮธ_n is sufficiently close to ฮท. In Section <ref>, I show that my proposed semiparametric estimator can indeed achieve this.
As a preliminary step, it is necessary to determine which S(x) and M(x) ensure that ฮทฬ_ฮธ = ฮท. Not all choices achieve this property.[This point highlights that while all regressions of the form in
(<ref>) proposed by <cit.> estimate an interpretable V_ฯ^* โregardless of the choice of M(x)โ, not every regression in this class is efficient. The functions S(x), and M(x) in particular, both affect efficiency.] However, if they are chosen in such a way that W(x,d)'ฮธ = ฮผ_d(x) for some ฮธโโ^4, then that's sufficient to guarantee that ฮท_ฮธ = ฮท. Lemma <ref> shows that any ฮธ with this property is also a solution to the regression problem, and provides guidance on the choice of S(x) and M(x).
Let ฮ^* be the optimizer set defined in (<ref>). If (i) ๐ผ[S(X)] = 0 and (ii) W(x,d)'ฮธ = ฮผ_d(x) for some ฮธโโ^4, then ฮธโฮ^*. Conditions (i) can be satisfied by setting S(x) = ฯ(x) - ๐ผ[ฯ]. Condition (ii) can be satisfied by setting M(x) = ฮผ_0(x) + p(x)ฯ(x). In this special case, ฮธ = (0,1,๐ผ[ฯ(X)],1)' โฮ^*.
Lemma <ref> provides efficient choices of S(x) and M(x) that can be expressed in terms of conditional moments, and that for this choice, the optimal ฮธ has a known, simple form. In practice, S(x) and M(x) can be estimated non-parametrically.
ยง.ยง Multi-step approach
My proposed procedure randomly partitions the observations ℐ_n := {1,…,n} into K folds of equal size n_k := n/K. Denote the observations in each fold by ℐ_nk, so that ∪_k=1^K ℐ_nk = ℐ_n, and let ℐ_-nk := ℐ_n \ ℐ_nk be the set of observations that are not in fold k. In a slight abuse of notation, I use ℐ_-nk when defining conditional expectations, to denote the full set of random variables associated with observations not included in fold k. For simplicity, I also label the fold of observation i ∈ ℐ_nk by k_i.
Let η̂_-k(x) := (τ̂_-k(x), μ̂_0,-k(x), p(x), τ̂_-k,av) denote a prediction of the nuisance function η(x) over the set ℐ_-nk, using the researcher's preferred prediction algorithm. This could include traditional methods such as linear regression, or more modern "machine learning" approaches such as LASSO, neural networks, or random forests. The only function that is known in advance is the propensity score, since I restrict attention to randomized experiments. Guided by Lemma <ref>, define
S_-k(x) := τ̂_-k(x) - 𝔼[τ̂_-k(X_i) | ℐ_-nk],
M_-k(x) := μ̂_0,-k(x) + p(x)τ̂_-k(x),
λ(x) := [p(x)(1 - p(x))]^-1,
W_i' := [ 1, M_-k_i(X_i), (D_i - p(X_i)), (D_i - p(X_i))S_-k_i(X_i) ].
Consider a regression with weights λ(X_i), parameters θ := (c_1, c_2, β_1, β_2), and
Y_i = W_i'θ_nk + U_i, 𝔼[W_i U_i | ℐ_-nk] = 0.
In practice, 𝔼[τ̂_-k(X_i) | ℐ_-nk] needs to be estimated, and I use a sample analog:
Ŝ_-k(x) := τ̂_-k(x) - τ̂_nk,av, τ̂_nk,av := (1/n_k)∑_i ∈ ℐ_nk τ̂_-k(X_i),
Ŵ_i' := [ 1, M_-k_i(X_i), (D_i - p(X_i)), (D_i - p(X_i))Ŝ_-k_i(X_i) ],
θ̂_nk := [ (1/n)∑_i ∈ ℐ_nk Ŵ_iŴ_i' ]^-1 [ (1/n)∑_i ∈ ℐ_nk Ŵ_iY_i ].
Let θ̂_nk = (ĉ_1nk, ĉ_2nk, β̂_1nk, β̂_2nk) be the estimator over the subsample ℐ_nk.
The fold-specific variance of Ŝ_-k_i(x) is defined as
V̂_xnk := (1/n_k)∑_i ∈ ℐ_nk Ŝ_-k_i(X_i)^2.
The estimator of the VCATE for fold k is
V̂_τnk := β̂_2nk^2 V̂_xnk.
In this case V̂_xnk can be viewed as a preliminary estimate of the VCATE using the data in ℐ_-nk, whereas V̂_τnk is a regression-adjusted estimator that fits the sample ℐ_nk. This adjustment will produce better results, with a pseudo-VCATE interpretation, even if the first-step Ŝ_-k_i(x) function is noisy, misspecified, or slow to converge to τ(x). The estimator in (<ref>) belongs to the class of multi-step estimators defined in <cit.>. I add a restriction on the choice of M_-k(x), guided by Lemma <ref>, to ensure asymptotic efficiency.
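The following end-to-end sketch (my own illustration) implements the multi-step procedure with a generic machine-learning first stage. LassoCV from scikit-learn is used here as one plausible learner for μ_0 and μ_1, but any prediction method could be substituted; X is an n × p covariate matrix, p_score the known propensity vector, and all function names are mine.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def fit_mu(X, Y, D, idx, d):
    """Fit a predictor of E[Y | X, D = d] on the observations idx with D = d."""
    sub = idx[D[idx] == d]
    return LassoCV(cv=5).fit(X[sub], Y[sub])

def multistep_vcate(X, Y, D, p_score, K=2, seed=0):
    """Cross-fitted multi-step VCATE: ML first stage on I_{-k}, weighted
    interacted regression on I_k, fold estimates beta2_k^2 * Vx_k, and the
    ensemble average across folds."""
    n = len(Y)
    folds = np.array_split(np.random.default_rng(seed).permutation(n), K)
    lam = 1.0 / (p_score * (1.0 - p_score))        # regression weights lambda(x)
    fold_estimates = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        m0, m1 = fit_mu(X, Y, D, train, 0), fit_mu(X, Y, D, train, 1)
        tau_hat = m1.predict(X[test]) - m0.predict(X[test])
        S = tau_hat - tau_hat.mean()               # recentred CATE predictions
        M = m0.predict(X[test]) + p_score[test] * tau_hat
        W = np.column_stack([np.ones(len(test)), M,
                             D[test] - p_score[test],
                             (D[test] - p_score[test]) * S])
        sw = np.sqrt(lam[test])                    # weighted least squares
        theta, *_ = np.linalg.lstsq(W * sw[:, None], Y[test] * sw, rcond=None)
        fold_estimates.append(theta[3] ** 2 * np.mean(S ** 2))
    return float(np.mean(fold_estimates)), fold_estimates
```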
To quantify the uncertainty in (θ̂_nk, V̂_xnk) I compute a robust (sandwich) estimator. I start by defining two auxiliary residuals, T̂_i := V̂_xnk^-1 Ŝ_-k_i(X_i)^2 - 1 and Û_i := Y_i - Ŵ_i'θ̂_nk. Let Π_nk be a 4 × 4 diagonal matrix with diagonal entries (1, 1, 1, V̂_xnk^-1/2). Researchers can compute estimators of the individual components of the sandwich form, Ĵ_nk and Ĥ_nk, and a selection matrix Υ, defined as follows:
Ĵ_nk := [ (1/n_k)∑_i ∈ ℐ_nk λ(X_i)Π_nkŴ_iŴ_i'Π_nk'   0 ; 0   1 ],   Υ := [ 0 0 0 1 0 ; 0 0 0 0 1 ],
Ĥ_nk := (1/n_k)∑_i ∈ ℐ_nk [ λ(X_i)^2 Û_i^2 Π_nkŴ_iŴ_i'Π_nk'   λ(X_i)Û_iΠ_nkŴ_iT̂_i ; λ(X_i)Û_iŴ_i'Π_nk'T̂_i   T̂_i^2 ].
The sandwich covariance estimator is
Ω̂_nk = ΥĴ_nk^-1Ĥ_nkĴ_nk^-1Υ'.
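A direct transcription of the sandwich formula might look as follows; W is assumed to be the fold design matrix Ŵ (n_k × 4), S the recentred CATE predictions, lam the weights λ(X_i), and theta the fold regression estimate. This is a sketch under those assumptions rather than a definitive implementation.

```python
import numpy as np

def sandwich_cov(W, S, Y, theta, lam):
    """Robust covariance Omega_hat = Ups J^-1 H J^-1 Ups' for the rescaled
    quantities (sqrt(Vx) * beta2_hat, Vx_hat / Vx)."""
    n_k = len(Y)
    v_x = np.mean(S ** 2)
    U = Y - W @ theta                              # regression residuals U_i
    T = S ** 2 / v_x - 1.0                         # variance residuals T_i
    Pi = np.diag([1.0, 1.0, 1.0, v_x ** -0.5])
    Wt = W @ Pi                                    # rescaled regressors Pi W_i
    J = np.zeros((5, 5))
    J[:4, :4] = (lam[:, None] * Wt).T @ Wt / n_k
    J[4, 4] = 1.0
    H = np.zeros((5, 5))
    H[:4, :4] = ((lam ** 2 * U ** 2)[:, None] * Wt).T @ Wt / n_k
    cross = Wt.T @ (lam * U * T) / n_k
    H[:4, 4], H[4, :4] = cross, cross
    H[4, 4] = np.mean(T ** 2)
    Ups = np.zeros((2, 5))
    Ups[0, 3], Ups[1, 4] = 1.0, 1.0
    J_inv = np.linalg.inv(J)
    return Ups @ J_inv @ H @ J_inv @ Ups.T
```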
The population covariance matrix is
ฮฉ_nk = ๐([ ฮป(X_i)(D_i-p(x_i))V_xnk^-1/2S_-k(X_i) U_i; V_xnk^-1S_-k(X_i)^2 ]|โ_-nk).
When the VCATE is zero, then V_xnk (as a consistent estimator of V_ฯ n) should converge to zero along the asymptotic sequence. To prevent asymptotic degeneracy, we need to rescale the estimands along the lines of Assumption <ref>. The random variable V_xnk^-1/2S_-k(X_i) is normalized to (conditionally) have variance one by design, even if S_-k(X_i) converges to zero. This requires two much weaker conditions: (i) that V_xnk > 0, i.e. there is some noise in estimating the CATE;[I also propose an extension that allows for V_xnk = 0 in Remark <ref>.] (ii) ฮฉ_nk has eigenvalues bounded away from zero. To ensure this, the tails of S_-k(X_i) need to be thin.[One sufficient additional restriction is that ๐ผ[U_i| X_i,D_i,โ_-nk] = 0 (the model is correctly specified), ๐(U_i| X_i,D_i|โ_-nk) is bounded away from zero, and S_-k(X_i) has bounded kurtosis. In that case the off-diagonal elements of ฮฉ_nk are zero and the diagonals are uniformly bounded. Positive-definiteness may also hold in a neighborhood where the nuisance functions are close to the true value and ๐ผ[U_i| X_i,D_i,โ_-nk] โ 0.]
[Moment Bounds]
Suppose that there exists a constant ฮดโ (0,1) such that for each fold k, almost surely, (i) ๐ผ[M_-k(X_i)^2 ฮป(X_i) |โ_-nk] - ๐ผ[M_-k(X_i)ฮป(X_i)|โ_-nk]๐ผ[ฮป(X_i) |โ_-nk] โฅฮด, (ii) ๐ผ[M_-k(X_i)^4 |โ_-nk] โค 1/ฮด, (iii) ๐ผ[U_i^4 |โ_-nk] โค (1/ฮด), and (iv) ๐ผ[S_-k(X_i)^4 |โ_-nk] โค (1/ฮด) ๐ผ[S_-k^2 |โ_-nk]^2, (v) ๐ผ[ฯ_-k(X)^2 |โ_-nk] < 1/ฮด.
Assumption <ref>.(i) is a rank condition that ensures that the auxiliary regressor M_-k(X_i) is not degenerate. Assumption <ref>.(ii) ensures that the second-moment of the candidate regressor M_-k(X_i) is bounded. Assumption <ref>.(iii) is a standard condition indicating that the fourth moment of the residuals are bounded. Assumption <ref>.(iv) is a bounded kurtosis condition indicating that the out-of-sample, machine learning predictions of ฯ(x) have thin tails. Finally, Assumption <ref>.(v) is a bound on the variance of the first-stage VCATE.
[Non-degeneracy]
The following properties hold almost surely over sequences of random data realizations {โ_n1,โฆ,โ_nK}_n=1^โ. Conditional on โ_-nk: (i) V_xn k > 0, (ii) V_xnk has a finite upper bound, (iii) V_ฯ nk^* := ฮฒ_2nk^2V_xnk is contained in a bounded subset of [0,โ), and (iv) ฮฉ_nk defined in (<ref>) is a positive definite matrix with bounded eigenvalues.
[Random Sampling]
The observations {Y_0i,Y_1i,D_i,X_i}_i^n are i.i.d. across i for fixed n, and drawn from a sequence of data generating processes {ฮณ_n}_n=1^โ.
Theorem <ref> shows how these primitive conditions imply an analog of Assumption <ref> for the cross-fitted case.
Consider a sequence of random data realizations {โ_n1,โฆ,โ_nK}_n=1^โ with associated quantities {V_xnk,V_ฯ n k^*,ฮฒ_2nk,ฮฉ_nk}_n=1^โ for each k, as well as a sequence of estimators {ฮฒ_2nk,V_xnk,V_ฯ nk^*,ฮฉ_nk}_n=1^โ computed from (<ref>), (<ref>), (<ref>), and (<ref>), respectively. Suppose that these quantities satisfy Assumptions <ref>.(ii), <ref>, <ref>, and <ref>. Then as n_k โโ, for all k โ{1,โฆ, K}, Conditional on a sequence of โ_-nk,
* ฮฉ_nk^-1/2โ(n_k)[ โ(V_xnk)(ฮฒ_2nk - ฮฒ_2nk); V_xnk/V_xnk - 1 ]|โ_-nkโ Z_nkโผ๐ฉ(0,I_2 ร 2).
* ฮฉ_nkโ^p ฮฉ_nk.
Theorem <ref> presents a central limit theorem for the components of V_ฯ nk^* = ฮฒ_2nk^2V_xnk, properly rescaled and conditional on โ_-nk. This result holds regardless of whether the nuisance parameters are properly specified and primarily relies on the independence of the folds. By Lemma <ref>, conditional on โ_-nk,
V_ฯ n k-V_ฯ n k^* = G(n_k,V_ฯ n k^*,ฮฉ_nk,Z_nk) + o_p(1/n_k) + o_p(โ(V_ฯ n k^*/n_k))+ o_p(V_ฯ n k^*/โ(n_k)).
Then it is possible to construct adaptive confidence intervals, substituting the sample size n_k and estimated statistics (V_ฯ nk,ฮฉ_nk).
CI_ฮฑ n k = { V_ฯ^* โโ_+, ฮถโ{-1,1}:
F_n_k,V_ฯ^*,ฮฉ_nk,ฮถ(V_ฯ nk-V_ฯ^*) โ[ q_ฮฑ/2(n_k,V_ฯ^*,ฮฉ_nk,ฮถ), q_1-ฮฑ/2(n_k,V_ฯ^*,ฮฉ_nk,ฮถ)] }.
The confidence intervals take the same form as in the regression case in (<ref>), except that now the inputs are obtained from the cross-fitted regression step. The confidence interval is fast to compute because (V_ฯ nk,ฮฉ_nk) only needs to be computed once. It is worth noting that because the confidence interval only uses information in fold โ_nk, the effective sample size is n_k. While this does not affect the nominal asymptotic size of the confidence interval, it may affect the power of tests against specific alternatives.
ยง.ยง.ยง Ensemble estimator
We can construct an โensembleโ to aggregate across folds, defined as follows
V̂_τn := (1/K)∑_k=1^K β̂_2nk^2 V̂_xnk.
In Section <ref>, I show that this ensemble estimator is efficient.
ยง.ยง.ยง Splitting uncertainty and median intervals
So far in this section we have used the data from a single split or fold of the data. However, the choice of fold k or the particular split may lead to different values of V_ฯ nk and hence distinct confidence intervals. <cit.> propose an aggregation procedure based on โmedian parameterโ confidence intervals, inspired by false-discovery rate adjustments. Their proposed conditional t-tests are not directly applicable here because V_ฯ nk conditionally converges to a generalized Chi-square. However, I show that the basic idea can still be adapted.
Let K be the total number of folds, obtained across one or more splits of the data. For instance, a 2-fold sample with 10 splits would have K=20, Let infCI_ฮฑ nk and supCI_ฮฑ nk denote the lower and upper bounds of CI_ฮฑ nk, respectively, and Med_K{โฏ} denote the median over a set indexed by k = {1,โฆ,K}. If K is even then two quantities might be tied for the median, and in that case I compute their midpoint. The multifold confidence interval is defined as
CI_αn^multifold = [ Med_K{inf CI_α/2,nk}, Med_K{sup CI_α/2,nk} ].
Intuitively, the K fold-specific intervals "vote" to include a particular value, and V_τ^* ∈ CI_αn^multifold only if there is a majority vote. The "median" interval CI_αn^multifold contains values between the median lower bound and the median upper bound across folds. To control the overall false discovery rate, I adjust the nominal size to α/2. This adjustment produces a conservative interval because it assumes a worst-case dependence structure between the folds and the splits, regardless of the size of K. In some instances, the asymptotic coverage probability may be strictly higher than (1-α), particularly when there is a lot of heterogeneity.[For instance, given a single split, Theorem <ref> implies that the {√(n_k)(V̂_τnk - V_τ)}_k=1^K are asymptotically uncorrelated. However, near the boundary, the estimators converge at a rate faster than √(n) and their relative dependence structure at that rate is unclear.] At the boundary, with low effect heterogeneity or none at all, it is much harder to assess the dependence structure between the fold-specific estimators. One of the benefits of using a worst-case approach is that it provides coverage guarantees under weak assumptions. Moreover, the empirical example illustrates that even though these intervals are conservative, they may have a short length in practice.
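Aggregation across folds is mechanical once the fold-specific intervals have been computed at the adjusted level α/2; the helper below (the name is mine) simply takes the median lower and upper bounds.

```python
import numpy as np

def multifold_ci(fold_cis):
    """Median-aggregate fold intervals already computed at level alpha/2."""
    lower = np.median([lo for lo, hi in fold_cis])
    upper = np.median([hi for lo, hi in fold_cis])
    return lower, upper

# Three hypothetical fold intervals (level alpha/2) aggregate to (0.01, 0.12).
print(multifold_ci([(0.00, 0.12), (0.01, 0.09), (0.02, 0.15)]))
```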
ยง LARGE SAMPLE THEORY
ยง.ยง โ(n)-Consistency, Efficiency, and Boundary Rates
Let ฮณโฮ denote a probability distribution over i.i.d observations (Y_1i,Y_0i,D_i,X_i). I use the notation ๐ผ_ฮณ[ยท] and โ_ฮณ(ยท) to denote the expectation and probability under ฮณ, respectively. Let S_-k(x) be the function defined in (<ref>). The true value of the CATE and VCATE is given by ฯ_ฮณ and V_ฯ(ฮณ), respectively. The pseudo-VCATE is given by
V_ฯ^*(ฮณ,โ_-nk) := V_ฯ(ฮณ) - inf_(ฮฒ_1,ฮฒ_2)โโ^2๐ผ_ฮณ[(ฯ_ฮณ(X) - ฮฒ_1 - ฮฒ_2 S_-k(X))^2 |โ_-nk].
Define the estimation error of the CATE in the L_2 norm as
ฯ(ฮณ):= โ(๐ผ_ฮณ[โฯ_-k(X) - ฯ_ฮณ(X) โ^2]).
We can bound the difference between the pseudo-VCATE and its true value:
Under the distribution ฮณโฮ,
๐ผ_ฮณ[| V_ฯ(ฮณ)-V_ฯ^*(ฮณ,โ_-nk) |] โคmin{16 รฯ(ฮณ)^2 ,V_ฯ(ฮณ)}.
Theorem <ref> derives a non-asymptotic bound for the VCATE as the minimum of two key quantities: (i) the conditional L_2 error between the candidate function and the true CATE, and (ii) the true value of the VCATE. This proof only relies on the definition in (<ref>). For instance, when V_ฯ(ฮณ) = 0, then V_ฯ(ฮณ)-V_ฯ^*(ฮณ,โ_-nk) = 0, regardless of whether ฯ_-k(ยท) is properly specified. The difference between the two quantities is also small if ฯ(ฮณ) is sufficiently close to zero. In the multi-step approach, ฯ(ฮณ) captures the first-stage uncertainty from estimating the CATE, which decreases with sample size. I consider the following convergence condition.
[Convergence CATE]
โ(n_k)ฯ(ฮณ_n)^2 = o(1) as n โโ.
Assumption <ref> imposes an L_2 consistency condition on the CATE. A large class of machine learning models can meet this requirement. For example, <cit.> and <cit.> evaluate rates of convergence under sparse models, <cit.> for neural networks, and <cit.> for regression trees and random forest.
Consider a sequence of data generating processes {ฮณ_n}_n=1^โ where V_ฯ(ฮณ_n) โ 0 as n โโ and Assumptions <ref>, <ref>, <ref>, <ref> and <ref> hold. Define ฮ_nk := ( V_ฯ n k - V_ฯ(ฮณ_n) ) for the estimator defined in (<ref>). Then (i) โ(n_k)ฮ_nk = o_p(1), and (ii) if in addition n_k^1/2+ฯV_ฯ(ฮณ_n)=o(1) for ฯโ [0,1/2), then n_k^1/2+ฯฮ_nk = o_p(1).
Theorem <ref> shows that multi-step estimators of the VCATE converge to zero faster than โ(n_k) near the boundary. I formalize โnearโ by considering sequences of distributions where the VCATE approaches zero. Theorem <ref> relies on the non-asymptotic bound in Theorem <ref>, the normal approximation in Theorem <ref>, and the empirical process in Lemma <ref>. There is no requirement on the rate of convergence of ฮผ_-0k(ยท) (and consequently on the generated regressor M_-k_i(ยท)), only an assumption that p(x) is known and that the CATE is estimated at a sufficiently fast rate. Furthermore, if the true CATE is nearly flat in the sense that for ฯโ [0,1/2), then n_k^1/2+ฯV_ฯ(ฮณ_n) = o(1) (or even exactly equal to zero), then the estimator has a faster rate guarantee.
To prove efficiency we have the stronger requirement that all the nuisance functions converge to their true value in the L_4 norm and at n_k^1/4 rate in the L_2 norm.
[Regularity conditions]
Define the residuals U_i = Y_i - ๐ผ_ฮณ_n[Y_i | D_i,X_i]. (i) ๐ผ_ฮณ_n[โ Y_i โ^4], ๐ผ_ฮณ_n[โ U_iโ^4], ๐ผ_ฮณ_n[โฮท(X_i) โ^4], (ii) ๐ผ_ฮณ_n[โฮท_-k(X_i) โ^4] are uniformly bounded, (iii) ๐ผ_ฮณ_n[ โฮท_-k(X_) - ฮท(X_i) โ^4] โ 0, (iv) โ(n_k)๐ผ_ฮณ_n[โฮท_-k(X_i)-ฮท(X_i)โ^2] = o(1) for all k โ{1,โฆ,K}.
The next step is to show that the estimation error of the fold-specific VCATE converges at โ(n_k) to an average of efficient influence functions.
Consider a sequence of data generating processes {ฮณ_n}_n=1^โ where V_ฯ(ฮณ_n) โ 0 as n โโ and Assumptions <ref>, <ref>, <ref>, <ref>, <ref>, and <ref> hold. Then
โ(n_k)(V_ฯ nk-V_ฯ(ฮณ_n)) = 1/โ(n_k)โ_i โโ_nkฯ_i + o_p(1).
Theorem <ref> shows that the fold-specific estimator converges at โ(n_k)-rate to an average of i.i.d influence function. This requires standard regularity conditions. The proof of Theorem <ref> is non-standard due to the multi-step nature of the procedure. I start by applying Lemma <ref>, which shows hows to write V_ฯ nk as an average of estimated influence functions. I break down the proof into sequences where V_ฯ(ฮณ_n) converges to zero and those where it's bounded away from zero. For the first part, I leverage (a) the boundary convergence result in Theorem <ref>, (b) the bound for ๐(ฯ_i) in Lemma <ref>. For the second part, I provide a novel decomposition of regression adjusted nuisance functions. The key is to prove that the regression parameters ฮธ_nk converge at n_k^1/4 rate to the values in Lemma <ref> for sequences where V_ฯ(ฮณ_n) โ V_ฯ > 0. Once in this form, the rest of the proof relies on a traditional Taylor expansion argument.
The ensemble estimator V_ฯ n combines information from the whole sample. By definition n = n_k ร K and K is finite, which means that algebraically โ(n)(V_ฯ n-V_ฯ(ฮณ_n)) = โ(n_k)/โ(K)โ_k=1^K(V_ฯ nk-V_ฯ(ฮณ_n)), and by Theorem <ref>,
โ(n)(V_ฯ n-V_ฯ(ฮณ_n)) = [ 1/โ(n_k K)โ_k=1^Kโ_i โโ_nkฯ_i +
1/โ(K)โ_k=1^K o_p(1)] = 1/โ(n)โ_i=1^n ฯ_i + o_p(1).
This means that aggregating the estimators restores full efficiency, satisfying the property described in (<ref>).
ยง.ยง Asymptotic Coverage
I start by showing that the single fold confidence interval has uniform coverage for the pseudo-VCATE, and exact coverage under an additional assumption.
[Exact coverage condition]
Let ฮฉ_nk,12 be the off-diagonal element of ฮฉ_nk. For each t > 0, n โโlimsupsup_ฮณโฮโ_ฮณ(โ(V_ฯ^*(ฮณ,โ_-nk))|ฮฉ_nk,12| > t) = 0.
Assumption <ref> states that the product of the pseudo-VCATE and the off-diagonal element of the limiting covariance matrix in (<ref>) needs to converge to zero uniformly.
Let ฮ denote a set of distributions, constrained in such a way that Assumptions <ref>, <ref>, <ref>, and <ref> hold. Let CI_ฮฑ n k and V_ฯ^*(ฮณ,โ_-nk) be defined as in (<ref>) and (<ref>), respectively. Then
1-ฮฑโคn โโliminfinf_ฮณโฮโ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k)
If Assumption <ref> also holds, then
n โโlimsupsup_ฮณโฮโ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k) โค 1-ฮฑ.
Theorem <ref> shows that the confidence intervals always have uniform coverage of the pseudo-VCATE of at least (1-ฮฑ).[The theorem only uses Assumptions <ref>, <ref>, <ref>, and <ref> to verify normality in Assumption <ref>. A broad class of confidence intervals of the form in (<ref>) constructed from regression adjusted estimators will satisfy these uniformity properties.] The key is to prove that the confidence intervals yield coverage under arbitrary sequences of distributions, which includes cases where V_ฯ n^* is either equal to zero or approaches zero as n โโ. The proof builds on the approximation of Lemma <ref> and shows that for every sequence, the test statistic for a particular ฮถ_n_kโ{-1,1} converges to a uniform distribution. This sequential characterization suffices to apply generic results in <cit.>, which guarantee uniform coverage even in non standard cases like this one. Coverage over the pseudo-VCATE holds regardless of whether the nuisance functions are slow to converge or even misspecified.
The intervals are in general conservative because we're not plugging in the unknown ฮถ, and instead define a robust confidence interval as the union of CIs with given ฮถโ{-1,1}. However, the key insight is that ฮถ only affects the coverage when the pseudo-VCATE is bounded away from zero. If Assumption <ref> holds, the value of ฮถ doesn't enter the asymptotic distribution of the estimator. I show that this condition holds automatically if the nuisance functions converge to their true value at a sufficiently fast rate.
Let ฮ denote a set of distributions that satisfy Assumptions <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>. Then Assumption <ref> also holds.
As a special case, when the model is correctly specified, i.e. W_i'ฮธ = ฮผ_d(x) for some ฮธโโ^4, then ฮฉ_nk,12 = 0 by construction. Lemma <ref> states that we only need a model that is correctly specified asymptotically, given the rates in Assumptions <ref> and <ref>. Then for non-boundary cases, ฮฉ_nk converges to the population analog under correct specification. These conditions also imply point-wise coverage of the true VCATE.
Let ฮ denote a set of distributions that satisfy Assumptions <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>. Then
inf_ฮณโฮn โโliminf โ_ฮณ( V_ฯ(ฮณ) โCI_ฮฑ nk) = sup_ฮณโฮn โโlimsup โ_ฮณ( V_ฯ(ฮณ) โCI_ฮฑ nk) = 1-ฮฑ.
Theorem <ref> shows that if the nuisance functions converge at a sufficiently fast rate, then the proposed intervals achieve point-wise exact coverage. The confidence intervals provide correct size coverage for all regions of the parameter space, including V_ฯ(ฮณ) = 0.
Proving uniform coverage of the VCATE (rather than the pseudo-VCATE) is more challenging in the non-parametric case without much stronger conditions on the convergence rates of the nuisance functions. The lack of uniformity stems from a difficulty in controlling the ratio โ(n_k)ฯ(ฮณ_n) / โ(V_ฯ(ฮณ_n)), which measures the relative error in estimating the CATE vs. the overall level of the VCATE. By the bound in (<ref>), this ratio is easy to control when n_kV_ฯ(ฮณ_n) = o(1) (near homogeneity) or V_ฯ(ฮณ_n) โ V_ฯ > 0 (strong heterogeneity). However, it is possible to construct sequences, e.g., n_kV_ฯ(ฮณ_n) โ v > 0, where (V_ฯ nk - V_ฯ^*(ฮณ_n,โ_-nk)) converges to zero at a faster or comparable rate to the error of the pseudo-VCATE. There may be distortions in coverage in smaller samples. I illustrate this issue in the simulations.
Uniform inference is only challenging for two-sided tests. If instead, the researcher is only interested in left-sided tests, then uniform inference is still possible. To do so, we can make explicit use of the inequality V_ฯ^*(ฮณ,โ_-nk) โค V_ฯ(ฮณ). If V_ฯ(ฮณ) < inf_V_ฯ^*CI_ฮฑ nk (the lower bound of the CI), then V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ nk. Therefore, for all ฮณโฮ,
โ_ฮณ( V_ฯ(ฮณ) โฅinfCI_ฮฑ n k) โคโ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k).
I prove a weaker uniformity result for one-sided tests building on Theorem <ref>.
If Assumptions <ref>, <ref>, <ref>, and <ref> hold, then
n โโlimsupsup_ฮณโฮโ_ฮณ( V_ฯ(ฮณ) < infCI_ฮฑ n k) โคn โโlimsupsup_ฮณโฮโ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k) โคฮฑ.
Corollary <ref> is empirically relevant for interpreting confidence intervals that do not include zero. It states that the asymptotic probability of having V_ฯ(ฮณ) โ[0,infCI_ฮฑ nk) is uniformly less than ฮฑ. Tests of homogeneity belong to this class and therefore have the correct size when V_ฯ(ฮณ) = 0. Moreover, the result in Corollary <ref> is much stronger because it guarantees that a broader class of one-sided tests also has the correct size. It is important to emphasize that I do not impose any assumptions on rates of convergence of (ฮท_-k(x)-ฮท(x)), but only the inequality on the pseudo-VCATE. Consequently, while estimating ฮผ(x) and ฯ(x) may be important for increasing the power of tests of homogeneity, it is not necessary for controlling their size.
ยง.ยง Multifold Coverage
The multi-fold confidence interval covers the VCATE asymptotically.
Let ฮ be a set of distributions that satisfy Assumptions <ref>, <ref>, <ref>, <ref>. Then
n โโlimsupsup_ฮณโฮโ_ฮณ( V_ฯ(ฮณ) < infCI_ฮฑ n k^multifold) โคฮฑ.
If Assumptions <ref>, and <ref> also hold, then
sup_ฮณโฮn โโlimsup โ_ฮณ(V_ฯ(ฮณ) โCI_ฮฑ n^multifold) โคฮฑ.
The first part of Theorem <ref> shows that the multifold CI uniformly controls the size of one-sided tests. The second part shows that if the nuisance functions converge to their true value asymptotically, then the multifold confidence interval provides point-wise size-control for two-sided tests. Coverage of the true parameter will be weakly larger that (1-ฮฑ) asymptotically.
ยง.ยง Power
The test of homogeneity has power against local alternatives.
Consider a sequence of distributions {γ_n}_n=1^∞ and {ℐ_-nk}_n=1^∞, where Ω_nk → Ω_∞ and n_k V_τ^*(γ_n, ℐ_-nk) = v + o(1), for v ∈ [0,∞). Assume that <ref>, <ref>, <ref>, and <ref> hold. Let Ω_∞,11 be the upper-left entry of Ω_∞, Φ(·) be the standard normal CDF, and z_1-α/2 be its (1-α/2)-quantile. Then
lim_n → ∞ ℙ_γ_n(0 ∉ CI_αnk | ℐ_-nk) = 1 - Φ(z_1-α/2 - √(v)/√(Ω_∞,11)) + Φ(-z_1-α/2 - √(v)/√(Ω_∞,11)).
Lemma <ref> computes the power curve for a sequence of local alternatives. When v = 0 the power is equal to α, whereas when v → ∞ the power tends to one. This shows that tests of homogeneity have nontrivial power against local alternatives to the null. When the pseudo-VCATE is bounded away from zero, the test rejects with probability approaching one.
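The limiting power curve can be evaluated directly. The sketch below uses the noncentral chi-square form of the adjusted homogeneity test, which is consistent with the properties stated in Lemma <ref> (power equal to α at v = 0 and tending to one as v grows); the function name is mine.

```python
import numpy as np
from scipy.stats import norm

def local_power(v, omega11, alpha=0.05):
    """Limiting rejection probability along local alternatives with
    n_k * pseudo-VCATE -> v; equals alpha at v = 0."""
    z = norm.ppf(1 - alpha / 2)
    mu = np.sqrt(v / omega11)
    return 1 - norm.cdf(z - mu) + norm.cdf(-z - mu)

print(round(local_power(0.0, 1.0), 3))    # 0.05
print(round(local_power(15.0, 1.0), 3))   # close to 1
```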
ยง EXTENSIONS
In some cases, assuming that units i are independent may be too strong. For example, in <cit.> units are randomized at the household level, and it is reasonable to expect that units within a household have correlated outcomes and covariates. To deal with this dependence structure, suppose that the sample can be partitioned into C clusters, c โ{1,โฆ,C}, which are independent and identically distributed. The researcher can compute ฮฒ_2nk, V_ฯ nk and CI_ฮฑ nk via cross-fitting by randomly partitioning entire clusters rather than the individual observations.
Let {r_nk}_n=1^โ be a sequence of positive scalars. Suppose that V_xnk > 0, ฮฉ_nk is positive definite with positive eigenvalues, and that conditional on a sequence {โ_-nk}_n=1^โ, r_nk^-1ฮฉ_nkโ^p ฮฉ_nk, n_k/r_nkโโ, and
ฮฉ_nk^-1/2โ(n_k/r_nk)[ โ(V_xn)(ฮฒ_2nk - ฮฒ_2nk); V_xnk/V_xnk - 1 ]|โ_-nkโ^d Z_n โผ๐ฉ(0,I_2 ร 2).
Then CI_ฮฑ n k, substituting the arguments (n_k,V_ฯ n k,ฮฉ_nk), satisfies Theorem <ref>.
Lemma <ref> proposes high-level conditions that ensure that confidence intervals have correct coverage. The quantity โ(n_k/r_nk) is the effective rate of convergence, which features prominently in problems with cluster dependence <cit.>. For example, if the observations are fully correlated within clusters and the clusters have equal size, then r_nk is the cluster size, n_k/r_nk = C, and the estimators in (<ref>) converge at a โ(C) rate, where C is the total number of clusters. The analyst does not need to specify the quantity r_nk to apply the procedure; it suffices to specify an estimator of the covariance matrix that meets the rate requirement. Under minor modifications to the existing proofs, we can also prove analogs of Theorems <ref> and <ref>.
We can construct estimators that satisfy Lemma <ref>. Let โ_nkc be the set of units in fold k and cluster c, and let ๐_nk be the indexes of the clusters selected for fold k.
H_nk^cluster := 1/n_kโ_c โ๐_nk( โ_i โโ_nkc[ ฮป(X_i)U_iW_i; T_i ])( โ_i โโ_nkc[ ฮป(X_i)U_iW_i; T_i ])'.
The clustered standard errors are ฮฉ_nk^cluster = ฮฅ_nkJ_nk^-1H_nk^clusterJ_nk^-1ฮฅ_nk', where J_nk,ฮฅ_nk are computed as outlined in (<ref>).
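As an informal illustration only, the following Python sketch computes the cluster-level "meat" matrix in the display above; the array names lam, U, W, T, and cluster_ids are placeholders for the plug-in weights, residuals, regressors, and the statistic T_i, and the sandwich ฮฉ_nk^cluster is then formed with J_nk and ฮฅ_nk as in the text.

import numpy as np

def cluster_meat(lam, U, W, T, cluster_ids):
    # Outer product of within-cluster sums of the stacked scores [lam*U*W, T],
    # summed over clusters and scaled by 1/n_k, mirroring the display above.
    lam, U, T = np.asarray(lam), np.asarray(U), np.asarray(T)
    W, cluster_ids = np.asarray(W), np.asarray(cluster_ids)
    n = len(U)
    scores = np.column_stack([(lam * U)[:, None] * W, T])  # shape (n, 5)
    H = np.zeros((scores.shape[1], scores.shape[1]))
    for c in np.unique(cluster_ids):
        s_c = scores[cluster_ids == c].sum(axis=0)          # within-cluster sum
        H += np.outer(s_c, s_c)
    return H / n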
When the conditional mean is constant, i.e. ฮผ_d(x) = ๐ผ[Y_d], prediction models with corner solutions like LASSO may estimate a constant conditional mean, ฮผ_d,-k(x) = ฮผ_d,av; in that case ฯ_-k(x) = ฮผ_1,-k(x) - ฮผ_0,-k(x) is constant and consequently V_xnk = 0.[It is still possible to have V_xnk > 0 almost surely even if V_ฯ = 0, as long as ฮผ_0(x) is not constant.] This violates Assumption <ref>.(i), and it is challenging to construct a confidence interval with exact coverage. One alternative is to construct an ensemble of sparse and non-sparse estimators of the CATE in the first stage. Another alternative is to use degenerate confidence intervals:
CI_ฮฑ n k^0 = CI_ฮฑ n k if V_xnk ≠ 0,
[0,0] if V_xnk = 0.
The confidence intervals collapse to zero when the ฯ_-k(x) prediction is degenerate. In practice, researchers can check whether all LASSO coefficients are zero, whether a tree-based method produces no splits, or simply whether V_xnk = 0. We can also define an analogous multifold confidence interval.
CI_ฮฑ n^0,multifold = [ Med_K{infCI_ฮฑ/2 nk}, Med_K{supCI_ฮฑ/2 nk}].
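As an informal illustration, a minimal Python sketch of the degenerate single-fold rule and the multifold combination above follows; the numerical endpoints are hypothetical.

import numpy as np

def fold_ci(lo, hi, degenerate):
    # Single-fold CI, collapsed to [0, 0] when the CATE prediction is degenerate
    # (e.g. all LASSO interaction coefficients are zero, so V_xnk = 0).
    return (0.0, 0.0) if degenerate else (lo, hi)

def multifold_ci(fold_cis):
    # Multifold CI from the display above: medians across folds of the lower and
    # upper endpoints of the (alpha/2)-level fold CIs.
    lows = [ci[0] for ci in fold_cis]
    highs = [ci[1] for ci in fold_cis]
    return float(np.median(lows)), float(np.median(highs))

# The combined CI is degenerate only when more than half of the fold CIs are degenerate.
print(multifold_ci([fold_ci(0.0, 0.0, True), fold_ci(0.01, 0.08, False), fold_ci(0.02, 0.09, False)]))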
I study the asymptotic properties of these confidence intervals.
Let ฮ denote a set of distributions that satisfy Assumptions <ref>, <ref>, and <ref>. Suppose that Assumption <ref> holds, except for the requirement that V_xnk > 0. Then (i)
liminf_n โโ inf_ฮณโฮ โ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k^0) โฅ 1-ฮฑ
(ii) If Assumptions <ref>, and <ref> also hold, then
inf_ฮณโฮ liminf_n โโ โ_ฮณ( V_ฯ(ฮณ) โCI_ฮฑ n k^0) โฅ 1-ฮฑ
inf_ฮณโฮ liminf_n โโ โ_ฮณ( V_ฯ(ฮณ) โCI_ฮฑ n k^0,multifold) โฅ 1-ฮฑ.
To prove this result I focus on the coverage for subsequences where V_xnk = 0 and V_xnk > 0, and apply the results for conservative coverage in <cit.>. In subsequences where V_xnk = 0, V_ฯ^*(ฮณ,โ_-nk) = 0, which means that coverage of the pseudo-VCATE is equal to one. In subsequences where V_xnk > 0 and ฮฉ_nk has eigenvalues bounded away from zero, we can apply similar arguments as before to prove 1-ฮฑ coverage. To prove point-wise coverage, I separate the cases where V_ฯ(ฮณ) = 0 and V_ฯ(ฮณ) > 0. In the latter case, I show that V_xnk is point-wise bounded away from zero, though not uniformly. The proof of Lemma <ref> does not rely on the i.i.d. assumption and can also accommodate cluster dependence. In the empirical example, I compute confidence intervals with clustered standard errors and degenerate CATE predictions.
When there is more heterogeneity and the nuisance functions are estimated accurately, V_xnk > 0 with high probability. However, when V_ฯ nโ 0 and ๐(ฮผ_0(X)) โ 0, procedures like LASSO may imply V_xnk = 0 <cit.>, which means that marginally heterogeneous CATEs could be estimated as homogeneous. This could impact the power of tests of homogeneity, and the size of two-sided tests is not uniformly bounded. Furthermore, the multifold confidence interval allows for some quantification of uncertainty across folds/splits: the CI is degenerate only if more than half the fold/split-specific CIs are degenerate.
Furthermore, the degenerate CI has correct size control for one-sided tests.
Under the assumptions of Lemma <ref>.(i),
limsup_n โโ sup_ฮณโฮ โ_ฮณ( V_ฯ(ฮณ) < infCI_ฮฑ n k^0) โคฮฑ.
limsup_n โโ sup_ฮณโฮ โ_ฮณ( V_ฯ(ฮณ) < infCI_ฮฑ n k^0,multifold) โคฮฑ.
The tests of homogeneity have the correct size when V_ฯ(ฮณ) = 0. Corollary <ref> guarantees that the probability of false rejection for this class of one-sided tests is uniformly bounded in large samples.
It may be useful to report the standard deviation of the CATE, which is โ(VCATE). I propose the following confidence interval:
CI_ฮฑ n^0,multifold,sqrt = {โ(V_ฯ^*): V_ฯ^* โCI_ฮฑ n^0,multifold}.
Since the square root is a strictly increasing transformation and the VCATE is non-negative, โ(V_ฯ(ฮณ))โCI_ฮฑ n^0,multifold,sqrt if and only if V_ฯ(ฮณ) โCI_ฮฑ n^0,multifold. Since the events are equivalent, the transformed confidence interval preserves the coverage probabilities and will have valid coverage by Lemma <ref>.
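For concreteness, a small sketch of the endpoint transformation with hypothetical numbers:

import math
ci_vcate = (0.01, 0.09)                        # hypothetical CI for the VCATE
ci_sd = tuple(math.sqrt(v) for v in ci_vcate)  # implied CI for sqrt(VCATE): (0.1, 0.3)
print(ci_sd)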
ยง SIMULATIONS
I use a simulation design to study the properties of the VCATE estimators. The baseline covariates are distributed as [X_0,X_1] โ๐ฉ(0,ฮฃ_x), where ฯ = 0.5 and
ฮฃ_x =
[ I_J ร J ฯ I_J ร J; ฯ I_J ร J I_J ร J ].
The random variables X_0 and X_1 are standard normal vectors of dimension J. The covariance between components X_0j and X_1j' is equal to ฯ = 0.5 when j = j', and zero otherwise. The outcome is generated from a model where Y = DY_1 + (1-D)Y_0, D is a Bernoulli draw with probability 0.5, and
Y_0 = c + ฮฒ_0'X_0 + U_0 โ(ฯฬ_0^2 + ฮบ_0'X_0X_0'ฮบ_0)
Y_1 = (c + ฯ) + ฮฒ_0'X_0 +ฮฒ_ฯ'X_1 + U_1 โ(ฯฬ_1^2 + ฮบ_1'X_1X_1'ฮบ_1),
where c,ฯโโ, ฮฒ_0,ฮฒ_ฯ,ฮบ_0,ฮบ_1 โโ^p. The errors (U_0,U_1) are independent of the covariates (X_0,X_1), and distributed as standard normals [U_0,U_1]' โ๐ฉ(0_2 ร 1,I_2). The key model quantities have closed-form expressions. The conditional mean at baseline and the CATE are given by ฮผ_0(x) = c + ฮฒ_0'x_0 and ฯ(x) = ฯ + ฮฒ_ฯ'x_1, respectively. The conditional variances are ฯ_d^2(x) = ฯฬ_d^2 + ฮบ_d'x_dx_d'ฮบ_d for d โ{0,1}. This formulation incorporates heteroskedasticity: covariates that influence the outcomes at baseline may also affect the treatment effects.
The regressors are constructed in such a way that ๐ผ[X_dX_d'] = I_p for d โ{0,1}. This implies simple expressions for the variances of the model, ๐(U_d) = ฯฬ_d^2 + ฮบ_d'ฮบ_d,
V_ฯ = ฮฒ_ฯ'ฮฒ_ฯ, ๐(Y_0) = ฮฒ_0'ฮฒ_0 + ฯฬ_0^2 + ฮบ_0'ฮบ_0,
๐(Y_1) = ฮฒ_0'ฮฒ_0 + ฮฒ_ฯ'ฮฒ_ฯ + 2(1-ฯ)ฮฒ_0'ฮฒ_ฯ + ฯฬ_1^2 + ฮบ_1'ฮบ_1.
I choose an approximately sparse specification for (<ref>) where the coefficients decay exponentially at a rate of ฮป = 0.7. Let โ_j = โ((1-ฮป/1-ฮป^J)ฮป^j-1) be a geometric sequence, which satisfies โ_j=1^J (1-ฮป/1-ฮป^J)ฮป^j-1 = 1. Given user-specified parameters (V_ฮผ,V_ฯ,ฯ_0^2,ฯ_1^2), the coefficients for the entries j โ{1,โฆ,J } are determined by ฮฒ_0,j = โ_jโ(V_ฮผ), ฮฒ_ฯ,j = โ_j โ(V_ฯ), ฮบ_d,j = โ_j โ(ฯ_d^2-ฯฬ_d^2), for d โ{0,1}. Since โ_j=1^J (1-ฮป/1-ฮป^J)ฮป^j-1 = 1, then ฮฒ_0'ฮฒ_0 = V_ฮผ, ฮฒ_ฯ'ฮฒ_ฯ = V_ฯ, and ฮฒ_0'ฮฒ_ฯ = โ(V_ฮผ V_ฯ). We can obtain analogous expressions for the variances of the unobserved components, so that ฯฬ_d^2 + ฮบ_d'ฮบ_d = ฯ_d^2 for d โ{0,1}.
I choose an average effect size of ฯ = 0.15, which is consistent with recent meta-analyses of economic experiments in <cit.>. To make sure that the magnitudes are interpretable, I normalize the coefficients so that the variance for the control group is ๐(Y_0) = 1, by setting c = 1, ฯ_d = 0.7, ฯฬ_d = 0.21, and V_ฮผ = 0.3. The design is easy to scale for different values of V_ฯ and J. My design is similar to that in <cit.>, but I choose ฮฃ_x and the sparsity structure in such a way that V_ฯ has a closed-form expression. I use LASSO to estimate ฮผ_1(x) and ฮผ_0(x), tuned via cross-validation. The coefficients can be estimated consistently given the sparse linear structure, even in high dimensions. I randomly simulate 2000 datasets, compute each of the estimators on every dataset, and split each sample into K=2 folds.
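As an informal companion to this design (not used in the formal analysis), the following Python sketch draws one dataset; the function and parameter names are my own, and treating ฯ_d and ฯฬ_d as variances is an assumption about the normalization ๐(Y_0) = 1 described above.

import numpy as np

def simulate(n, J, V_tau, V_mu=0.3, tau=0.15, c=1.0, sigma2=0.7, sigma2_tilde=0.21,
             rho=0.5, lam=0.7, rng=None):
    # Geometric weights ell_j so that beta_0'beta_0 = V_mu and beta_tau'beta_tau = V_tau.
    rng = np.random.default_rng(rng)
    ell = np.sqrt((1 - lam) / (1 - lam**J) * lam**np.arange(J))
    beta0, beta_tau = ell * np.sqrt(V_mu), ell * np.sqrt(V_tau)
    kappa = ell * np.sqrt(sigma2 - sigma2_tilde)          # same for d = 0, 1
    # [X_0, X_1] ~ N(0, Sigma_x): identity blocks, cross-block correlation rho.
    Sigma = np.block([[np.eye(J), rho * np.eye(J)], [rho * np.eye(J), np.eye(J)]])
    X = rng.multivariate_normal(np.zeros(2 * J), Sigma, size=n)
    X0, X1 = X[:, :J], X[:, J:]
    U0, U1 = rng.standard_normal(n), rng.standard_normal(n)
    Y0 = c + X0 @ beta0 + U0 * np.sqrt(sigma2_tilde + (X0 @ kappa) ** 2)
    Y1 = (c + tau) + X0 @ beta0 + X1 @ beta_tau + U1 * np.sqrt(sigma2_tilde + (X1 @ kappa) ** 2)
    D = rng.binomial(1, 0.5, size=n)
    Y = D * Y1 + (1 - D) * Y0
    cate = tau + X1 @ beta_tau                             # true CATE; its variance is V_tau
    return Y, D, X, cate

Y, D, X, cate = simulate(n=2500, J=5, V_tau=0.1, rng=0)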
Figure <ref> considers a simulation with n = 2500. The figure displays a density plot for the multi-step estimator, V_ฯ n defined in (<ref>), and a two-step debiased machine learning estimator computed as:
V_ฯ n^two-step := 1/nโ_i =1^nฯ(Y_i,D_i,X_i,ฮท_-k_i(X_i)),
where ฮท_-k(x) = (ฯ_-k(x),ฮผ_0,-k(x),p(x),ฯ_n,av) and ฯ_n,av = 1/nโ_i=1^n ฯ_-k_i(X_i). When V_ฯ > 0 the efficient influence function is non-degenerate. In high-heterogeneity regimes both converge to the same limiting distribution.[This is shown in Theorem <ref> for the multistep approach and can be shown for the two-step using standard arguments, e.g. <cit.>.] However, when V_ฯ = 0, the influence function is degenerate and they may converge at different rates. We see that the multi-step approach is much more precise. This can be explained by the fast boundary convergence rates derived in Theorem <ref>. The two-step approach can also produce negative estimates of V_ฯ, which is undesirable, whereas the multi-step estimator is always non-negative. Both estimators have higher bias when the dimension increases because there is more first-stage noise.
Figure <ref> plots the root mean-square error (RMSE) of V_ฯ n and V_ฯ n^two-step for different sample sizes. I compute the semiparametric efficiency bound by computing the RMSE of an โoracleโ estimator that substitutes ฮท_-k(X_i) = ฮท(X_i) in (<ref>). The results show that as the sample size increases, both estimators achieve a higher level of accuracy and their variance approaches the semi-parametric lower bound (the RMSE of the oracle). As expected by Corollary <ref>, the semiparametric lower bound is zero at the boundary. The differences in RMSE shorten with higher V_ฯ and in lower dimensional settings (2J = 10) (which have lower first-stage noise).
Figure <ref> shows the coverage of V_ฯ for the different proposed confidence intervals (CIs). For the multi-step approach, I consider the single-split CIs in (<ref>) and the conservative multi-fold CIs from (<ref>). The two-step CIs are constructed as 1/nโ_i=1^n ฯ_i ยฑ 1.96โ(V_ฯ/n), where ฯ_i is the summand in (<ref>) and V_ฯ is an estimate of its sample variance. The coverage of the two-step approach is very low under homogeneity, and there is no improvement as sample size increases when V_ฯ = 0; its coverage only improves with higher n in high-heterogeneity designs. By contrast, both multi-step approaches cover the parameter at the intended level, and coverage improves with higher sample size. For fixed n, coverage degrades in both cases when the number of covariates is higher.
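As an informal illustration, a minimal sketch of the naive two-step interval described above is below; the array phi is assumed to already contain the realized values ฯ_i of the influence function.

import numpy as np

def two_step_ci(phi, z=1.96):
    # Two-step CI: sample mean of the influence-function values plus/minus
    # z standard errors, as described in the text above.
    phi = np.asarray(phi)
    n = len(phi)
    est = phi.mean()
    se = phi.std(ddof=1) / np.sqrt(n)
    return est - z * se, est + z * se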
Figure <ref> explores the differences in covering the VCATE versus the pseudo-VCATE when n = 2500 and 2J = 10, for a fine-grained set of values of V_ฯ. Panel (a) reflects a dip in coverage close to the boundary. My theory predicts that the multi-step CIs have exact coverage when V_ฯ = 0, but may not cover uniformly close to the boundary (see the discussion after Theorem <ref>). The multi-fold CIs have conservative coverage. Conversely, Panel (b) of Figure <ref> shows that the multi-step CIs always uniformly cover the pseudo-VCATE, as predicted by the theory. This provides a robustness guarantee for how to interpret the CIs. The two-step approach has much lower coverage and no guarantees when V_ฯ = 0 in either panel.
Figure <ref>.(left) shows the power of tests of homogeneity in a simulation with n = 2500. The multi-step, single-fold approach has correct size control and has local power, in line with the result of Lemma <ref>. The power of the test using the multifold approach is similar to using a single fold. The right panel shows the probability that the VCATE is strictly below the CI bounds. As predicted by theory, this probability is uniformly bounded by ฮฑ = 0.05 for the single and multifold approaches (see Corollaries <ref> and <ref>, and Theorem <ref>). The two-step approach has a non-monotonic power curve with incorrect size. The size of one-sided tests in the right panel is uniformly bounded by ฮฑ = 0.05, though this may be partly due to the fact that V_ฯ n^two-step can take negative values and has a negative bias (see Figure <ref>).
ยง EMPIRICAL EXAMPLE
In this section, I illustrate my approach using data from a large-scale information experiment conducted by <cit.>. The study, which covered 39 school districts, involved an intervention that provided low-income parents of at least two children with information about their children's school performance. Half the households were assigned to the information intervention and the rest were assigned to the control group. <cit.> showed that, at baseline, parents faced large information gaps regarding their children's grades and class ranking. Even though schools produced a report card, 60% of parents were unaware of their child's performance. Many parents reported that they did not receive the report card (children either lost them or did not take them home) or had trouble interpreting its structure, primarily due to low literacy levels.
The intervention was designed to present details of their children's school performance in an easily accessible way.
<cit.> showed that the information gaps (the difference between believed and true test scores) went down as a result of the intervention, and the amount of updating varied depending on students' initial test scores. <cit.> also introduced a real-stakes scenario where parents received a series of lottery tickets for a scholarship paying for four years of high school. Parents had to decide how to allocate tickets between two siblings. If there were more than two siblings residing in the household, the survey team selected two at random. The results showed that parents allocated tickets towards their better performing child.
To test for heterogeneity, <cit.> ran a linear regression of parental beliefs on initial scores, treatment, and an interaction as in (<ref>), and reported estimates Xฬ = 46.8 (on a scale of 100) and (ฮฒ_1,ฮฒ_2) = (-25.9,0.40) in their Tables 1 and 2, respectively. The coefficient ฮฒ_2 captures how much the treatment effects vary (on a 0 to 100 scale) for each additional point in students' initial scores. The VCATE combines information about this coefficient and the initial variability in scores. From the data, I also estimate the variance of initial scores, V_x = 305.53, and estimate the VCATE as ฮฒ_2^2V_x = 48.89. Taking the square root and normalizing by the standard deviation of the outcome for the control group (๐(Y | D= 0) = 311.72) produces 0.40. This means that the magnitude of treatment effect heterogeneity explained by scores is comparable to 40% of the standard deviation of beliefs in the control group.
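As a quick check of this back-of-the-envelope calculation:

import math
beta2, V_x = 0.40, 305.53                                 # interaction coefficient and variance of initial scores
vcate_scores = beta2**2 * V_x                             # approximately 48.9
sd_ratio = math.sqrt(vcate_scores) / math.sqrt(311.72)    # normalize by sd of control-group beliefs
print(round(vcate_scores, 1), round(sd_ratio, 2))         # approximately 48.9 and 0.40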
The VCATE can also help us understand the magnitude of treatment effect heterogeneity in the experiment using multiple covariates. I use LASSO for the first-stage predictions, two folds per split with 20 splits, and estimate {ฮฉ_nk}_k=1^K using clustered standard errors at the household level and the formulas for the confidence intervals defined in (<ref>). The results are not very sensitive to the number of splits. For ease of exposition, I report point-estimates and CIs for โ(V_ฯ/๐(Y_0)), which is the standard deviation of the CATE divided by the standard deviation of the outcome for the control group. I compute confidence intervals for the square root via the transformation proposed in (<ref>).
Table <ref> computes the ATE and the โ(VCATE) for two outcomes (parental beliefs and lottery allocations) and 8 different sets of covariates. Panel (a) shows that, on average, parents downgrade their beliefs about test scores by 42% of the standard deviation (SD) of the beliefs of the control group. The treatment effect heterogeneity explained by test scores is equivalent to 40% of the SD of the beliefs of the control group. This is statistically significant at the 5% level and has a comparable magnitude to the ATE. The confidence intervals are relatively short in length. However, applying the bounds from Theorem <ref> and Corollary <ref> shows that differentiating treatment offers based on scores could further lower beliefs by at most 8.1% SDs of the beliefs in the control group. In this case the ATE is already fairly high compared to โ(V_ฯ), so in spite of the large heterogeneity, the marginal gains from targeting would be modest. Panel (b) presents the results for the secondary school lottery. The ATE is estimated precisely at zero, because the lottery tickets had to be divided as a zero sum between the siblings. The VCATE measures how much the dispersion in the allocation depends on the covariates. The standard deviation of the CATE explained by initial scores is 16% of the SD of the control group lottery allocation, and the maximum welfare gains are around 7.8% SD.
The student variables (grade, age, gender, attendance, and educational expenditures) collectively explain 11% of the SD of parental beliefs. This is statistically significant at the 5% level. The magnitude is around a fourth of the variation for test scores, and the maximum welfare gain from targeting is 0.7% of the SD of parental beliefs in the control group. I find that other subsets of covariates do not produce statistically significant estimates of the VCATE at the 5% level. The added welfare of personalizing treatment assignment using these covariates is also very low.
The estimates that use all the covariates are computed over a smaller subsample with non-missing values across all variables. Despite the large number of variables and the smaller sample, the estimates of the VCATE remain relatively stable across specifications. The VCATE computed from a rich set of respondent, household, and student covariates has a comparable magnitude to the VCATE that only includes student scores. The confidence intervals are also similar. The estimates of the maximum welfare gains from targeting using all covariates are 7.9% SD for beliefs and 7.4% SD for lottery outcomes, respectively, which are similar to the welfare gains computed using only scores.
ยง CONCLUSION
I propose an efficient estimator of the variance of treatment effects that can be attributed to baseline characteristics, together with novel adaptive confidence intervals that produce valid coverage. I analyze the issues of non-standard inference that arise in this context and show how to address them. I also explore the economic significance of the VCATE for policymakers and researchers by showing that โ(VCATE)/2 bounds the marginal gains of targeted policies. Overall, this paper proposes a broadly applicable approach to measuring treatment effect heterogeneity in experiments.
ยง CRITICAL VALUES
Define the correlation ฯ = ฮฉ_12/โ(ฮฉ_11ฮฉ_22). By definition of the Cholesky decomposition, (e_1'ฮฉ^1/2Z) = โ(ฮฉ_11)Z_1, and (e_2'ฮฉ^1/2Z) = โ(ฮฉ_22(1-ฯ^2))Z_2 + ฯโ(ฮฉ_22) Z_1. Substituting these terms into the expression for G
G(n,V_ฯ^*,ฮฉ,Z,ฮถ) = ฮฝ_1 Z_1^2 + ฮบ_1 Z_1 + ฮบ_2 Z_2, where ฮฝ_1 := ฮฉ_11/n, ฮบ_1 := 2[ ฮถโ(V_ฯ^*ฮฉ_11/n) + V_ฯ^*/2โ(n)ฯโ(ฮฉ_22)], and ฮบ_2 := V_ฯ^*/โ(n)โ(ฮฉ_22(1-ฯ^2)).
This has a quadratic form, G(n,V_ฯ^*,ฮฉ,z,ฮถ) = ฮฝ_1 (Z_1 + ฮบ_1/2 ฮฝ_1 )^2 + ฮบ_2Z_2 - ฮบ_1^2/4 ฮฝ_1, which fits the form of a generalized chi-square <cit.>. To obtain critical values, we compute feasible analogs of (ฮฝ_1,ฮบ_1,ฮบ_2) from an estimate of ฮฉ.
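As an informal illustration, one can approximate these critical values by Monte Carlo rather than by generalized chi-square formulas; the sketch below simulates the quadratic form above (with ฮบ_1 taken to be the full coefficient on Z_1) and reads off empirical quantiles.

import numpy as np

def critical_values(n, V_star, Omega, alpha=0.05, reps=100_000, seed=0):
    # Monte Carlo quantiles of G(n, V*, Omega, Z, zeta) for each sign zeta in {-1, +1}.
    rng = np.random.default_rng(seed)
    rho = Omega[0, 1] / np.sqrt(Omega[0, 0] * Omega[1, 1])
    Z1, Z2 = rng.standard_normal((2, reps))
    out = {}
    for zeta in (-1, 1):
        nu1 = Omega[0, 0] / n
        k1 = 2 * (zeta * np.sqrt(V_star * Omega[0, 0] / n)
                  + V_star / (2 * np.sqrt(n)) * rho * np.sqrt(Omega[1, 1]))
        k2 = V_star / np.sqrt(n) * np.sqrt(Omega[1, 1] * (1 - rho**2))
        G = nu1 * Z1**2 + k1 * Z1 + k2 * Z2
        out[zeta] = (np.quantile(G, alpha / 2), np.quantile(G, 1 - alpha / 2))
    return out

print(critical_values(n=1000, V_star=0.05, Omega=np.array([[1.0, 0.2], [0.2, 1.0]])))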
ยง PROOFS MAIN DOCUMENT
Define ฯฬ(X') := ๐ผ[Y_1-Y_0 | X']. Since X is X'-measurable, then by the law of iterated expectations, ๐ผ[ฯฬ(X')| X] = ฯ(X). By the law of total variance V_ฯ' = ๐(ฯ(X)) + ๐ผ[๐(ฯฬ(X')| X)] โฅ V_ฯ. We can prove the upper bound by setting X' = Y_1 - Y_0.
The result is a special case of Theorem <ref>. The most adversarial distribution is one where ฯ_av = 0.
Our goal is to find โ := sup_ฮณโฮsup_ฯโฮ ๐ฐ_ฮณ(ฯ), and to prove that ๐ฐ_ฮณ(ฯ) = โ for at least one ฮณโฮ and ฯโฮ .
Define two random variables T := ๐ผ_ฮณ[Y_1 - Y_0 | X = x] and M := 1{T >0 }. By adding/subtracting ๐ผ_ฮณ[Y_0] and applying the law of iterated expectations, ๐ฐ_ฮณ(ฯ) = ๐ผ_ฮณ[ฯ(X)T] -max{0,๐ผ_ฮณ[T] }. The optimal policy ฯ^*(X) = 1{T โฅ 0}, which belongs to ฮ , i.e. the โfirst-bestโ <cit.>. This means that โ = sup_ฮณโฮ ๐ผ_ฮณ[max{0,T}] - max{0,๐ผ_ฮณ[T]}.
Step 1: (Problem Equivalence) The set of distributions ฮ is very large. Instead I focus on the problem over a set of equivalence classes. For m โ{0,1}, define the moments p_m = โ_ฮณ(M = m), ฯ_m = ๐ผ_ฮณ[T | M = m], and ฯ_m = ๐_ฮณ(T | M = m). By definition, ๐ฐ_ฮณ(ฯ^*) = p_1 ฯ_1 - max{ 0, ฯ_av}, where p_1ฯ_1 is the proportion of people that benefit from treatment times their conditional mean. Let ฮ := ฯ_1 - ฯ_0 be the mean difference between those that benefit from the program and those that do not. By definition ฮ > 0 because ฯ_1 > 0 and ฯ_0 โค 0. The conditional treatment effect is ๐ผ_ฮณ[T | M] = ฯ_0 + M ฮ, and ฯ_av = ฯ_0 + p_1 ฮ. Rearranging these expressions, ฯ_1 = ฯ_av + (1-p_1)ฮ and ๐ฐ_ฮณ(ฯ^*) = p_1 (ฯ_av + (1-p_1) ฮ) - max{ 0, ฯ_av}. By definition ๐_ฮณ(๐ผ_ฮณ[T | M]) = ฮ^2p_1(1-p_1). By applying the law of total variance V_ฯ = p_1 ฯ_1^2 + (1-p_1)ฯ_0^2 + ฮ^2(p_1)(1-p_1). Then
โ = sup_{p_1,ฯ_0,ฮ,ฯ_0^2,ฯ_1^2} p_1ฯ_av + p_1(1-p_1) ฮ - max{ 0, ฯ_av},
s.t. p_1 โ [0,1], ฯ_0 โค 0, ฯ_0 + ฮ > 0, ฮ > 0, ฯ_1^2,ฯ_0^2 โฅ 0, ฯ_0 + p_1ฮ = ฯ_av,
p_1 ฯ_1^2 + (1-p_1)ฯ_0^2 + p_1(1-p_1)ฮ^2 = V_ฯ.
The values of ฯ_av and V_ฯ impose the following constraints on the feasible set:
By the form of the objective, if p_1 โ{0,1} then โ = 0. Therefore, if V_ฯ = 0, then โ = 0 and the result of the theorem holds. Any value of the remaining parameters that satisfies the sign constraints will be feasible. Without loss, we focus on V_ฯ > 0.
Step 2: (Optimum has binary support) Consider a situation where (p_1,ฯ_1^2,ฯ_0^2) are fixed and p_1 โ (0,1). From the variance equation, ฮ^* = โ(V_ฯ - p_1ฯ_1^2 - (1-p_1)ฯ_0^2/p_1(1-p_1)), and from the mean ฯ_0^* = ฯ_av - p_1ฮ^* and ฯ_1^* = ฯ_av + (1-p_1) ฮ^*.
A solution is feasible as long as ฯ_1^2,ฯ_0^2 โฅ 0, p_1ฯ_1^2 + (1-p_1)ฯ_0^2 โค V_ฯ, ฯ_0^* โค 0, ฯ_1^* > 0, and ฮ^* =ฯ_1^*-ฯ_0^*> 0. The optimal ฯ_0^* is strictly increasing in (ฯ_1^2,ฯ_0^2) and ฯ_1^* is strictly decreasing. If a given value of (ฯ_1^2,ฯ_0^2) is feasible, then another candidate which shrinks it to zero will still satisfy the constraints. Moreover, the objective function is strictly increasing in ฮ^*, which in turn is strictly decreasing in (ฯ_1^2,ฯ_0^2). Therefore the optimum is (ฯ_1^2,ฯ_0^2) = (0,0), i.e. binary support for the CATE, ฮ^* = โ(V_ฯ / p_1(1-p_1)), ฯ_0^* = ฯ_av - โ(p_1/(1-p_1))โ(V_ฯ), and ฯ_1^* = ฯ_av + โ((1-p_1)/p_1)โ(V_ฯ).
Only some values of p_1 are feasible, i.e. satisfy the constraints ฮ^* > 0, ฯ_0^* โค 0, and ฯ_1^* > 0. The function p_1/(1-p_1) is strictly increasing in p with a range in (0,โ). The feasible values depend on the sign of ฯ_av. If (i) ฯ_avโค 0, then p_1 โ(0,V_ฯ/ฯ_av^2 + V_ฯ]. If (ii) ฯ_av > 0, p_1 โ[ฯ_av^2/ฯ_av^2 + V_ฯ,1).
Step 3: (Solve points of support)
For case (i) the objective function is โ = max_p_1โ[0,V_ฯ/ฯ^2 + V_ฯ] p_1 ฯ_av + โ(p_1(1-p_1))โ(V_ฯ). The unique solution to the FOC is p_1^* = (1/2) - (1/2)โ(ฯ_av^2 /(ฯ_av^2 + V_ฯ)), and is interior because โ(ฯ_av^2/(ฯ_av^2 + V_ฯ))โฅฯ_av^2 / (ฯ_av^2 + V_ฯ). By strict concavity, it is a global optimum, and โ = ฯ_av/2 -(ฯ_av|ฯ_av|)/(2 โ(ฯ_av^2 +V_ฯ)) + V_ฯ/(2โ(ฯ_av^2 + V_ฯ)). Since ฯ_avโค 0, this can be simplified to 1/2(-|ฯ_av| + โ(ฯ_av^2 + V_ฯ)). For case (ii), ฯ_av > 0, the objective function is (p_1 - 1)ฯ_av + โ(p_1(1-p_1))โ(V)_ฯ, subject to 1 โฅ p_1 โฅฯ_av^2/ฯ_av^2+V_ฯ. The unique solution to the FOC is p_1^* = (1/2) + (1/2)โ(ฯ_av^2 /(ฯ_av^2 + V_ฯ)), it is interior because p_1^* โฅ1/2 + 1/2ฯ_av^2/ฯ_av^2 + V_ฯโฅฯ_av^2/ฯ_av^2+V_ฯ, and produces the desired โ.
The potential outcomes under a linear transformation are ฮบ_1 + ฮบ_2 Y_0 and ฮบ_1 + ฮบ_2 Y_1, respectively. The treatment effect is ฮบ_2 (Y_1 - Y_0), which does not depend on ฮบ_1, and the transformed CATE is ฮบ_2ฯ(x). The ATE is ฮบ_2ฯ_av and the VCATE is ฮบ^2V_ฯ. The result follows from substituting the transformed values into Theorem <ref> and factorizing |ฮบ_2|.
We decompose V_ฯ n = ฮฒ_2n^2 V_xn into components that map into those of Assumption <ref>, by centering the key terms.
V_ฯ n - V_ฯ n^* = ฮฒ_2n^2V_xn - ฮฒ_2n^2V_xn = (ฮฒ_2n-ฮฒ_2n + ฮฒ_2n)^2V_xn/V_xn(V_xn - V_xn + V_xn) - ฮฒ_2n^2V_xn
=[โ(V)_xn(ฮฒ_2n-ฮฒ_2n)]^2+ 2[โ(V_xn)ฮฒ_2n][โ(V)_xn(ฮฒ_2n-ฮฒ_2n)]+ [V_xnฮฒ_2^2](V_xn/V_xn-1)
+ [โ(V_xn)(ฮฒ_2n-ฮฒ_2n)]^2( V_xn/V_xn-1) + 2[โ(V_xn)ฮฒ_2n][โ(V_xn)(ฮฒ_2n-ฮฒ_2nk)]( V_xn/V_xn-1).
Let Z_n = n^1/2ฮฉ_n^-1/2[ โ(V)_xn(ฮฒ_2n-ฮฒ_2n),V_xn/V_xn-1]' be a normalized statistic, and let e_1 = [1,0]', e_2 = [0,1]' be vectors that select the first and second coordinates, respectively. We can always find a ฮถ_n โ{-1,1} that solves ฮถ_nโ(V_ฯ n) = ฮฒ_2nโ(V_xn). If V_ฯ n > 0 this is the sign of ฮฒ_2n but otherwise any ฮถ_n โ{-1,1} will solve the equation. By substituting the definition of Z_n, V_ฯ n = ฮฒ_2n^2V_xn, and
V_ฯ n - V_ฯ n^*
= (e_1'ฮฉ_nZ_n)^2/n + 2 ฮถ_n โ(V_ฯ n^*)( e_1'ฮฉ_n^1/2Z_n/โ(n)) + V_ฯ n^*e_2'ฮฉ_n^1/2Z_n/โ(n)
+ (e_1'ฮฉ_n^1/2Z_n)^2(e_2'ฮฉ_n^1/2Z_n)/n^3/2 + 2 ฮถ_n โ(V_ฯ n^*)(e_1'ฮฉ_n^1/2Z_n)(e_2'ฮฉ_n^1/2Z_n)/n.
By Assumption <ref>, Z_n โ Z_n + o_p(1). Moreover, since ฮฉ_n has bounded eigenvalues, ฮฉ_n^1/2Z_n = ฮฉ_n^1/2Z_n + o_p(1). This means that
V_ฯ n - V_ฯ n^* = (e_1'ฮฉ_n^1/2Z_n)^2/n + 2 ฮถ_n โ(V_ฯ n^*/n)e_1'ฮฉ_n^1/2Z_n + V_ฯ n^*/โ(n)e_2'ฮฉ_n^1/2Z_n + Residual_n,
where the residual is o_p(n^-1) + o_p(โ(V_ฯ n^*/n))+ o_p(V_ฯ n^*/โ(n )) + O_p(n^-3/2) + o_p(n^-1โ(V_ฯ n^*)). The fourth and fifth terms of the residual are o_p(n^-1) and o_p(n^-1/2โ(V_ฯ n^*)), and hence asymptotically negligible. The leading term in (<ref>) is O_p( max{1/n,โ(V_ฯ n^*/n)}).
Let ฯ_i = ฯ(Y_i,D_i,X_i,ฮท) be a realization of the influence function in (<ref>) and compute ๐[ฯ_i | D_i=d,X_i =x] = 4 (ฯ(x)-ฯ_av)^2[ dฯ_1^2(x)/p(x)^2 + (1-d)ฯ_0^2(x)/(1-p(x))^2] and ๐ผ[ฯ_i | D_i=d,X_i =x] = (ฯ(x)-ฯ_av)^2.
By the law of iterated expectations ๐ผ[ฯ_i | X_i=x]=(ฯ(x)-ฯ_av)^2. By applying the law of total variance recursively, ๐(ฯ_i) = ๐(๐ผ[ฯ_i | X_i]) + ๐ผ[๐ผ[๐(ฯ_i | D_i,X_i)| X]] + ๐ผ[๐(๐ผ[ฯ_i | D_i,X_i]| X_i)]. This produces, ๐(ฯ_i) = ๐((ฯ(X_i)-ฯ_av)^2) +4 ๐ผ[(ฯ(X_i)-ฯ_av)^2( ฯ_1^2(X_i)/p(X_i) + ฯ_0^2(X_i)/(1-p(X_i)))].
๐((ฯ(X_i)-ฯ_av)^2) โค๐ผ[(ฯ(X_i)-ฯ_av)^4] โคฮบ^2 V_ฯ. By the Cauchy-Schwarz inequality, the term ๐ผ[(ฯ(X_i)-ฯ_av)^2( ฯ_1^2(X_i)/p(X_i) + ฯ_0^2(X_i)/(1-p(X_i)))] is bounded by ฮบ V_ฯโ(๐ผ[ ( ฯ_1^2(X_i)/p(X_i) + ฯ_0^2(X_i)/(1-p(X_i)))^2]).
Let ๐ฌ(ฮธ) be the Jacobian of the least squares problem, defined in (<ref>). Substituting W(X_i,D_i)'e_4 = (D_i-p(X_i))S(X_i) and ฮป(X_i) = [p(X_i)(1-p(X_i))]^-1 into the definition, ฮฒ_2๐ฌ(ฮธ)e_4 = 1/nโ_i=1^n [D_i-p(X_i)]/p(X_i)(1-p(X_i))ฮฒ_2S(X_i)(Y_i - W(X_i,D_i)). By the definition of the nuisance functions in ฮทฬ_ฮธ(x) in (<ref>), W(X_i,D_i)'ฮธ = ฮผฬ_0,ฮธ(X_i) + D_iฯฬ_ฮธ(X_i), ฯฬ_ฮธ(x) = ฮฒ_1 + ฮฒ_2 S(X_i), and ฯฬ_ฮธ = ฮฒ_1 + [1/nโ_i=1^n S(X_i)]ฮฒ_2. Furthermore, since 1/nโ_i=1^nS(X_i) = 0, then ฮฒ_2S(X_i) = ฯฬ_ฮธ(X_i)-ฯฬ_av,ฮธ and
ฮฒ_2๐ฌ(ฮธ)e_4 = 1/nโ_i=1^n [D_i-p(X_i)]/p(X_i)(1-p(X_i))ฮฒ_2S(X_i)(Y_i - ฮผฬ_0,ฮธ(X_i) - D_iฯฬ_ฮธ(X_i)).
Notice that D_i - p(X_i) = D_i(1-p(X_i)) + p(X_i)(1-D_i). This implies that D_i-p(X_i)/p(X_i) = D_i/p(X_i) - (1-D_i)/1-p(X_i). The next step is to substitute the estimated parameters. Then 1/nโ_i=1^n ฯ(Y_i,D_i,X_i,ฮทฬ_ฮธ_n) = 1/nโ_i=1^n (ฯฬ_ฮธ_n(X_i)-ฯฬ_av,ฮธ_n)^2 + ฮฒ_2n๐ฌ(ฮธ_n)e_4. The first term simplifies to ฮฒ_2n^2[1/nโ_i=1^n S(X_i)^2] = ฮฒ_2n^2V_x n. Finally, ๐ฌ(ฮธ_n) = 0 implies that the second term is zero.
The parameter ฮธ = (ฮฑ_1,ฮฑ_2,ฮฒ_1,ฮฒ_2) โฮ^* solves
๐ผ[ฮป(X)W(X,D)Y] - ๐ผ[ฮป(X)W(X,D)W(X,D)']ฮธ = 0_4 ร 1.
If there exists a ฮธโโ^4 such that ๐ผ[Y| D=d,X=x] = ฮผ_d(x) = W(x,d)'ฮธ, then by the law of iterated expectations ๐ผ[ฮป(X)W(X,D)Y ] = ๐ผ[ฮป(X)W(X,D)W(X,D)'ฮธ]. Such a ฮธ would automatically satisfy (<ref>). Now suppose that S(x) = ฯ(x) - ฯ_av, M(x) = ฮผ_0(x) + p(x)ฯ(x), and ฮธ = (0,1,ฯ_av,1). For this choice, W(x,d)'ฮธ = 0 + (ฮผ_0(x)+p(x)ฯ(x)) + (d-p(x))ฯ_av + (d-p(x))(ฯ(x)-ฯ_av). This simplifies to W(x,d)'ฮธ = ฮผ_0(x) + dฯ(x) = ฮผ_d(x) and ฮธ solves (<ref>).
Define a vector Y_nkโโ^n_k with the outcomes in fold โ_nk, W_nk be an n_k ร 4 matrix of regressors, U_nkโโ^n_k be a vector of errors, and ฮ_nk be a n_k ร n_k diagonal matrix with entries {ฮป(X_i) }. Let W_nk be an n_k ร 4 matrix of generated regressors defined in (<ref>), and let ฮ _nk be a 4 ร 4 diagonal matrix with entries (1,1,1,V_xnk^-1/2). Let ฮธ_nk^* := (ฮฑ_1nk,ฮฑ_2nk,ฮฒ_1nk,โ(V)_xnkฮฒ_2nk), and define an infeasible estimator ฮธ_nk^* := ฮ _nk^-1ฮธ_nk := (ฮ _nk'W_nk'ฮ_nkW_nkฮ _nk/n_k )^-1(ฮ _nk'W_nk'ฮ_nkY_nk/n_k).
Define Q_ww,nk := ๐ผ[ฮป(X_i)W_iW_i' |โ_-nk]. By Lemma <ref>,
ฮ _nk'Q_ww,nkฮ _nk = [ ๐ผ[ฮป(X_i) |โ_-nk] ๐ผ[ฮป(X_i)M_-k(X_i) |โ_-nk] 0 0; ๐ผ[ฮป(X_i)M_-k(X_i) |โ_-nk] ๐ผ[ฮป(X_i)M_-k(X_i)^2 |โ_-nk] 0 0; 0 0 1 0; 0 0 0 1; ].
By Assumptions <ref>.(ii), <ref>.(i) and (ii), ฮ _nk'Q_ww,nkฮ _nk has bounded eigenvalues. Algebraically, ฮ _nk'W_nk'ฮ_nkW_nkฮ _nk/n_k = 1/n_kโ_i โโ_nkฮ _nk'ฮป(X_i)W_iW_i'ฮ _nk. Since the terms are conditionally i.i.d, it converges to (<ref>). By Assumptions <ref>.(iii) and (iv) ฮป(X_i)ฮ _nkW_iU_i is bounded in the L_2 norm, and โ(n_k)(ฮธ_nk^*-ฮธ_nk) = (ฮ _nk'Q_ww,nkฮ _nk)^-1ฮ _nk'W_nk'ฮ_nkU_nk/โ(n_k) + o_p(1). Also, V_xnk^-1V_xnk - 1 = 1/n_kโ_i โโ_nk[V_xnk^-1S_-k(X_i)^2 -1 ] - [1/n_kโ_i โโ_nkV_xnk^-1/2S_-k(X_i) ]^2. The second term is O_p(n^-1) because the summand has mean zero and unit variance.
A_i := [ ฮป(X_i)ฮ _nkW_iU_i; V_xnk^-1S_-k(X_i) - 1 ], J_nk^* := [ (ฮ _nk'Q_ww,nkฮ _nk)^-1 0_4; 0_4' 1 ].
By (<ref>), (ฮ _nk'Q_ww,nkฮ _nk)^-1 has a block-diagonal structure, with a one in the bottom-right cell, and hence ฮฅ [J_nk^*]^-1 = ฮฅ. This means that
โ(n_k)ฮฅ[ ฮธ_n^* - ฮธ_n^*; V_xnk^-1V_xnk - 1 ] = ฮฅ[J_nk^*]^-1[1/โ(n)_kโ_i โโ_nk A_i ]+ o_p(1)
=1/โ(n_k)โ_i โโ_nk[ V_xnk^-1/2ฮป(X_i)(D_i-p(X_i))S_-k(X_i)U_i; V_xnk^-1/2S_-k(X_i)^2 - 1 ] + o_p(1).
Define ฯ_nk,av := 1n_kโ_i โโ_nkฯ_-nk(X_i) and ฯ_nk,av := ๐ผ[ฯ_-nk(X_i) |โ_-nk]. Algebraically, W_nk = W_nk(I_4 ร 4 + ฮ_nk), where ฮ_nk = [0_4,0_4,0_4, e_3 (ฯ_nk,av-ฯ_nk,av)] and e_3 = [0,0,1,0]'. The bias in centering the fourth column of W_nk is a scalar times the third regressor. Substituting W_nk = W_nkฮ _nkฮ _nk^-1(I + ฮ_nk),
ฮธ_nk^*
= (ฮ _nk^-1(I+ฮ_nk)ฮ _nk)^-1( ฮ _nk'W_nk'ฮ_nkW_nkฮ _nk)^-1( ฮ _nk'W_nk'ฮ_nkY_nk),
which simplifies to ฮธ_nk^* = (ฮ _nk^-1(I+ฮ_nk)ฮ _nk)^-1ฮธฬ_nk^*. Since ฮ_nkฮ_nk = 0_4 ร 4, then ( ฮ _nk^-1(I+ฮ_nk)ฮ _nk)^-1 = ( I + V_xnk^-1/2ฮ_nk)^-1 = I-V_xnk^-1/2ฮ_nk, and
[ ฮฒ_2n^*; V_xnk^-1V_xnk ] := ฮฅ[ ( ฮ _nk^-1ฮ_nkฮ _nk)^-1 0_4 ร 1; 0_1 ร 4 1 ][ ฮธ_nk^*; V_xnk^-1V_xnk ] = ฮฅ[ ฮธ_nk^*; V_xnk^-1V_xnk ].
The second equality follows from the fact that the selection matrix ฮฅ only extracts the fourth and fifth rows of the vector. We can plug-in (<ref>) and the identity ฮฅ[ฮธ_nk^*',1]' = [ฮฒ_2nk^*,1]' into (<ref>). By Assumptions <ref> and <ref>, the observations are conditionally i.i.d. given โ_-nk, and ฮฉ_nk has bounded eigenvalues. Then by the Lindeberg-Feller CLT,
โ(n_k)ฮฉ_nk^-1/2[ ฮฒ_2nk^*-ฮฒ_2nk^*; V_xnk^-1V_xnk - 1 ]|โ_-nkโ^d ๐ฉ(0_2 ร 1,I_2ร 2).
Let ฮฬ_nk = I+ฮ_nk, where the non-zero term in ฮ_nk is an average with mean zero and variance V_xnk. Under the assumptions of part (a), ฮ _nkฮ _nk = I+o_p(1) and ฮ _nkฮฬ_nkฮ _nk^-1 = I + V_xnk^-1/2ฮ_nk = I + o_p(1). Decomposing ฮ _nkW_nk'ฮ_nkW_nkฮ _nk in a similar way to (<ref>),
(ฮ _nkฮฬ_nk'ฮ _nk^-1)(ฮ _nk'W_nk'ฮ_nkW_nkฮ _nk)(ฮ _nk^-1ฮฬ_nkฮ _nk) = ฮ _nk'Q_ww,nkฮ _nk + o_p(1).
Hence J_nk = J_nk^* + o_p(1). Substituting W_nk, into the upper-left block of H_nk,
(ฮ _nkฮฬ_nk'ฮ _nk^-1) [1/n_kโ_i โโ_-nkU_i^2ฮป(X_i)^2 ฮ _nk'W_iW_i'ฮ _nk](ฮ _nk^-1ฮฬ_nkฮ _nk).
As before the outer terms are I + o_p(1). For the inner terms we apply (<ref>), U_i = Y_i -W_i'ฮธ_nk = Y_i - W_i'ฮฬ_nkฮธ_nk = Y_i - W_i'ฮ _nkฮ _nk^-1ฮฬ_nkฮ _nkฮ _nk^-1ฮธ_nk = Y_i - W_i'ฮ _nk'ฮธ_nk^* = U_i - W_i'ฮ _nk(ฮธ_nk^*-ฮธ_nk^*). The inner term of (<ref>) is ๐ผ[U_i^2ฮป(X_i)^2 ฮ _nk'W_iW_i'ฮ _nk|โ_-nk] + o_p(1) under Assumption <ref>.(ii) (overlap) and <ref> (bounds on moments of U_i and ฮ _nkW_i). In particular, <ref>.(iv) ensures that ๐ผ[โฮ _nkW_i โ^4 |โ_-nk] is uniformly bounded.
Define Sฬ_nk := 1/n_kโ_i โโ_-nk S_-k(X_i) and S_-k(X_i) = ฯ_-k(X_i) - 1/n_kโ_iโโ_-nkฯ_-k(X_i). Adding/subtracting the mean, S_-k(X_i) = S_-k(X_i) - Sฬ_nk. Algebraically, 1/n_kโ_i โโ_nkT_i^2 = V_xnk^-2( 1/n_kโ_i โโ_nk[S_-k(X_i)^2 -V_xnk]^2) = V_xnk^-2( 1/n_kโ_i โโ_nkS_-k(X_i)^4 - V_xnk^2 ). This simplifies to V_xnk^-2( 1/n_kโ_i โโ_nk[S_-k(X_i)-Sฬ_nk]^4 ) - 1. By a binomial expansion,
1/n_kโ_i โโ_nkS_-k(X_i)^4/V_xnk^2 + (V_xnk/V_xnk)^2_โ^p 1โ_โ = 1^4 4 โ (-1)^โSฬ_nk^โ/V_xnk^โ/2_o_p(1)[1/n_kโ_i โโ_nkV_xnk^2-โ/2S_-k(X_i)^4-โ]_O_p(1) - 1.
By Assumption <ref>.(iv), the bound on V_xnk, and the moment bounds for V_xnk^-1/2S_-k(X_i), then for โโ{1,2,3,4}, [1/n_kโ_i โโ_nkV_xnk^2-โ/2S_-k(X_i)^4-โ] is O_p(1). By the weak law of large numbers, V_xnk^-1/2Sฬ_nk = o_p(1) and by (<ref>), V_xnk/V_xnkโ^p 1. Hence 1/n_kโ_i โโ_nkT_i^2 = ๐ผ[V_xnk^-1S_-k(X_i)^4 |โ_-nk]-1 + o_p(1) = ๐(V_xnk^-1/2S_-k(X_i)^2) + o_p(1). This shows that the diagonals of H_nk converge to their population analogs. The proof of convergence for the off-diagonals is similar. By the form of Q_ww,nk in (<ref>), ฮฅ J_nk^-1 = ฮฅ. Hence ฮฅJ_nk^-1H_nkJ_nkฮฅ' = ฮฅ J_nk^*-1H_nk J_nk^*ฮฅ' + o_p(1) = ฮฅ H_nkฮฅ' + o_p(1) = ฮฉ_nk + o_p(1).
We start by proving that โ V_ฯ(ฮณ) - V_ฯ^*(ฮณ, โ_-nk) โโค V_ฯ. Since the second term of (<ref>) is always non-negative, V_ฯ^*(ฮณ,โ_-nk) โค V_ฯ(ฮณ). Also, since ฮฒ_1 =ฯ_ฮณ,av and ฮฒ_2 = 0 are part of the feasible set, then the second term is at most V_ฯ(ฮณ) = ๐ผ_ฮณ[(ฯ_ฮณ(X)-ฯ_ฮณ,av)^2]. This shows that V_ฯ^*(ฮณ,โ_-nk) โฅ 0.
To prove the second bound, we examine the solution to the best linear projection. Since the folds are independent, conditioning on โ_-nk is only important to be able to handle S_-k(x) as a deterministic function. For ease of exposition, let F denote the conditional distribution of ฮณ given โ_-nk, and let โ g โ_F,2 = โ(๐ผ_F[โ g(X) โ^2]) denote the L_2 norm.
Since ๐ผ_F[S_-k(X)] = 0, the optimal solution is ฮฒ_1F^* = ฯ_F,av. When ๐_F(S_-k(X)) = 0, the optimal ฮฒ_2F^* is indeterminate, and V_ฯ^*(ฮณ,โ_-nk) = 0. Otherwise,
(ฮฒ_2F^* - 1)= Cov_F(S_-k(X),ฯ_F(X))/๐_F(S_-k(X))- 1 = Cov_F(S_-k(X),ฯ_F(X)-S_-k(X))/๐_F(S_-k(X)).
By the Cauchy-Schwarz inequality, (ฮฒ_2F^*-1)^2๐_F(S_-k(X)) โค๐ผ_F[(ฯ_ฮณ(X)-ฯ_F,av-S_-k(X))^2]. Rearranging (<ref>) and substituting the optimum:
V_ฯ - V_ฯ^* = ๐ผ_F[(ฯ_ฮณ(X)-ฯ_av-S_-k(X) - (ฮฒ_2^*-1)S_-k(X))^2].
By the triangle inequality for the L_2 norm, and the above Cauchy-Schwartz bound,
V_ฯ(ฮณ) - V_ฯ^*(ฮณ,โ_-nk) โค[ 2 โฯ_F(X)-ฯ_F,av -S_-k(X)โ_F,2]^2.
Substituting S_-k(x) = ฯ_-k(x)-๐ผ_F[ฯ_-k(X)], (<ref>) can be rewritten as 4 โ (ฯ_ฮณ(X) -ฯ_-k(X)) - (๐ผ_F[ฯ_ฮณ(X)]-๐ผ_F[ฯ_-k(X)]) โ^2 โค 16 โฯ_-k - ฯ_ฮณโ_F,2^2. By the law of iterated expectations ฯ(ฮณ) := ๐ผ_ฮณ[โฯ_-k(X) - ฯ_ฮณ(X) โ^2] = ๐ผ_ฮณ[โฯ_-k - ฯ_ฮณโ_F,2^2]. Combining the two bounds and applying Jensen's inequality,
๐ผ_ฮณ[|V_ฯ(ฮณ)-V_ฯ^*(ฮณ,โ_-nk)|] โค๐ผ_ฮณ[min{16 โฯ_-k - ฯโ_F,2^2,V_ฯ(ฮณ)}] โคmin{16ฯ(ฮณ),V_ฯ(ฮณ)}.
Decompose ฮ_nk = ฮ_nk1 + ฮ_nk2, where ฮ_nk1 := V_ฯ(ฮณ_n) - V_ฯ^*(ฮณ_n,โ_-nk), and ฮ_nk2 := V_ฯ nk-V_ฯ^*(ฮณ_n,โ_-nk). First, by Lemma <ref> and Assumption <ref>, ๐ผ_ฮณ_n[โฮ_nk1โ] โคmin{16ฯ(ฮณ_n)^2,V_ฯ(ฮณ_n)} = o(n_k^-1/2). By Markov's inequality, ฮ_nk1 = o_p(n_k^-1/2). Second, conditional on a sequence {โ_-nk}_n=1^โ, by Theorem <ref> and Lemma <ref>, โฮ_nk2โ = O_p( max{1/n_k,โ(V_ฯ^*(ฮณ_n,โ_-nk))/โ(n_k)}). Since V_ฯ^*(ฮณ_n,โ_-nk) โค V_ฯ(ฮณ_n) = o(1), then almost surely, for all ฮต > 0, ฯ_n,ฮต(โ_-nk) := โ_ฮณ_n(n_k^1/2โฮ_nk2โ > ฮต|โ_-nk) โ 0. By iterated expectations and the bounded convergence theorem, lim_n_k โโโ_ฮณ_n(n_k^1/2โฮ_nk2โ < ฮต ) = lim_n_k โโ๐ผ_ฮณ_n[ฯ_n,ฮต(โ_-nk)] = ๐ผ_ฮณ_n[lim_n_k โโฯ_n,ฮต(โ_-nk)] = 0.
If, in addition, n_k^1/2 + ฯV_ฯ(ฮณ_n)โ 0 for ฯโ [0,1/2), then n_k^1/2+ฯฮ_nk1 = o_p(1) (the second bound dominates). Since n_k^ฯV_ฯ(ฮณ_n) โ 0 as well and ฯ < 1/2, then conditional on โ_-nk, n_k^1/2+ฯฮ_nk2 = o_p(1). We can apply the bounded convergence theorem once again to show that โ_ฮณ_n(n_k^1/2+ฯโฮ_nk2โ < ฮต ) = o(1).
Consider a sequence of distributions {ฮณ_n}_n=1^โ, with associated nuisance functions (ฯ_n(x),ฮผ_0n(x),p_n(x),ฯ_n,av).
Case 1 (Near Homogeneity): V_ฯ(ฮณ_n) โ 0. By the triangle inequality,
โ(n_k)โV_ฯ nk- 1/n_kโ_i โโ_nkฯ_i โ โคโ(n_k)โV_ฯ nk -V_ฯ(ฮณ_n) โ_ฮพ_n1 +
โ(n_k)โ1/n_kโ_i โโ_nkฯ_i - V_ฯ(ฮณ_n) โ_ฮพ_n2.
By Theorem <ref>, ฮพ_n1 = o_p(1). Since the ฯ_i are i.i.d with mean V_ฯ(ฮณ_n), then ๐ผ_ฮณ_n[ฮพ_n2^2] = ๐_ฮณ_n(ฯ_i). Let ฯ_dn^2(x) = ๐_ฮณ_n(Y_i | X_i=x,D_i=d) and p_n(x) = โ_ฮณ_n(D_i | X_i =x). By Lemma <ref> and Corollary <ref>,
๐_ฮณ_n(ฯ_i) โคฮบ^2 V_ฯ(ฮณ_n)^2+ 4ฮบ V_ฯ(ฮณ_n)โ(๐ผ_ฮณ_n[ ( ฯ_1n^2(X_i)/p_n(X_i)+ฯ_0n^2(X_i)/1-p_n(X_i)) ]). Since V_ฯ(ฮณ_n) โ 0 as n โโ, then ๐ผ[ฮพ_n2^2] โ 0 and hence ฮพ_n2 = o_p(1).
Case 2 (Strong Heterogeneity): V_ฯ(ฮณ_n) โ V_ฯโ (0,โ). For fixed (x,d), the cross-fitted regressors are W(x,d) = [ 1, ฮผ_0,-k(x) + p_n(x)ฯ_-k(x), (d-p_n(x)), (d-p(x))S_-k(x)]. Let e_โ be a 4 ร 1 vector with a one in the โ^th coordinate and zero otherwise. Analogous to (<ref>), define the regression adjusted nuisance functions as
ฮท_-k,ฮธ_nk(x) = [e_1 (W(x,1) -W(x,0))' +e_2 W(x,0)']ฮธ_nk + e_3 p_n(x) + e_4 ฯ_nk,av.
By applying Lemma <ref> to the subsample in i โโ_nk,
V_ฯ n k = 1/n_kโ_i โโ_kฯ(Y_i,D_i,X_i,ฮท_-k,ฮธ_nk),
where ฮท_-k,ฮธ_nk,r(x) := ฮท(x) + r (ฮท_-k,ฮธ_nk(x)-ฮท(x)). The true nuisance function ฮท depends on the distribution indexed by n, but we drop the subscript to simplify the notation. By a second-order term in the Taylor expansion around r = 0, for some rฬโ (0,1),
โ(n_k)(V_ฯ n k-V_ฯ(ฮณ_n)) = 1/โ(n_k)โ_iโโ_nk [ฯ(Y_i,D_i,X_i,ฮท)-V_ฯ(ฮณ_n)]
+ 1/โ(n_k)โ_iโโ_nkโฯ(Y_i,D_i,X_i,ฮท)/โฮท'(ฮท_-k,ฮธ_nk(X_i)-ฮท(X_i))
+ 1/n_kโ_i=1^n โ(n_k) (ฮท_-k,ฮธ_nk(X_i)-ฮท(X_i))'โ^2 ฯ(Y_i,D_i,X_i,ฮท_-k,ฮธ_nk,rฬ)/โฮท'(ฮท_-k,ฮธ_nk(X_i)-ฮท(X_i)).
Our ultimate goal is to show that the second and third terms of the expansion are o_p(1). To keep the notation concise, let ฯ_i(ฮท) := ฯ(Y_i,D_i,X_i,ฮท). One of the main challenges is that the nuisance functions are estimated in multiple steps, combining information from the โ_nk and โ_-nk subsamples. The key is to decompose these different sources of uncertainty. Define (ฮป_nk-ฮป_nk) := [(ฮธ_nk-ฮธ_n),ฯ_nk,av(ฮธ_nk-ฮธ_n),(ฯ_nk,av-ฯ_n,av)]', where ฮธ_n = (0,1,ฯ_n,av,1). By Lemma <ref>, there exist matrices ฮจ_-nk(x) and ฮ_-nk(x) that are (x,โ_-nk)-measurable, such that for all x โ๐ณ,
ฮท_-k,ฮธ_nk(x) - ฮท(x) = ฮจ_-nk(x)(ฮป_nk-ฮป_nk) + ฮ_-nk(x).
Let e_โ be a 4 ร 1 elementary basis vector. Lemma <ref> shows that e_3'[ฮท_-k,ฮธ_nk(x) - ฮท(x)] = 0 and e_3' ฮ_-nk(x) = 0 because the experimental propensity scores are known. Lemma <ref> also shows that e_4' ฮ_-nk(x) = 0 by construction. Substituting (<ref>),
1/โ(n_k)โ_i โโ_nkโฯ_i(ฮท)/โฮท'(ฮท_-k,ฮธ_nk(X_i)-ฮท(X_i))
=1/โ(n_k)โ_i โโ_nkโ_โ =1^4 โฯ_i(ฮท)/โฮท_โe_โ'(ฮท_-k,ฮธ_nk(X_i)-ฮท(X_i))
= [1/โ(n_k)โ_i โโ_nkโ_โ 3โฯ_i(ฮท)/โฮท_โe_โ' ฮจ_-nk(X_i)]_ฮ_1nk(ฮป_nk-ฮป_nk) + [1/โ(n_k)โ_i โโ_nkโ_โ =1^2โฯ_i(ฮท)/โฮท_โe_โ'ฮ_-nk(X_i) ]_ฮ_2nk.
Lemma <ref> implies that ฮป_nk-ฮป_nk = o_p(1).
(i) Prove that ฮ_1nk = O_p(1): By Lemma <ref>.(a) the conditional mean of โฯ_i(ฮท)/โ (ฮท_1,ฮท_2,ฮท_4)' given (x,โ_-nk) is [0,0,-2(ฯ(x)-ฯ_n,av)] and by Lemma <ref>, e_4'ฮจ_-nk = c'. Also, ๐ผ_ฮณ_n[ ฯ_n(X_i) |โ_-nk] = ๐ผ_ฮณ_n[ ฯ_n(X_i)] = ฯ_n,av by fold independence.
By the law of iterated expectations, ๐ผ_ฮณ_n[ โฯ_i(ฮท)/โฮท_โe_โ' ฮจ_-nk(X_i) |โ_-nk] = 0 for โโ{1,2,4}. By Assumption <ref>, ๐ผ_ฮณ_n[โฮท(X_i)โ^4] and ๐ผ_ฮณ_n[ โ U_i โ^4 ] are uniformly bounded by a constant C < โ. By Assumption <ref>.(ii), p_n(x) is contained in [ฮด,1-ฮด]. Applying Lemma <ref>.(b), ๐ผ_ฮณ_n[ โโฯ_i(ฮท)/โฮท_โโ^4 ]^1/4โค (16/ฮด)C < โ. By Lemma <ref>.(d), and the triangle inequality, ๐ผ_ฮณ_n[ โ e_โ' ฮจ_-nk(X_i)โ^4]^1/4โค C [1+๐ผ_ฮณ_n[โฮท_-k(X_i)โ^4]^1/4]. By the bound in Assumption <ref>.(iii) and the Cauchy-Schwartz inequality, ๐ผ_ฮณ_n[โโฯ_i(ฮท)/โฮท_โe_โ' ฮจ_-nk(X_i)โ]<โ. We can combine these moment bounds to apply Lemma <ref>.(a), hence ฮ_1nk= O_p(1).
(ii) Prove that ฮ_2nk = o_p(1): By Lemma <ref>.(a), ๐ผ_ฮณ_n[ โฯ_i(ฮท)/โฮท_โe_โ' ฮ_-nk(X_i) |โ_-nk] = 0 for โโ{1,2}. Using similar arguments ๐ผ_ฮณ_n[ โโฯ_i(ฮท)/โฮท_โโ^4] < C. By Lemma <ref>.(b), โฮ_nk(x) โโค C รโฮท_-k(x) - ฮท(x) โ. By Assumption <ref>.(ii), ๐ผ_ฮณ_n[ โฮท_-k(x) - ฮท(x) โ^4] = o(1), which means that ๐ผ_ฮณ_n[โโฯ_i(ฮท)/โฮท_โe_โ' ฮจ_-nk(X_i)โ^2] is o(1). By Lemma <ref>.(b), ฮ_2nk = o_p(1).
Let ฮ = [e_1,e_2,e_4] be a 4 ร 3 matrix such that โฮโโค 1. Let ฮ_3nk be the second-order term on the right-hand side of (<ref>). Since the propensity score is known,
ฮ_3nk :=
1/n_kโ_i โโ_nkโ(n_k) (ฮท_-k,ฮธ_nk(X_i)-ฮท(X_i))'ฮโ^2 ฯ_i(ฮท_-k,ฮธ_nk,rฬ)/โ (ฮท_1,ฮท_2,ฮท_4)โ (ฮท_1,ฮท_2,ฮท_4)'ฮ'(ฮท_-k,ฮธ_nk(X_i)-ฮท(X_i)).
(iii) Prove that ฮ_3nk = o_p(1): By Lemma <ref>.(d), โโ^2 ฯ_i(ฮท_-k,ฮธ_nk,rฬ)/โ (ฮท_1,ฮท_2,ฮท_4)โ (ฮท_1,ฮท_2,ฮท_4)'โโค 18 รโ(3)/ฮด. For scalars, a,b โโ, (a+b)^2 โค 4(a^2 + b^2). By (<ref>) and the triangle inequality, โฮ_3nkโโค 4ร18โ(3)/ฮด( โ(n_k)โฮป_nk-ฮป_nkโ^2 [ 1/n_kโ_i โโ_nkโฮจ_-nk(X_i)โ^2 ] + [ 1/n_kโ_i โโ_nkโ(n_k)โฮ_-nk(X_i)โ^2 ] ). By Lemma <ref>, โ(n_k)โฮป_nk-ฮป_nkโ^2 = o_p(1). By Lemma <ref>.(d), โฮจ_-nk(x) โโค C[1+ 2โฮท_-k(x) โ]. By the moment bound in Assumption <ref>.(i) and Markov's inequality, then 1/n_kโ_i โโ_nkโฮจ_-nk(X_i)โ^2 = O_p(1). By Lemma <ref>.(e), โฮ_-nk(x) โโค C รโฮท_-k(x) - ฮท(x) โ. By Assumption <ref>.(iii), โ(n_k)๐ผ_ฮณ_n[โฮท_-k(X_i)-ฮท(X_i)โ^2] = o(1), then by Lemma <ref>.(c), 1/n_kโ_i โโ_nkโ(n_k)โฮ_-nk(X_i)โ^2 = o_p(1). Combining these results, ฮ_3nk = o_p(1).
Let ฯ(ฮณ,โ_-nk) = โ_ฮณ(V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ nk|โ_-nk) denote the conditional probability that the pseudo-VCATE is contained in the confidence interval. Let โฑ(ฮณ) be the support of โ_-nk, and let โฑ(ฮณ,t) โโฑ(ฮณ). Almost surely,
inf_โ_-nkโโฑ(ฮณ)ฯ(ฮณ,โ_-nk)
โค๐ผ_ฮณ[ฯ(ฮณ,โ_-nk)] โคsup_โ_-nkโโฑ(ฮณ,t)ฯ(ฮณ,โ_-nk) + โ_ฮณ(โ_-nkโโฑ(ฮณ,t)).
The left inequality considers the worst-case coverage. The right inequality applies the law of iterated expectations by the event โ_-nkโโฑ^*(ฮณ,t), then bounds โ_ฮณ(โ_-nkโโฑ(ฮณ,t)) โค 1 and ฯ(ฮณ,โ_-nk) โค 1 to simplify the expressions. Applying limits,
liminf_n โโ inf_ฮณโฮ inf_โ_-nkโโฑ(ฮณ) ฯ(ฮณ,โ_-nk) โค liminf_n โโ inf_ฮณโฮ โ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k),
and an analogous result for the upper bound. The data in โ_-nk only affects fold k through the estimated nuisance functions ฮท_-k(x). To prove uniform coverage we will derive the bounds for a class of distributions where the (Y_i,D_i,X_i) in fold k is distributed as ฮณ and the plug-in nuisance functions are deterministic. We will define โฑ(ฮณ,t) as the set where |V_ฯ^*(ฮณ,โ_-nk)ฮฉ_nk,12| โค t for some fixed t > 0.
Let {ฮณ_n}_n=1^โ and {โ_-nk}_n=1^โ denote a sequence of distributions and data realizations, respectively. Theorem <ref> verifies that under Assumptions <ref>, <ref>, <ref>, and <ref>, the conditional CDFs {F_ฮณ_n|โ_-nk}_n=1^โ satisfy the high-level Assumption <ref>, almost surely. Furthermore, CI_ฮฑ nk satisfies the form in (<ref>), substituting (V_ฯ nk,ฮฉ_nk). Therefore, Lemma <ref>.(a) shows that the uniform lower bound on the asymptotic coverage probability is (1-ฮฑ). If Assumption <ref> also holds, then for all t > 0, n โโlimsupsup_ฮณโฮโ_ฮณ(โ_-nkโโฑ^*(ฮณ,t)) = 0. Hence by Lemma <ref>.(ii), the upper bound is (1-ฮฑ) plus a term that can be made arbitrarily small by choosing t close to zero.
Consider an arbitrary sequence of distributions {ฮณ_n}_n=1^โโฮ^โ and a convergent subsequence where V_ฯ(ฮณ_n_โ) โ V_ฯ. Since V_ฯ^*(ฮณ_n_โ,โ_-n_โ k)โค V_ฯ n_โ and ฮฉ_n_โ k has bounded eigenvalues, if V_ฯ = 0 then lim_โโโโ_ฮณ_n_โ(โ(V_ฯ^*(ฮณ_n_โ,โ_-n_โ k))|ฮฉ_n_โ k,12| > t) = 0. Now suppose that V_ฯ > 0. By Assumption <ref>, the nuisance functions converge to their true value. Then ฯ_-k(x) converges point-wise to ฯ(x), and by the moment bound in <ref>.(v) and the dominated convergence theorem, V_xn_โ k = ๐_ฮณ_n_โ(ฯ_-k(X) |โ_-n_โ k) = V_ฯ(ฮณ_n_โ)+o_p(1) โฅ V_ฯ + o_p(1). By (<ref>),
ฮฉ_n_โ k = ๐_ฮณ_n_โ([ ฮป_n_โ(X_i)(D_i-p_n_โ(x_i))V_xn_โ k^-1/2S_-k(X_i) U_i; V_xn_โ k^-1S_-k(X_i)^2 ]|โ_-n_โ k),
where U_in_โ = Y_i - W_i'ฮธ_n_โ k. By Assumption <ref>, for fixed {D_i=d,X_i = x}, W_i' point-wise converges to W_i^*' := [1, ฮผ_0n_โ(x) + p_n(x)ฯ_n_โ(x), (d-p_n_โ(x)) , (d-p_n_โ(x))ฯ_n_โ(x)]. By Lemma <ref>, ฮธ_n_โ kโฮธ_n_โ := [0,1,ฯ_n_โ,av,1]. By Lemma <ref>, W_i^*'ฮธ_n_โ = ฮผ_d,n_โ(x), and hence U_in_โ^* := Y_i - W_i^*'ฮธ_n_โ. Since ฮฉ_n_โ k is almost surely bounded by Assumption <ref> over random partitions โ_-n_โ k, then applying the dominated convergence theorem,
ฮฉ_n_โ k = ๐_ฮณ_n_โ([ ฮป_n_โ(X_i)(D_i-p_n_โ(x_i))V_ฯ^-1/2(ฯ_n_โ(X_i)-ฯ_n_โ ,av) U_in_โ^*; V_ฯ^-1(ฯ_n_โ(X_i)-ฯ_n_โ,av)^2 ]|โ_-n_โ k) + o_p(1),
Since ๐ผ_ฮณ_n[U_in_โ^* | D_i = d,X_i = x,โ_-n_โ k] = 0 and the second component of (<ref>) only depends on X_i. By iterated expectations, ฮฉ_n_โ k,12 = o_p(1) and the limiting probability is lim_โโโโ_ฮณ_n_โ(โ(V_ฯ^*(ฮณ_n_โ,โ_-n_โ k))|ฮฉ_n_โ k,12| > t) = 0. Hence we verified Assumption B^* in <cit.>. Uniform consistency follows from their Corollary 2.1.
We break-down the proof into cases.
Case (i): When V_ฯ(ฮณ) = 0, then V_ฯ(ฮณ,โ_-nk) = 0 almost surely. Therefore, n_k( V_ฯ n k -V_ฯ^*(ฮณ_n,โ_-nk)) = n_k( V_ฯ n k^*-V_ฯ(ฮณ)). Define ฯ(ฮณ,โ_-nk) as in Theorem <ref>. Then n โโliminfinf_โ_-nkโโฑ(ฮณ)ฯ(ฮณ,โ_-nk) โคn โโliminfโ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k). We can prove an analogous upper bound. This implies that we only need to derive coverage bounds under sequences of conditional distributions where V_ฯ^*(ฮณ_n,โ_-nk) = 0. Exact coverage follows from the proof of the near-homogeneity case in Lemma <ref>.
Case (ii) When V_ฯ(ฮณ) > 0. By Assumption <ref>, โ(n_k)ฯ(ฮณ)^2 โ 0 as n_k โโ. Then by Lemma <ref>, โ(n_k)| V_ฯ(ฮณ)-V_ฯ^*(ฮณ,โ_-nk) | = o_p(1), and by the continuous mapping theorem, โ(V_ฯ^*(ฮณ,โ_-nk))โ^p โ(V_ฯ(ฮณ)) > 0. By (<ref>) in the proof of Lemma <ref> and for fixed ฮณ_n = ฮณ, ฮฉ_nk = ฮฉ + o_p(1), where ฮฉ is a population covariance matrix that does not depend on โ_-nk. By applying similar limits to the mild heterogeneity case in Lemma <ref> we can show that (V_ฯ nk-V_ฯ(ฮณ)) is โ(n_k) asymptotically equivalent to an empirical process indexed by the oracle V_ฯ(ฮณ). We obtain exact coverage due to Lemma <ref>.
By the first part of Theorem <ref>,
limsup_n โโ sup_ฮณโฮ โ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k) = 1- liminf_n โโ inf_ฮณโฮ โ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k) โคฮฑ.
The result follows taking limits on either side of the inequality in (<ref>).
By the definition in (<ref>),
V_ฯ(ฮณ) โฅinfCI_ฮฑ n^multifold if and only if 1/Kโ_k=1^K 1{ V_ฯ(ฮณ) โฅinfCI_ฮฑ nk}โฅ1/2.
By negating the statement, computing expectations, and applying Markov's inequality,
โ_ฮณ(V_ฯ(ฮณ) < infCI_ฮฑ n^multifold) =
๐ผ_ฮณ[ 1{(1/Kโ_k1{ V_ฯ(ฮณ) < infCI_ฮฑ/2 nk}) > 1/2 }]
โค 2 1/Kโ_k๐ผ_ฮณ[1{ V_ฯ(ฮณ) < infCI_ฮฑ/2 nk}]
โค 2 โ_ฮณ(V_ฯ(ฮณ) < infCI_ฮฑ/2 nk) โค 2 โ_ฮณ(V_ฯ(ฮณ,โ_-nk^*) โCI_ฮฑ/2 nk)
The last line follows from the fact that the folds are split at random and the inequality in (<ref>) holds almost surely. By Theorem <ref> the asymptotic size is uniformly bounded by 2(ฮฑ/2) = ฮฑ. By construction, โ_ฮณ(V_ฯ(ฮณ) โCI_ฮฑ n^multifold) = โ_ฮณ(V_ฯ(ฮณ) < infCI_ฮฑ n^multifold) + โ_ฮณ(V_ฯ(ฮณ) > supCI_ฮฑ n^multifold). Applying similar arguments as before, โ_ฮณ(V_ฯ(ฮณ) โinfCI_ฮฑ n^multifold) โค 2 โ_ฮณ(V_ฯ(ฮณ) โCI_ฮฑ/2 nk). If Assumptions <ref>, and <ref> also hold, then Theorem <ref> implies that the right-hand side is point-wise bounded by ฮฑ.
Under the null hypothesis, G(n_k,0,ฮฉ_nk,Z, ฮถ) = (e_1'ฮฉ_nk^1/2Z)^2/n_k = ฮฉ_nk,11Z^2/n_k, where Z โผ๐ฉ(0,1). The adjusted critical values are q_ฮฑ/2(n_k,0,ฮฉ_nk,ฮถ) = 0 and q_1-ฮฑ/2(n_k,0,ฮฉ_nk,ฮถ) = ฮฉ_nk,11 z_1-ฮฑ^2 / n_k. Then 0 โCI_ฮฑ nk if and only if 0 โคV_ฯ n k - 0 โคฮฉ_nk,11 z_1-ฮฑ^2 / n_k. Following similar steps to the โnear homogeneityโ regime in Theorem <ref>, n_k(V_ฯ n k -V_ฯ^*(ฮณ_n,โ_-nk)) = (ฮฉ_โ,11^1/2Z_nk + โ(v))^2 - v + o_p(1), where Z_nkโผ๐ฉ(0,1). Consequently, n_kV_ฯ n k = (ฮฉ_โ,11^1/2Z_nk + โ(v))^2 + o_p(1). Then
โ_ฮณ_n(0 โCI_ฮฑ nk|โ_-nk) = โ_ฮณ_n(0 โค n_kV_ฯ nkโคฮฉ_nk,11 z_1-ฮฑ^2 )
= โ_ฮณ_n( 0 โค (ฮฉ_โ,11^1/2Z_nk+ โ(v))^2 โคฮฉ_โ,11 z_1-ฮฑ^2 ) + o(1)
= โ_ฮณ_n( -โ(v)/โ(ฮฉ_โ,11)โค Z_nk +o_p(1) โค z_1-ฮฑ-โ(v)/โ(ฮฉ_โ,11)) + o(1).
and hence lim_n โโโ_ฮณ_n(0 โCI_ฮฑ nk|โ_-nk) = ฮฆ(z_1-ฮฑ-โ(v)/โ(ฮฉ_โ,11)) - ฮฆ( -โ(v)/โ(ฮฉ_โ,11)). The final result is obtained by 1-โ_ฮณ_n(0 โCI_ฮฑ nk|โ_-nk).
The first part of the proof is identical to that of Theorem <ref> in terms of setting up the problem, and defining a sequence the conditional distributions {F_ฮณ_n|โ_-nk}_n=1^โ. By equation (<ref>) this sequence satisfies Assumption <ref> almost surely with an effective sample size รฑ = n/ r_n and a particular choice of covariance estimator. To complete the proof, we develop a modified version of Lemma <ref> to prove uniform coverage under cluster dependence.
Consider a sequence of distributions {ฮณ_n}_n=1^โโฮ^โ and a subsequence {n_โ}_โ=1^โ, where n_โ/r_n_โ V_ฯ n_โ^* โ^p v โ [0,โ), V_ฯ n_โ^* โ^p 0, and ฮฉ_n_โโฮฉ as n_โโโ. Applying Lemma <ref> , and factorizing terms as in Lemma <ref>,
n_โ/r_n_โ( V_ฯ n_โ^*-V_ฯ n_โ^*) = (e_1'ฮฉ^1/2Z_n_โ + โ(v))^2 - v + o_p(1).
Now we need to show that the quantiles of the empirical process converge to the same limit. The quantity n_โ/r_n_โ G(n_โ ,V_ฯ n_โ^*,ฮฉ_n_โ,Z) is equal to
n_โ(e_1'r_n_โ^-1/2ฮฉ_n_โ^1/2Z)^2/n_โ + 2 โ(n_โ V_ฯ n_โ^*/r_n_โ)(e_1'r_n_โ^-1/2ฮฉ_n_โ^1/2Z)+ โ(n_โ V_ฯ n_โ^*/r_n_โ)โ(V_ฯ n_โ^*) (e_2'r_n_โ^-1/2ฮฉ_n_โ^1/2Z).
In the first term the n_โ components cancel out and r_n_โ^-1ฮฉ_n_โโ^p ฮฉ. Similarly, since n_โ/r_n_โ V_ฯ n_โ^* โ^p v the second term of (<ref>) converges to 2โ(v)e_1'ฮฉ^1/2Z, and a suitable factorization with the first and second the expression in (<ref>). Since n_โ/r_n_โโโ by assumption, then V_ฯ n_โ^* โ 0 and the third term of (<ref>) is o_p(1). Therefore the estimated quantiles are consistent. Proving consistency of the quantiles for the mild hereterogeneity proceeds analogously. Once we prove that the quantiles are asymptotically correct, the rest of the proof is the same as in Lemma <ref>.
The first part of the proof is identical to that of Theorem <ref> in terms of setting up the problem. In this case the sequence of conditional distributions {F_ฮณ_n|โ_-nk}_n=1^โ only satisfies Assumption <ref> for subsequences where V_xnk = 0. Instead, I will modify the first part of the proof of Lemma <ref>.(i) for a class of regression-adjusted CIs with possible degeneracy. Consider a sequence of distributions {ฮณ_n}_n=1^โโฮ^โ and let h_n := (n V_ฯ n^*,V_ฯ n^*,vec(ฮฉ_n),ฮถ_n,V_xn) be a sequence of parameters where V_xn is the variance of S_-k(X_i). Our goal is to show that for h โโ and all subsequences {n_โ}_โ =1^โ where h_n_โโ h โโ,
lim_n_โโโโ_ฮณ_n_โ( V_ฯ n_โ^* โCI_ฮฑ n k^0 ) โฅ 1-ฮฑ.
Partition the subsequences in such a way that h_n_โ either has V_xn_โ = 0 or V_xn_โ > 0. When V_xn_โ = 0, V_ฯ n_โ^* is exactly degenerate and the confidence interval (<ref>) covers the pseudo-VCATE with probability one. For the sequences where V_xn_โ > 0, we can apply the remaining cases from Lemma <ref>, which have coverage 1-ฮฑ. This satisfies Assumption B in <cit.>, and we can prove uniform conservative coverage of the pseudo-VCATE by applying their Corollary 2.1.
We prove point-wise coverage by cases. When V_ฯ(ฮณ) = 0, then V_ฯ^*(ฮณ,โ_-nk) = 0 almost surely. By applying the result above and the near homogeneity case in Theorem <ref> we find that coverage is at least (1-ฮฑ). Now consider the case where V_ฯ(ฮณ) is bounded away from zero. By Assumption <ref>, the nuisance functions converge to their true value. Then ฯ_-k(x) converges point-wise to ฯ(x), and by the moment bound in <ref>.(v) and the dominated convergence theorem, ๐_ฮณ_n(ฯ_-k(X) ) = V_ฯ(ฮณ)+o(1), which is bounded away from zero. Then we can apply the mild heterogeneity results in Theorem <ref> to prove (<ref>). The proof of (<ref>) is identical to that of Theorem <ref>, substituting the confidence intervals CI_ฮฑ nk^0 instead of CI_ฮฑ nk. Point-wise coverage of the fold-specific confidence intervals holds by (<ref>).
The proof of (<ref>) and (<ref>) follows a similar structure to Corollary <ref> and Theorem <ref>, respectively. In each case the only thing that changes is that we apply the uniformity result for degenerate CIs in (<ref>) (Lemma <ref>), rather than the non-degenerate uniformity result in Theorem <ref>.
ยง SUPPORTING LEMMAS AND PROOFS
Suppose that ฮ is a set of distributions constrained in such a way that Assumption <ref> holds. Let V_ฯ^*(ฮณ) = ฮฒ_2(ฮณ)^2V_x(ฮณ) and ฮฉ(ฮณ) be the pseudo-VCATE and covariance matrix associated with ฮณ, respectively. If CI_ฮฑ n is a confidence interval obtained by substituting (V_ฯ n,ฮฉ_n) into (<ref>), then
* 1-ฮฑโคn โโliminfinf_ฮณโฮโ_ฮณ( V_ฯ^*(ฮณ) โCI_ฮฑ n k)
* If in addition, ฮฉ_12(ฮณ)V_ฯ(ฮณ) โค t, then n โโlimsupsup_ฮณโฮโ_ฮณ( V_ฯ^*(ฮณ,โ_-nk) โCI_ฮฑ n k) โค (1-ฮฑ) + ฮฑฬ(t), where ฮฑฬ(t) โฅ 0 and lim_t โ 0ฮฑฬ(t) = 0.
Let {ฮณ_n โฮ: n โโ} denote a sequence of distributions. Our goal is to verify that the confidence interval satisfies the assumptions of Corollary 2.1(c) in <cit.>. Define a sequence of parameters, h_n := (n V_ฯ n^*,V_ฯ n^*,vec(ฮฉ_n),ฮถ_n). By Assumption <ref>, each element is contained in โ, a subset of the extended Euclidean space in which ฮฉ_n is positive-definite with bounded eigenvalues. The quantity nV_ฯ n^* is positive but unbounded, and can converge to +โ. Assumption B in <cit.> is stated in terms of subsequences and the first step is to write the problem in this way. We show that for h โโ and all subsequences {n_โ}_โ =1^โ where h_n_โโ h โโ,
1-ฮฑโคlim_n_โโโโ_ฮณ_n_โ( V_ฯ n_โ^* โCI_ฮฑ n) โค (1-ฮฑ) + ฮฑฬ(t), ฮฑฬ(t) โฅ 0, ฮฑโ [0,1]
We break down the proof by cases.
(a) Near homogeneity case: Suppose that n_โ V_ฯ n_โ^* โ^p v โ [0,โ), V_ฯ n_โ^* โ^p 0, ฮถ_n_โโฮถ^* โ{0,1} and ฮฉ_n_โโฮฉ as n โโ, where ฮฉ is positive-definite. For this case, โ(n_โ)V_ฯ n_โ^* = o(1). By applying Lemma <ref>,
n_โ( V_ฯ n_โ^*-V_ฯ n_โ^*) =
(e_1'ฮฉ^1/2Z_n_โ)^2 + 2 ฮถ^*โ(v)e_1'ฮฉ^1/2Z_n_โ + o_p(1)
= (e_1'ฮฉ^1/2Z_n_โ + ฮถ^* โ(v))^2 - v + o_p(1).
Let Z โผ๐ฉ(0,I_2ร 2) (independent of n_โ). Since ฮฉ_n_โ = ฮฉ + o_p(1), then the estimated empirical process at V_ฯ n_โ^* has the same limiting distribution as the estimator. For a fixed ฮถโ{-1,1} (that may differ from ฮถ^*),
n_โ G(n,V_ฯ n_โ^*,ฮฉ_n_โ,Z,ฮถ)
= (e_1'ฮฉ^1/2Z + ฮถโ(v))^2 - v + o_p(1).
Define the limiting CDF as H_ฮฉ,v,ฮถ(แนฝ) := โ_ฮณ_n_โ( (e_1'ฮฉ^1/2 Z + ฮถโ(v))^2 - v โคแนฝ), where Z โผ๐ฉ(0,I_2ร 2). Since Z has mean zero, H_ฮฉ,v,ฮถ = 1(แนฝ) = H_ฮฉ,v,ฮถ = -1(แนฝ) = H_ฮฉ,ฮฝ(แนฝ), which does not depend on ฮถ. Since ฮฉ is positive-definite, the function H_ฮฉ,v(แนฝ) is continuous and strictly increasing. Let F_n_โ,V_ฯ n_โ^*,ฮฉ_n โ,ฮถ_n_โ(แนฝ) = F_n_โ,V_ฯ n_โ^*,ฮฉ_n_โ,ฮถ_n_โ(แนฝ/n_โ) โ H_ฮฉ,v(แนฝ). Since H_ฮฉ,v is continuous, then <cit.> implies that this convergence is uniform in แนฝ, and since the limiting CDF is strictly increasing,
F_n_โ,V_ฯ^*,ฮฉ_n,ฮถ(V_ฯ n_โ-V_ฯ^*) = Fฬ_n_โ,V_ฯ n_โ^*,ฮฉ_n,ฮถ(n_โ(V_ฯ n_โ-V_ฯ^*)) โ^d U_n_โ,
for all ฮถโ{-1,1} where U_n_โโผ Uniform[0,1]. The test statistic converges to a fixed distribution regardless of the choice of ฮถ. Similarly, q_ฮฑ/2(n_โ,V_ฯ n_โ^*,ฮฉ,ฮถ) โ^p min{ฮฑ/2,H_ฮฉ,v(0)}. Define a random variable, R_n_โ,ฮถ := F_n_โ,V_ฯ^*,ฮฉ_n,ฮถ(V_ฯ n_โ-V_ฯ n_โ^*) - q_ฮฑ/2(n_โ,V_ฯ n_โ^*,ฮฉ_n_โ,ฮถ) โ^d U_n_โ - min{ฮฑ/2,H_ฮฉ,v(0)}. By definition, V_ฯ n_โ^* โCI_ฮฑ nโ_ฮถโ{-1,1}{R_n_โ,ฮถโ [0,1-ฮฑ] }.
As n_โโโ,
โ_ฮณ_n_โ( V_ฯ n_โ^* โCI_ฮฑ n k) โฅmax_ฮถโ{-1,1}โ_ฮณ_n_โ( R_n_โ,ฮถโ [0,1-ฮฑ] ) โฅโ_ฮณ_n_โ( R_n_โ,ฮถ_n_โโ [0,1-ฮฑ] )
= โ_ฮณ_n_โ(0โค U_n_โ - min{ฮฑ/2,H_ฮฉ,v(0)}โค 1-ฮฑ) + o(1)
= (1-ฮฑ) + o(1).
Since the limiting distribution doesn't depend on ฮถ, we can apply the continuous mapping theorem to show that R_n_โ^max := min_ฮถโ{-1,1}R_n_โ,ฮถ and R_n_โ^min:=max_ฮถโ{-1,1}R_n_โ,ฮถ both converge to the same limit, U_n_โ - min{ฮฑ/2,H_ฮฉ,v(0)}. As n_โโโ,
โ_ฮณ_n_โk( V_ฯ n_โ^* โCI_ฮฑ n k) โคโ_ฮณ_n_โ( 0 โคR_n_โ^max, R_n_โ^minโค 1-ฮฑ)
= (1-ฮฑ) + o(1).
For this class of subsequences the confidence interval produces exact coverage.
(b) Mild heterogeneity case: Suppose that n_โ V_ฯ n_โ^* โโ and V_ฯ n_โ^* โ V_ฯ^*, where V_ฯโ [0,โ), ฮถ_n_โโฮถ^*, and ฮฉ_n_โโฮฉ. Then we can rescale by โ(n_โ /V_ฯ n_โ^*).
โ(n_โ/V_ฯ n_โ^*)( V_ฯ n_โ^*-V_ฯ n_โ^*)
= (e_1'ฮฉ_n_โ^1/2Z_n_โ)^2/โ(n_โ V_ฯ n_โ^*) + 2 ฮถ_n_โ e_1'(ฮฉ_n_โ^1/2Z_n_โ)+ โ(V_ฯ n_โ^*)(e_2'ฮฉ_n_โ^1/2Z_n_โ) + o_p(1)
= 2 ฮถ^* (e_1'ฮฉ^1/2Z_n_โ)+ โ(V_ฯ^*)(e_2'ฮฉ^1/2Z_n_โ) + o_p(1).
The limiting distribution is normal. For convenience, we write this as โ(n_โ/V_ฯ n_โ^*)( V_ฯ n_โ^*-V_ฯ n_โ^*) = ฯ(ฮถ^*)Zฬ_n_โ + o_p(1), where Zฬ_n_โโผ๐ฉ(0,1) and ฯ(ฮถ)^2 := ฮฉ_11 + V_ฯ^*ฮฉ_22 + ฮถโ(V_ฯ^*)ฮฉ_12 for ฮถโ{1,-1}. Since the norm of [1,ฮถโ(V_ฯ^*)]' is larger than one, it follows that ฯ(ฮถ)^2 โฅฮป_min, where ฮป_min is the smallest eigenvalue of ฮฉ. In this case, the limiting CDF is H_ฮฉ,V _ฯ^*,ฮถ(แนฝ) := ฮฆ(แนฝ/ฯ(ฮถ)) where ฮฆ(ยท) is the CDF of a standard normal. Let z_ฮฑ = ฮฆ^-1(ฮฑ) denote the ฮฑ-quantile, and ฯ(ยท) the density of a standard normal.
โ_ฮณ_n_โ(R_n_โ,ฮถโ [0,1-ฮฑ]) = โ(-ฮฑ/2 โคฮฆ(ฯ(ฮถ^*)/ฯ(ฮถ)Zฬ_n_โ) โค 1-ฮฑ/2) + o(1)
= ฮฆ( ฯ(ฮถ^*)/ฯ(ฮถ)z_1-ฮฑ/2) - ฮฆ( ฯ(ฮถ^*)/ฯ(ฮถ)z_-ฮฑ/2)_ฮบ(ฯ(ฮถ),ฯ(ฮถ^*)) + o(1).
To obtain the lower bound, โ_ฮณ_n_โ( V_ฯ n_โ^* โCI_ฮฑ n k) โฅโ_ฮณ_n_โ( R_n_โ,ฮถ^*โ [0,1-ฮฑ] ) = (1-ฮฑ) + o(1). To obtain the upper bound, we write down a Taylor expansion. There is a ฯฬโฅโ(ฮป_min) between ฯ(ฮถ^*) and ฯ(ฮถ) such that
โฮบ(ฯ_ฮถ,ฯ_ฮถ^*) - (1-ฮฑ) โ โคโ[ฯ(ฯฬz_1-ฮฑ/2/ฯ(ฮถ))z_1-ฮฑ/2 - ฯ(ฯฬz_-ฮฑ/2/ฯ(ฮถ))z_-ฮฑ/2][ฯ(ฮถ^*) - ฯ(ฮถ)]/ฯ(ฮถ)โ
โค|z_-ฮฑ/2|/โ(2ฯฮป_min)โฯ(ฮถ^*) -ฯ(ฮถ) โ .
By another Taylor expansion, โฯ(ฮถ^*)-ฯ(ฮถ)โโค1/2 โ(ฮป_min)โฯ(ฮถ^*)^2 - ฯ(ฮถ)^2โ. If in addition, โโ(V_ฯ*)ฮฉ_12โโค t, then โฯ(ฮถ^*)-ฯ(ฮถ)โโค t and we can define a non-negative function ฮฑฬ(t) = t โ z_-ฮฑ/2โ /(2ฮป_minโ(2ฯ)), which satisfies lim_t โ 0ฮฑฬ(t) = 0. Therefore we have bounded the coverage over an exhaustive class of subsequences of distributions. This satisfies Assumption B in <cit.>. By their Corollary 2.1,
1-ฮฑ โค liminf_n โโ inf_ฮณโฮ โ_ฮณ( V_ฯ^*(ฮณ) โ CI_ฮฑ n) = limsup_n โโ sup_ฮณโฮ โ_ฮณ( V_ฯ^*(ฮณ) โ CI_ฮฑ n) โค 1-ฮฑ + ฮฑฬ(t).
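Although not part of the formal argument, the uniform coverage statement above can be probed by simulation. The sketch below is a schematic Python check in which ci_alpha (standing in for the interval construction referenced in the text) and the entries of dgp_grid are hypothetical placeholders supplied by the user.

```python
import numpy as np

def coverage_rates(dgp_grid, ci_alpha, n=2000, reps=500, alpha=0.05, seed=0):
    """dgp_grid: dict name -> (sample_fn, true_value), where sample_fn(n, rng) draws one dataset."""
    rng = np.random.default_rng(seed)
    rates = {}
    for name, (sample_fn, true_value) in dgp_grid.items():
        hits = 0
        for _ in range(reps):
            data = sample_fn(n, rng)            # one dataset from this DGP
            lo, hi = ci_alpha(data, alpha)      # hypothetical CI routine for the interval in the text
            hits += (lo <= true_value <= hi)
        rates[name] = hits / reps
    # (near-)uniform validity corresponds to min(rates.values()) >= 1 - alpha, up to simulation error
    return rates
```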
Let (S_-k(X_i),M_-k(X_i),W_i) and ฮป(X_i) be the set of regressors and weights, respectively, that were defined in (<ref>). Define Q_ww,nk := ๐ผ[ฮป(X_i)W_iW_i' |โ_-nk] and let ฮ _nk be a 4 ร 4 diagonal matrix with entries (1,1,1,V_xnk^-1/2). If (X_i,D_i) are independent of the data in โ_-nk, then ฮ _nk'Q_ww,nkฮ _nk has the form in (<ref>).
Let V_i = (D_i-p(X_i)). By definition, ฮป(X_i)W_iW_i' is
ฮป(X_i)[ 1 M_-k(X_i) V_i V_iS_-k(X_i); M_-k(X_i) M_-k(X_i)^2 M_-k(X_i)V_i M_-k(X_i) V_iS_-k(X_i); V_i M_-k(X_i) V_i V_i^2 V_i^2S_-k(X_i); V_iS_-k(X_i) M_-k(X_i) V_iS_-k(X_i) V_i^2S_-k(X_i) V_i^2S_-k(X_i)^2 ].
Since (X_i,D_i) are independent of the data in โ_-nk, then ๐ผ[V_i | X_i = x, โ_-nk] does not depend on โ_-nk and is equal to ๐ผ[V_i | X_i = x] = ๐ผ[D_i | X=x] - p(x) = 0. By a similar reasoning, ๐ผ[V_i^2 | X_i = x, โ_-nk] = p(X_i)(1-p(X_i)) = ฮป(X_i)^-1. Using both conditional moment results, we can show that ๐ผ[ฮป(X_i)W_iW_i' | X_i, โ_-nk] is equal to
Q_ww,nk = ๐ผ[ [ ฮป(X_i) ฮป(X_i) M_-k(X_i) 0 0; ฮป(X_i) M_-k(X_i) ฮป(X_i) M_-k(X_i)^2 0 0; 0 0 1 S_-k(X_i); 0 0 S_-k(X_i) S_-k(X_i)^2 ] | โ_-nk].
We substitute ๐ผ[S_-k(X_i) |โ_-nk] = 0 and ๐ผ[S_-k(X_i)^2 |โ_-nk] = V_xnk. Finally, ฮ _nk'Q_ww,nkฮ _nk only normalizes the lower right corner to 1.
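The block structure derived in this lemma can also be checked by simulation. The sketch below uses arbitrary illustrative choices for p, M_-k and S_-k (assumptions made only for this example, not taken from the text) and verifies that the cross blocks of the sample analogue of ๐ผ[ฮป(X_i)W_iW_i'] are approximately zero because the residualized treatment has conditional mean zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.uniform(-1, 1, n)
p = 0.3 + 0.4 * (X > 0)            # propensity score bounded away from 0 and 1
D = rng.binomial(1, p)
V = D - p                          # residualized treatment, E[V | X] = 0
M = 1.0 + X ** 2                   # illustrative stand-in for M_-k(X)
S = X - X.mean()                   # illustrative stand-in for S_-k(X), mean approximately 0
lam = 1.0 / (p * (1 - p))

W = np.column_stack([np.ones(n), M, V, V * S])
Q = (lam[:, None, None] * W[:, :, None] * W[:, None, :]).mean(axis=0)
print(np.round(Q, 2))              # off-diagonal 2x2 blocks are ~0
print(np.round(Q[2:, 2:], 3))      # ~[[1, 0], [0, E[S^2]]], matching the lower-right block
```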
Consider a sequence of distributions {ฮณ_n}_n=1^โ over a collection of random matrices {Z_i1,โฆ,Z_iL}_i โโ_nk where L is a finite constant, Z_iโโโ^Mรโ^B. Define ฮถ_nk := โ_i โโ_-nkโ_โ=1^L Z_iโ. If for all โโ{1,โฆ,L}, (i) the observations are i.i.d. conditional on โ_-nk, (ii) ๐ผ_ฮณ_n[Z_iโ|โ_-nk] = 0_Mร B, and (iii) ๐ผ_ฮณ_n[โ Z_i โโ^2] has a uniform upper bound and ฮณ_n, then as n_k โโ, (a) n_k^-1/2ฮถ_nk = O_p(1), (b) If in addition, ๐ผ_ฮณ_n[โ Z_iโโ^2] = o(1), then n_k^-1/2ฮถ_nk = o_p(1), (c) Now suppose (ii) and (iii) do not necessarily hold, but instead n_k^r๐ผ_ฮณ_n[โ Z_iโโ] = o(1) for all โ and some r > 0, then n_k^r-1ฮถ_nk = o_p(1).
Since M,L,B are all finite, it suffices to consider ฮถ_nkโ mb := โ_i โโ_-nk Z_iโ mb, where Z_iโ mb is the coordinate (m,b) of Z_iโ. By the law of iterated expectations, R_nkโ mb := ๐ผ_ฮณ_n[(ฮถ_nkโ mb-๐ผ_ฮณ_n[ฮถ_nkโ mb|โ_-nk])^2] is equal to ๐ผ_ฮณ_n[๐ผ_ฮณ_n[(ฮถ_nkโ mb-๐ผ_ฮณ_n[ฮถ_nkโ mb|โ_-nk])^2 |โ_-nk]]. Substituting the definition of conditional variance,
R_nkโ mb = ๐ผ_ฮณ_n[๐_ฮณ_n[ฮถ_nk โ mb|โ_-nk]+๐ผ_ฮณ_n[ฮถ_nkโ mb|โ_-nk]^2].
The first term of (<ref>) is O(n_k^-1). By Assumption (i), ฮถ_nkโ mb is a sum of n_k variables that are i.i.d. conditional on โ_-nk, and hence ๐_ฮณ_n[ฮถ_nkโ mb|โ_-nk] = n_k ๐_ฮณ_n( Z_iโ mb|โ_nk) = n_k ๐ผ_ฮณ_n[ Z_iโ mb^2 |โ_nk]. By the law of iterated expectations, ๐ผ_ฮณ_n[๐_ฮณ_n[ฮถ_nkโ mb|โ_-nk]] = n_k๐ผ_ฮณ_n[Z_iโ mb^2]. The second term of (<ref>) is zero by Assumption (ii).
To prove (a), we apply Chebyshev's inequality โ(n_k^-1ฮถ_nkโ mb> t) โค๐ผ_ฮณ_n[Z_iโ mb^2]/t^2 for some t>0. This shows that n_k^-1/2ฮถ_nkโ mb = O_p(1). To prove part (b), we use the condition that ๐ผ_ฮณ_n[Z_iโ mb^2] = o(1) to show that n_k^-1/2ฮถ_nkโ mb = o_p(1). To prove (c), we apply the triangle inequality, โ n_k^r-1ฮถ_nkโ mbโโค n_k^r-1โ_i โโ_-nkโ Z_iโ mbโ. By Markov's inequality, โ(โ n_k^r-1ฮถ_nkโ mbโ > t) โค n_k^r๐ผ[โ Z_iโ mbโ ]/t = o(1), hence n_k^r-1ฮถ_nkโ mb = o_p(1).
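A small numerical illustration of parts (a) and (b), with purely illustrative Gaussian summands: the normalized sum stays stochastically bounded when the second moment is fixed and collapses to zero when the second moment vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
for n in [10**3, 10**4, 10**5, 10**6]:
    Z_fixed = rng.normal(0.0, 1.0, n)              # E[Z^2] = 1: the scaled sum is O_p(1)
    Z_shrink = rng.normal(0.0, n ** (-0.25), n)    # E[Z^2] -> 0: the scaled sum is o_p(1)
    print(n, round(Z_fixed.sum() / np.sqrt(n), 3), round(Z_shrink.sum() / np.sqrt(n), 3))
```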
Let ฯ_i(ฮท) := ฯ(Y_i,D_i,X_i,ฮท), for i โโ_nk, and U_i = Y_i - ๐ผ[Y_i | D_i,X_i]. Suppose that {Y_i,D_i,X_i}_i โโ_nk is independent of {Y_i,D_i,X_i}_i โโ_-nk for all k โ{1,โฆ,K}, and consider a set of ฮทโ๐ฏ where the propensity score is bounded in [ฮด,1-ฮด] for ฮดโ (0,1/2]. Then (a) ๐ผ[โฯ_i(ฮท)/โ(ฮท_1,ฮท_2,ฮท_4)'| X= x,โ_-nk] = [0,0,-2(ฯ(x)-ฯ_av)] almost surely, (b) ๐ผ[ โโฯ_i(ฮท)/โฮท_โโ^4 ]^1/4โค (8/ฮด)(๐ผ[โฮท(X_i)โ^4]^1/4 + ๐ผ[ โ U_i โ^4 ]^1/4), and (c) โโ^2 ฯ_i(ฮทฬ)/โ(ฮท_1,ฮท_2,ฮท_4)โ(ฮท_1,ฮท_2,ฮท_4)'โโค 18 รโ(3)/ฮด almost surely, for ฮทฬโ๐ฏ.
Part (a): The Jacobian of ฯ_i(ฮท), โฯ_i(ฮท)/โ(ฮท_1,ฮท_2,ฮท_4), is
[ 2(ฯ(X_i)-ฯ_av)[1-D_i/p(X_i)] + 2 [ D_i(Y_i - ฮผ_0(X_i)-D_iฯ(X_i))/p(X_i)-(1-D_i)(Y_i - ฮผ_0(X_i))/1-p(X_i)]; 2(ฯ(X_i)-ฯ_av)[ - D_i/p(X_i) + 1-D_i/1-p(X_i)]; -2(ฯ(X_i)-ฯ_av) -2[ D_i(Y_i - ฮผ_0(X_i)-D_iฯ(X_i))/p(X_i)-(1-D_i)(Y_i - ฮผ_0(X_i))/1-p(X_i)] ]
Substituting ๐ผ[D_i | X_i = x,โ_-nk] = p(x) and ๐ผ[D_iY_i | X=x,โ_-nk] = p(x)(ฮผ_0(x)+ฯ(x)), then ๐ผ[โฯ_i(ฮท)/โ(ฮท_1,ฮท_2,ฮท_4)'| X= x,โ_-nk] = [0,0,-2(ฯ(x)-ฯ_av)]'.
Part (b): By construction, U_i = Y_i - ฮผ_0(X_i)-D_iฯ(X_i). Then the Jacobian simplifies to โฯ_i(ฮท)/โ(ฮท_1,ฮท_2,ฮท_4)' = G_1i + G_2i, where G_1i := 2(ฯ(X_i)-ฯ_av)ร [(1-D_i/p(X_i)),(-D_i/p(X_i)+(1-D)/(1-p(X_i))),-1] and G_2i := 2(D_i/p(X_i)-(1-D_i)/(1-p(X_i)))ร [U_i,0,-U_i]. Since p(x) โ [ฮด,1-ฮด], โ G_1iโโค (4/ฮด)โฯ(X_i) - ฯ_avโโค (8/ฮด)โฮท(x) โ and โ G_2iโโค (8/ฮด) โ U_i โ. Therefore, by the triangle inequality, ๐ผ[ โโฯ_i(ฮท)/โฮท_โโ^4 ]^1/4โค (8/ฮด)(๐ผ[โฮท(X_i)โ^4]^1/4 + ๐ผ[ โ U_i โ^4]^1/4).
Part (c): The Hessian of ฯ_i(ฮทฬ), which we denote by H(ฮทฬ), is symmetric and
โ^2 ฯ_i(ฮทฬ)/โ(ฮท_1,ฮท_2,ฮท_4)โ(ฮท_1,ฮท_2,ฮท_4)' =
[ 2[1-D_i/pฬ(X_i)] -2 D_i/pฬ(X_i) ยท ยท; 2[ - D_i/pฬ(X_i) + 1-D_i/1-pฬ(X_i)] 0 ยท; 2[ 1 - D_i/pฬ(X_i)] -2 [ D_i/pฬ(X_i)-(1-D_i)/1-pฬ(X_i)] - 2 ]
Since pฬ(x) โ [ฮด,1-ฮด], then โ D_i/pฬ(X_i) โโค 1/ฮด, โ 1-D_i/pฬ(X_i) โโค (1+1/ฮด) โค 2/ฮด, and โ D_i/pฬ(X_i) - (1-D_i)/(1-pฬ(X_i)) โโค 2/ฮด. This means that each of the entries of H(ฮทฬ) is bounded by 6/ฮด. By Lemma <ref>, โ H(ฮทฬ) โโค 3 ร (6/ฮด) รโ(3) = 18 รโ(3)/ฮด.
Define ฮท_-k,ฮธ_nk(x) as in (<ref>), ฮธ_n := (0,1,ฯ_n,av,1), (ฮป_nk-ฮป_nk):= [(ฮธ_nk-ฮธ_n),ฯ_nk,av(ฮธ_nk-ฮธ_n),(ฯ_nk,av-ฯ_n,av)]', and let {e_j}_j=1^4 be 4 ร 1 vectors with 1 in the j^th coordinate and zero otherwise. Then there exist (x,โ_-nk)-measurable matrices ฮจ_-nk(x), ฮ_-nk(x), such that
ฮท_-k,ฮธ_nk(x) - ฮท(x) = ฮจ_-nk(x)(ฮป_nk-ฮป_nk) + ฮ_-nk(x),
and for some constant C< โ, (a) e_3'[ฮท_-k,ฮธ_nk(x) - ฮท(x)] = 0, (b) e_3'ฮ_-nk(x) = e_4'ฮ_-nk(x) = 0, (c) e_4'ฮจ_-nk = c, for c โโ^9, (d) โฮจ_-nk(x) โโค C ร[1+ 2โฮท_-k(x) โ] a.s., (e) โฮ_-nk(x) โโค C รโฮท_-k(x) - ฮท(x) โ a.s.
Define B(x) := [e_1 (W(x,1) -W(x,0))' +e_2 W(x,0)'].
ฮท_-k,ฮธ_nk(x)' = B(x) + e_3 p_n(x) + e_4 ฯ_nk,av,
ฮท_-k,ฮธ_n(x) = B(x)ฮธ_n + e_3 p_n(x) + e_4 ฯ_nk,av,
ฮท(x) = e_1 ฯ_n(x) + e_2 ฮผ_0n(x) + e_3 p_n(x) + e_4 ฯ_n,av.
The estimation error can be decomposed as
ฮท_-k,ฮธ_nk(x) - ฮท(x) = B(x)(ฮธ_nk-ฮธ_n) + e_4[ฯ_nk,av-ฯ_n,av] + [B(x)'ฮธ_n - e_1ฯ_n(x) - e_2ฮผ_0n(x)].
By definition B(x) = ฮจ_1,-nk(x)+ฯ_nk,avฮจ_2,-nk(x), with auxiliary matrices ฮจ_1,-nk := e_1[0,0,1,ฯ_-k(x)]+e_2 [1,M_-k(X), -p_n(x), -p_n(x)ฯ_-k(x)] and ฮจ_2,-nk := e_1[0,0,0,-1] + e_2 [0,0,0,p_n(x)]. Substituting the parameter ฮธ_n := (0,1,ฯ_n,av,1) and grouping common terms, B(x)'ฮธ_n -e_1 ฯ_n(x) - e_2ฮผ_0n(x)= e_1 [(ฯ_-k(x)-ฯ_n(x)) + (ฯ_n,av-ฯ_nk,av)]+ e_2[M_-k(x) -ฮผ_0n(x) -p_n(x)ฯ_-k(x)+p_n(x)(ฯ_nk,av-ฯ_n,av)]. We can simplify the second term of this expression by substituting M_-k(x) = ฮผ_0,-k(x) + p_n(x)ฯ_-k(x), which produces e_2[(ฮผ_0n(x) -ฮผ_0n(x)) +p_n(x) (ฯ_nk,av-ฯ_n,av)]. Consequently, the second and third terms of (<ref>) can be written as ฮจ_3,-nk(x)(ฯ_nk,av-ฯ_n,av) + ฮ_-nk(x), where ฮจ_3,-nk(x) := -e_1 +e_2p_n(x) + e_4 and ฮ_-nk(x) := e_1(ฯ_-k(x)-ฯ_n(x)) + e_2 (ฮผ_-k(x)-ฮผ_0n(x)).
Define ฮจ_-nk(x) := [ฮจ_1,-nl(x),ฮจ_2,-nk(x),ฮจ_3,-nk(x)] and the parameter error as (ฮป_nk-ฮป_nk) := [(ฮธ_nk-ฮธ_n)',ฯ_nk,av(ฮธ_nk-ฮธ_n)',(ฯ_nk,av-ฯ_n,av)]'. Combining the results,
ฮท_-k,ฮธ_nk(x) - ฮท(x) = ฮจ_-nk(x)(ฮป_nk-ฮป_nk) + ฮ_-nk(x).
Measurability with respect to (x,โ_-nk) can be verified by inspection. Property (a) follows from the fact that the propensity score is known, and (b) because ฮจ_-nk(x) and ฮ_-nk(x) depend on vectors e_1,e_2, which are orthogonal to e_3,e_4. Property (c) follows by the fact that e_4'ฮจ_-nk = [0_1 ร 8,1]. To prove part (d), we apply Lemma <ref> to show that ฮจ_-nk(x) is bounded by 9โ(4) times the largest absolute value of the matrix. Since p_n(x) โค 1, then the largest value is bounded by 1+โฮผ_0,-k(x) โ + โฯ_-k(x) โ which is less than 1+ 2โฮท_-k(x) โ. This means that โฮจ_-nk(x)โโค 12 รโ(4)ร[1+ 2โฮท_-k(x) โ]. To prove, part (e) we once again apply Lemma <ref>. The quantity ฮ_-nk(x) is a 4 ร 1 vector, whose individual entries are bounded by โฮผ_0,-k-ฮผ_0n(x) โ + โฯ_-k(x)-ฯ_n(x) โ, which is weakly less than 2 โฮท_-k(x) - ฮท(x) โ.
Consider a sequence of distributions {ฮณ_n}_n=1^โ that satisfy Assumptions <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>, and that V_ฯ nโ V_ฯ > 0. Define oracle regressors, W_i^* :=[1,ฮผ_0n(X_i) + p_n(X_i)ฯ_n(X_i),(D_i-p_n(x_i)), (D_i-p_n(X_i))(ฯ_n(X_i)-ฯ_av,n)]' and ฮธ_n := ๐ผ_ฮณ_n[ฮป_n(X_i)W_i^*W_i^*']^-1๐ผ_ฮณ_n[ฮป_n(X_i)W_i^*Y_i]. Then (a) ฯ_nk,av-ฯ_n,av = o_p(n_k^-1/4), (b) ฮธ_n = (0,1,ฯ_n,av,1)', and (c) ฮธ_nk - ฮธ_n = o_p(n_k^-1/4).
Part (a): Decompose n_k^1/4(ฯ_nk,av - ฯ_av,n) = n_k^-1/4[ 1/โ(n_k)โ_i โโ_nkฯ_n(X_i)-ฯ_n,av] + n_k^1/4-1โ_i โโ_nk [ฯ_-k(X_i) - ฯ_n(X_i)]. The first term is a centered random variable that is o_p(1). By Assumption <ref>.(iv) and the Cauchy-Schwarz inequality, ๐ผ_ฮณ_n[n_k^1/4โฯ_-k(X_i) - ฯ_n(X_i)โ] โค๐ผ_ฮณ_n[n_k^1/2โฮท_-k(X_i) -ฮท(X_i)โ^2]^1/2 = o(1). By applying Lemma <ref>.(c), n_k^1/4(ฯ_nk,av - ฯ_av,n) = o_p(1).
Part (b): For given {X_i = x,D_i = d}, W_i^*'(0,1,ฯ_n_av,1)' = ฮผ_d(x). Therefore, by applying Lemma <ref>, ฮธ_n = (0,1,ฯ_n,av,1)'.
Part (c): Define Q_ww := 1/n_kโ_i โโ_nkฮป_n(X_i)W_iW_i', Q_ww:= ๐ผ_ฮณ_n[ฮป_n(X_i) W_i^*W_i^*'], M_n(x) = ฮผ_0n(x) + p_n(x)ฯ_n(x). Following similar derivations to Lemma <ref>,
Q_ww = [ ๐ผ_ฮณ_n[ฮป(X_i)] ๐ผ_ฮณ_n[ฮป(X_i)(ฮผ_n(X_i))] 0 0; ๐ผ_ฮณ_n[ฮป(X_i)(ฮผ_n(X_i))] ๐ผ_ฮณ_n[ฮป(X_i)(ฮผ_n(X_i))^2] 0 0; 0 0 1 0; 0 0 0 V_ฯ n; ].
The upper left block has bounded eigenvalues by Assumption <ref>.(i) and V_ฯ n is asymptotically bounded. Therefore Q_ww is positive definite with bounded eigenvalues. Furthermore, Q_ww - Q_ww can be decomposed as:
[ 1/n_kโ_i โโ_nkฮป_n(X_i)W_i^*W_i^*' - ๐ผ_ฮณ_n[ฮป_n(X_i)W_i^*W_i^*'] ] + [ 1/n_kโ_i โโ_nkฮป_n(X_i)(W_iW_i' - W_i^*W_i^*') ].
The first term of (<ref>) is an average of mean-zero random variables and bounded variance, then it is O_p(n_k^-1/2) = o_p(n_k^-1/4). To prove that it has bounded variance, apply Lemma <ref>, then โ W_i^*โโคโ(4)(1+ โฮผ_0n(X_i) โ + โฯ_n(X_i) โ) โคโ(4)(1+ 2โฮท(X_i) โ). Since p_n(x) โ [ฮด,1-ฮด] and ฮป_n(X_i) = [p_n(X_i)(1-p_n(X_i))]^-1, then ๐ผ_ฮณ_n[โฮป_n(X_i) W_i^*โ^2]^1/2โค (1/ฮด^2) ๐ผ_ฮณ_n[โ W_i^*โ^4]^1/4โค (โ(4)/ฮด^2)(1+2 ๐ผ_ฮณ_n[โฮท(X_i) โ^4]^1/4), which is bounded by Assumption <ref>.(ii).
To bound the second term of (<ref>), we apply the triangle inequality, โฮป_n(X_i) (W_iW_i' - W_i^*W_i^*')โโค (1 / ฮด^2)ฯ_i, where ฯ_i := 2 โ W_i^*'โ โฮถ_i โ + โฮถ_i โ^2 and ฮถ_i := W_i - W_i^*. Our goal is to show that 1/n_kโ_i โโ_-nkฯ_i = o_p(n_k^-1/4). Let e_โ be a 4 ร 1 vector with one in the โ^th entry and zero otherwise. We can further decompose ฮถ_i =ฮ_nkฮถ_-nk,1(X_i) + ฮถ_-nk,2(X_i), into components that map into our assumptions: ฮ_nk := (ฯ_nk,av-ฯ_n,av), ฮถ_-nk,1(X_i) := e_4 (D_i-p_n(X_i)) and ฮถ_-nk,2(X_i) := e_1(M_-k(X_i)-M_n(X_i)) + e_2(D_i-p_n(X_i))(ฯ_-k(X_i)-ฯ_n(X_i)). By construction, โฮถ_-nk,1(X_i) โโค 1. Applying the triangle inequality and grouping terms, ฯ_i โคโฮ_nkโ ^2 + โฮ_nkโฯ_i1 + ฯ_i2, where ฯ_i1 := ( 2 โ W_i^*โ + 2 โฮถ_-nk,2โ), and ฯ_i2 := (2 โ W_i^*โ โฮถ_-nk,2(X_i) โ + โฮถ_-nk,2(X_i) โ^2 ). By Assumption <ref>.(iv), ๐ผ_ฮณ_n[โฮถ_-nk,2(X_i) โ^2]^1/2โค 3 ๐ผ_ฮณ_n[โฮท_-k(X_i) - ฮท(X_i) โ^2]^1/2 = o(n_k^-1/4). Therefore, by the Cauchy-Schwarz inequality, ๐ผ_ฮณ_n[ฯ_iโ^2]^1/2 = o(n_k^-1/4) for โโ{1,2}. Then by Lemma <ref>.(c), [ 1/n_kโ_i โโ_-nkฯ_i_โ] = o_p(n_k^-1/4) for โโ{1,2}. Combining terms,
โn_k^1/4/n_kโ_i โโ_-nkฮป_n(X_i)(W_iW_i' - W_i^*W_i^*') โโค1/ฮด^2โ_โ = 0^2 n_k^1/4โฮ_nkโ ^โ -2[ 1/n_kโ_i โโ_-nkฯ_i_โ]= o_p(1).
Define Q_wy := 1/n_kโ_i โโ_nkฮป_n(X_i) W_iY_i and Q_wy:= ๐ผ_ฮณ_n[ฮป_n(X_i)W_i^*Y_i]. We can apply similar arguments as above to show that Q_wy- Q_wy = o_p(n_k^-1/4).
Substituting the definition, ฮธ_n := Q_ww^-1Q_wy and rearranging terms, n_k^1/4(ฮธ_nk -ฮธ_n) = n_k^1/4(Q_ww^-1Q_wy-ฮธ_nk) = Q_ww^-1 n_k^1/4(Q_wy-Q_wy)-n_k^1/4(Q_ww^-1Q_wy-Q_ww^-1Q_wy).
The first term is o_p(1). The second term can be rewritten as n_k^1/4(Q_ww^-1-Q_ww^-1)Q_wy = o_p(1). To prove this, note that โ Q_ww^-1-Q_ww^-1โ =โ Q_ww^-1(Q_ww-Q_ww)Q_ww^-1โโคโ Q_ww^-1โ โQ_ww-Q_wwโ โQ_ww^-1โ. The right-hand side is o_p(n_k^-1/4) since Q_ww has eigenvalues bounded away from zero and Q_ww converges to its true value faster than n_k^-1/4. Combining the results produces n_k^1/4(ฮธ_nk -ฮธ_n) = o_p(1), that is, ฮธ_nk -ฮธ_n = o_p(n_k^-1/4).
Let H be an M ร L matrix and let โ H โ = sup_{z โโ^L: โ z โ = 1}โ Hz โ be the corresponding matrix operator norm. Suppose the absolute value of each individual entry of H is bounded by a constant C. Then โ H โโค LCโ(M).
Let H_m be the m^th row and H_mโ be the (m,โ) entry. Then โ H โ = sup_{z โโ^L: โ z โ = 1}โ(โ_m =1^M [ H_mz]^2 )= sup_{z โโ^L: โ z โ = 1}โ(โ_m = 1^M [ โ_โ =1^L H_mโz_โ]^2 ). Since |z_โ| โค 1 and โ H_mโโโค C, โ H โ = sup_{z โโ^L: โ z โ = 1}โ(โ_m=1^M [ โ_โ=1^L| H_mโ| | z_โ| ]^2 )โค CLโ(M).
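A quick numerical check of this bound (illustrative only; the bound is deliberately crude):

```python
import numpy as np

rng = np.random.default_rng(2)
M, L, C = 5, 7, 3.0
H = rng.uniform(-C, C, size=(M, L))      # entries bounded by C in absolute value
op_norm = np.linalg.norm(H, 2)           # operator (spectral) norm
print(op_norm, L * C * np.sqrt(M), op_norm <= L * C * np.sqrt(M))
```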
|
http://arxiv.org/abs/2306.01421v1
|
20230602102233
|
Convergence analysis of equilibrium methods for inverse problems
|
[
"Daniel Obmann",
"Markus Haltmeier"
] |
math.NA
|
[
"math.NA",
"cs.CV",
"cs.NA"
] |
Convergence analysis of equilibrium methods for inverse problems
Daniel Obmann Markus Haltmeier
================================================================
Recently, the use of deep equilibrium methods has emerged as a new approach for solving imaging and other ill-posed inverse problems. While learned components may be a key factor in the good performance of these methods in practice, a theoretical justification from a regularization point of view is still lacking. In this paper, we address this issue by providing stability and convergence results for the class of equilibrium methods. In addition, we derive convergence rates and stability estimates in the symmetric Bregman distance. We strengthen our results for regularization operators with contractive residuals. Furthermore, we use the presented analysis to gain insight into the practical behavior of these methods, including a lower bound on the performance of the regularized solutions. In addition, we show that the convergence analysis leads to the design of a new type of loss function which has several advantages over previous ones. Numerical simulations are used to support our findings.
Keywords:
inverse problems, regularization, equilibrium points, stability guarantees, stability estimates, convergence, convergence rates, learned reconstruction, neural networks
ยง INTRODUCTION
In various imaging applications it is often not possible to measure the image of interest directly, and it is therefore measured indirectly. Assuming a linear measurement model, recovering the sought-for image โ requires solving the inverse problem
= + z_ฮด ,
where โ is a linear operator between Hilbert spaces modeling the forward problem, z_ฮด is the data perturbation and โ is the noisy data. In many cases problems of the form (<ref>) are ill-posed, meaning that the operator cannot be uniquely and stably inverted. To get reasonable approximate solutions one has to use regularization methods which essentially approximate (<ref>) with a family of neighboring well-posed problems. The prime example is variational regularization <cit.> where approximate solutions are constructed as minimizers of (ยท)-^2/2 +, where is the regularizer and > 0 is a tuning parameter that acts as a tradeoff between data-fitting and stability.
ยง.ยง Equilibrium methods
For quite some time now, there has been a trend towards solving imaging problems by learned components <cit.>. While these methods often achieve superior results compared to classical reconstruction methods, in many cases the theory is still rather underdeveloped. In this paper we are interested in a relatively recent development in this area, where the neighboring problem is defined by certain equilibrium equation; see <cit.> for a recent review. Specifically, in this paper we consider approximate solutions _^ฮด as solutions of the equilibrium equation
_(, ) ^*( - ) + ฮฑ () = 0 ,
where > 0 is a tuning parameter and โ is a potentially learned regularization operator. If = โ is of gradient form, then equationย (<ref>) characterizes critical points of (ยท ) - ^2/2 + ฮฑ. However, in this paper we are interested in the more general case where is not necessarily of gradient form.
Equation (<ref>) can equivalently be written in either of the fixed point forms
= - ฮฒ (^*( - )
+ ())
= (๐ + ฮฒ)^-1( - ฮฒ^*( - ))
where the characterization (<ref>) requires (๐ + ฮฒ) to be invertible. While (<ref>), (<ref>) have been used to find appropriate numerical schemes for solving equationย (<ref>), in this paper we analyzeย (<ref>) rather than specific algorithms for solving it.
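For concreteness, the following minimal Python sketch solves the equilibrium equation in a finite-dimensional setting via the first fixed-point form above. The matrix A, the contractive residual N and the step size are illustrative assumptions chosen so that the iteration is a contraction (compare the step-size condition used later in the existence proof); this is not the algorithm analyzed in the paper.

```python
import numpy as np

def solve_equilibrium(A, y, R, alpha, beta, x0=None, max_iter=50_000, tol=1e-10):
    """Fixed-point iteration x <- x - beta * (A^T (A x - y) + alpha * R(x))."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(max_iter):
        x_new = x - beta * (A.T @ (A @ x - y) + alpha * R(x))
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# illustrative example: R = Id - N with a linear contraction N (Lipschitz constant 0.5)
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 30))
N = 0.5 * np.eye(30)
R = lambda x: x - N @ x
y = A @ rng.normal(size=30)
alpha = 0.5
beta = 1.0 / (np.linalg.norm(A, 2) ** 2 + alpha)   # step size that makes the iteration map contractive
x_alpha = solve_equilibrium(A, y, R, alpha, beta)
print(np.linalg.norm(A.T @ (A @ x_alpha - y) + alpha * R(x_alpha)))   # equilibrium residual, ~0
```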
To the best of our knowledge, theoretical considerations for methods of the form (<ref>), (<ref>), (<ref>) mostly concern the convergence of iterative methods for their solution; see <cit.>. Opposed to this, theoretical questions such as the convergence of solutions of (<ref>) as ฮดโ 0 are still open. We mention here the paper <cit.> which considers an implicit form similar toย (<ref>) motivated by the use of learned denoisers <cit.>.
ยง.ยง Main contributions
In this paper we analyzeย (<ref>) and provide stability and convergence under relatively weak assumptions on . Besides this, we give some additional results about the behavior of these solutions including stability estimates and convergence rates. To be more specific, our main contributions are as follows.
* We show that for a fixed > 0 solutions ofย (<ref>) depend stably on the data . Moreover we show convergence in the sense that for โโ() as ฮดโ 0 and a suitable parameter choice = (ฮด), solutions of (<ref>) converge to solutions of the limiting problem
= and -() โ()^โฅ .
In the special case that = โ is of gradient form, this is the first order optimality condition of the constrained optimization problem _{() | = } defining -minimizing solutions from variational regularizationย <cit.>.
* Under the source-condition () โ(^*) we derive convergence rates in the absolute symmetric Bregman-distance and -.
* We strengthen our results for the case that = ๐ - is a residual operator with contractive residual part ; compare <cit.>. In particular, we derive uniqueness of the regularized problem (<ref>) and the limiting problem (<ref>). Moreover, we show that in this case convergence in the symmetric Bregman-distance is equivalent to convergence in norm. For this specific form we also study the case of inexact solution ofย (<ref>).
* In the context of learning = ๐ -, we provide a-priori lower bounds on () necessary to guarantee that a set of desirable solutions can be recovered with (<ref>) in the limit. Additionally, we provide lower bounds on the approximation error - _^ฮด.
* Based on (<ref>), we propose a novel loss-function for constructing the operator that is independent of iterative methods for approximating solutions ofย (<ref>).
ยง.ยง Overview
We begin the paper with the general theoretical analysis in Sectionย <ref>. In Sectionย <ref> we consider contractive residual parts showing that (<ref>) indeed yields a regularization method, derive lower bounds on its performance and propose a new type of loss-function. Sectionย <ref> provides some numerical simulations to support the theoretical findings. The paper finishes with a brief summary and outlook in Sectionย <ref>.
ยง CONVERGENCE ANALYSIS
Let โ be a linear and continuous mapping between real Hilbert-spaces and and โ be an operator used for regularization. In this section we derive stability, convergence and quantitative estimates
for solutions of (<ref>) under relatively weak assumptions on the operator given below. Recall that g โ is coercive if g() โโ as โโ.
[Conditions for convergence]
* โ z โโฆ(), - z is coercive.
* is weak-to-weak continuous.
If () โโ() is a selection of subgradient of a convex and sub-differentiable function , then (), - zโฅ() - (z). Thus Conditionย <ref> is satisfied whenever is coercive and is weak-to-weak continuous. This shows that the conditions are in fact satisfied by subgradients of convex regularizers which are of particular interest in variational regularization <cit.>. However, as we show later, Conditionย <ref> is also satisfied for a large class of operators that cannot be written as gradient. As an illustrative example we mention here the class of linear operators which are not self-adjoint.
In the following we make frequent use of the convexity of the data fidelity (ยท) - ^2 / 2 which gives the inequality 2 ^*( - ), z - โค z - ^2 - - ^2.
ยง.ยง Stability and convergence
We begin our analysis by deriving stability and convergence for solutions of equationย (<ref>). Conditions for their existence will be discussed later.
Let > 0 and (_k)_kโโ^ with _kโ. Then, any sequence (_k)_kโ satisfying _(_k, _k) = 0 has a weakly convergent subsequence. Moreover, any weak cluster point of (_k)_kโ is a solution of _(, ) = 0.
We begin by showing that the sequence (_k)_k is bounded. By definition of _k and the convexity of the data-fidelity term, for any z โ,
0 = ^* (_k - _k) + (_k), z - _kโค1/2 z - _k^2 - 1/2_k - _k^2 + (_k), z - _k .
Hence, (_k), _k - zโค z - _k^2/2
where z - _k is bounded. By Conditionย <ref>, (_k)_kโ is bounded and by reflexivity of it has a weakly convergent subsequence.
If is a weak cluster point of (_k)_kโ, then taking the limit in equationย (<ref>) and using the weak continuity of _ shows _(, ) = 0.
The next goal is to show convergence of the regularized solutions for ฮดโ 0.
Let โ(), (_k)_kโโ^ satisfy _k - โคฮด_k for ฮด_k โ 0 and let _k = (ฮด_k) with lim_k _k = lim_k ฮด_k^2 / _k = 0. Any sequence (_k)_kโ with __k (_k, _k) =0 has at least one weak cluster point. Any such cluster point is a solution of (<ref>), that is = and () โ()^โฅ. If the solution of (<ref>) is unique, then (_k)_kโ weakly converges to .
Let ^* be any solution of =. By definition of _k and the convexity of the data-fidelity term we have
0 = ^* (_k - _k) + _k (_k), ^* - _k
โค1/2^* - _k^2 - 1/2_k - _k^2 + _k (_k), ^* - _k
โคฮด_k^2/2 + _k (_k), ^* - _k .
Hence 2 (_k), _k - ^*โคฮด_k^2 / _k. The choice of _k and Conditionย <ref> show that (_k)_k is bounded and hence has a weakly convergent subsequence.
Let be the limit of any weakly convergent subsequence denoted again by (_k)_k. By the weak continuity of we have that ((_k))_k is bounded and thus 0 = lim_k __k(_k, _k) = lim_k ^*(_k - _k) + ฮฑ_k (_k) = ^*( - ). Because โ() this shows that is a solution of =. Moreover, for any z_0 โ() we have
-(), z_0 = lim_k -(_k), z_0 = lim_k ^*(_k - _k), z_0/ _k = 0
which gives () โ()^โฅ.
Finally, if the solution of (<ref>) is unique, then any subsequence of (_k)_k has subsequence weakly converging to , which implies that the full sequence weakly converges to .
Theorems <ref> and <ref> show that solutions to the equilibrium equationย (<ref>) are indeed stable and convergent whenever satisfies Conditionย <ref>.
ยง.ยง Quantitative estimates
The next goal is to derive quantitative results in the form of convergence rates and stability estimates. This will be done in the absolute symmetric Bregman-distance , defined by
(, z) = z ,
for , z โ.
Recall that is called monotone if zโฅ 0 for all , z โ; see <cit.> and <cit.> for recent developments in the context of inverse problems. If is monotone, we call (, z) = z= z symmetric Bregman-distance. As the name suggests, the absolute symmetric Bregman-distance is indeed symmetric. If = โ for some convex functional , then the symmetric Bregman-distance is bounded from below by the classical Bregman-distance. As we will see, the absolute symmetric Bregman-distance is quite natural to consider forย (<ref>) and allows making use of the defining property efficiently. Before that, let us briefly discuss a possible interpretation of .
Assume that = is a linear, bounded positive semidefinite not necessarily self-adjoint operator. Then, the symmetric Bregman-distance is non-negative and 0 โค, = ( + ^*) , /2
where ( + ^*)/2 is the projection of onto the set of self-adjoint operators. Since + ^* is self-adjoint and positive semidefinite, there exists with ^* = ( + ^*) and therefore , = , /2 =: _^2/2. Thus, the symmetric Bregman-distance of a linear operator can be interpreted as the square of a weighted norm ยท_^2. In the case that is non-linear but smooth, a similar interpretation can be given around a given point. Locally around z โ, it holds z + hzโD (z) h, h = h_D (z)^2 / 2 where the weight D (z) depends on z and on the local behavior of .
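The identity behind this interpretation, namely that only the symmetric part of a linear regularization operator enters the pairing, can be verified directly; the operator in the following sketch is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
R = np.eye(n) + 0.4 * rng.normal(size=(n, n))     # generic linear operator, not self-adjoint
x, z = rng.normal(size=n), rng.normal(size=n)
d = x - z
lhs = (R @ x - R @ z) @ d                          # <R(x) - R(z), x - z>
rhs = d @ (0.5 * (R + R.T)) @ d                    # <(R + R^T)/2 (x - z), x - z>
print(np.isclose(lhs, rhs))                        # True: the skew-symmetric part drops out
```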
We write โฮด for =(ฮด) if there exist C_1, C_2 > 0 such that C_1 ฮดโคโค C_2 ฮด as ฮดโ 0.
Let โ() and (_k)_kโโ^ be a sequence of data satisfying _k - โคฮด_k with ฮด_k โ 0 and _k โฮด_k.
Let (_k)_k satisfy __k(_k, _k) = 0 and denote by the weak limit of (_k)_k, possibly after restriction to a subsequence (see Theorem <ref>). Assume there exist c,ฮต > 0 such that
โ z โ_ฮต() (z), - zโค c z -
_ฮต() z โ(z) โ(^*) โง(z), z - < ฮต .
Then, the source condition () โ(^*) is sufficient and necessary for
* _k - _k = ๐ช(ฮด_k) for k โโ
* (_k, ) = ๐ช(ฮด_k) for k โโ.
We adapt the proof presented in <cit.>. Let us first assume that the source condition () โ(^*) holds. From the proof of Theoremย <ref> we see 2 _k (_k), _k - โคฮด_k^2.
By assumption on _k we have lim sup_k (_k), _k - โค 0 and hence _k โ_ฮต() for k sufficiently large. For the rest of the proof suppose _k โ_ฮต(). By definition we have (z, ) = z + ฮท() - (z), z -, where ฮท = 0 if zโฅ 0 and ฮท = 2 otherwise. By assumption, () โ(^*) and (z), - zโค c z - for any z โ_ฮต(). Hence, () - (z), z - โค c (z - )
for some c โฅ 0 and (z, ) โคz + c (z - ).
In particular, for z = _k,
(_k, ) โค_k + c (_k - )
By construction of _k, the convexity of the data-fidelity term, the equality = and the assumption - _kโคฮด_k we have
1/2_k - _k^2 + ฮฑ_k (_k), _k -
= 1/2_k - _k^2 + ^* (_k - _k), - _kโค1/2 - _k^2 โค1/2ฮด_k^2 .
By the source condition () โ(^*) it holds
-(), _k -
= ^* w, - _kโค C _k - โค C ( ฮด_k + _k - _k).
Finally, from (<ref>)-(<ref>) and Young's product-inequality we obtain
1/2_k - _k^2 + ฮฑ_k (_k, ) โค1/2ฮด_k^2 + C_1 ฮฑ_k ฮด_k + C_2 ฮฑ_k _k - _k
โค1/2ฮด_k^2 + C_1 ฮฑ_k ฮด_k + C_3 ฮฑ_k^2 + 1/4_k - _k^2 ,
for some constants C_1, C_2, C_3 >0. The rates <ref>, <ref> then follow with ฮฑ_k โฮด_k.
Assume now conversely that <ref>, <ref> hold and define w_k (_k - _k) / _k. Then (w_k)_k โ is bounded and thus has a weakly convergent subsequence (w_k')_k' โ with weak limit w^. Along this subsequence we have -() = lim_k -(_k') = lim_k ^* w_k' = ^* w^ and thus -() โ(^*).
When the source condition () โ(^*) fails, Theorem <ref> implies that (_k, ) cannot converge at rate ฮด_k. However, this does not mean that no convergence rate holds. Instead, if rates hold, these rates have to be slower than ฮด_k. For example, it is easy to construct examples where the convergence rate is exactly ฮด_k^ฮต for some ฮตโ (0,1).
Note that in general, the condition () โ(^*) is stronger than the condition () โ()^โฅ. These conditions do, however, coincide if (^*) is closed, in which case ^ is bounded. Moreover, in this setting, Conditionย (<ref>) holds trivially. As a consequence, in such a case one obtains convergence rates without any further assumptions on , because Theoremย <ref> guarantees -() โ(^*). In particular, (^*) is closed whenever is finite dimensional and hence in this case convergence rates hold.
For any monotone operator, Conditionย <ref> holds whenever -() โ(^*). Indeed, in this case (z), - z = (), - z - zโค^* w, - zโค c ( - z). In particular the convergence rates
<ref>, <ref> hold.
We next derive stability estimates for the class of monotone operators.
Assume is monotone, let > 0, _1, _2 โ
and _1, _2 โ with _(_1, _1) = _(_2, _2) = 0. Then
* (_1 - _2)โค_1 - _2,
*
(_1,_2) โค 1 / (2 ) _1 - _2^2.
By construction of _1, _2 and Young's product inequality,
(_1,_2)
= _1_2
= -^*(_1 - _1) - ^*(_2 - _2), _1 - _2
= -(_1 - _2)^2 + _1 - _2, (_1 - _2)
โค -(_1 - _2)^2 + _1 - _2(_1 - _2)
โค - (_1 - _2)^2/2
+ _1 - _2^2/2 .
Thus (_1 - _2)^2/2 + (_1,_2) โค_1 - _2^2/2 which yields <ref>, <ref>.
The theoretical analysis provided above suggests that monotone operators are reasonable choices for regularizing inverse problems usingย (<ref>).
ยง CONTRACTIVE RESIDUAL OPERATORS
We now turn to a particular case where the regularization operator satisfies
โ, z โ( - ๐)() - ( - ๐)(z)โค L - z,
where ๐ is the identity operator and L < 1. In this case, = ๐ - for some contractive residual part ; compare <cit.>.
In what follows we will show that any withย (<ref>) is monotone and satisfies Conditionย <ref>. Thus, the theory in Sectionย <ref> is applicable. We will show, however, that in this case even stronger results hold. We will consider the case of exact regularization where (<ref>) is solved exactly (see Section <ref>) as well as the inexact case where where (<ref>) is only solved up to a certain accuracy (see Section <ref>) .
Before we continue with our main results, we want to briefly discuss that, in general, variational regularization theory is not applicable. This is because might not be of gradient form and hence the equilibrium points are not necessarily critical points of a Tikhonov functional. Indeed, even for = ^d there are simple vector fields satisfying equationย (<ref>) which are not of gradient form. Concrete instances are operators of the form = ๐ - for some non-symmetric . Consider, for example, a single layer neural network function of the form () = - _2 ฯ(_1 + b). In order to be of gradient form this requires _2 = _1^* and specific choices for ฯ, see for example <cit.>.
Instead, in this paper we focus on the more general case where may not be of gradient form.
Let satisfy (<ref>) with L < 1. Then,
โ, z โ
(1+L) - z^2 โฅ() - (z), - zโฅ (1-L) - z^2.
In particular, is monotone, one-to-one and โ, z โ() - (z)โฅ (1-L) - z.
Let , z โ. From (<ref>) it follows () - (z)โค (1+L) - z. With the Cauchy-Schwarz inequality, we get () - (z), - zโค - z() - (z)โค (1+L) - z^2 which is the left inequality in (<ref>). Byย (<ref>) we further have
() - (z), - z = (() - ) - ((z) - z), - z + - z, - z
โฅ - - z(() - ) - ((z) - z) + - z^2
โฅ -L - z^2 + - z^2 ,
which is the right inequality in (<ref>).
This right inequality implies that is monotone and one-to-one and, with the Cauchy-Schwarz inequality, () - (z)โฅ (1-L) - z.
In the situation of Lemmaย <ref> one can even show that is bijective with Lipschitz-continuous inverse.
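The two-sided bound of the lemma is easy to verify numerically for a linear contractive residual; the construction of N below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n, L = 8, 0.7
B = rng.normal(size=(n, n))
N_mat = L * B / np.linalg.norm(B, 2)     # linear residual with Lipschitz constant exactly L < 1
R = lambda x: x - N_mat @ x              # R = Id - N

for _ in range(1000):
    x, z = rng.normal(size=n), rng.normal(size=n)
    d = x - z
    pairing = (R(x) - R(z)) @ d
    assert (1 - L) * d @ d - 1e-9 <= pairing <= (1 + L) * d @ d + 1e-9
print("two-sided bound holds on all sampled pairs")
```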
ยง.ยง Exact regularization
We start with the case that (<ref>) is solved exactly. Lemmaย <ref> shows is uniformly monotone and suggests thatย (<ref>) behaves similarly to variational regularization with a uniformly convex regularizer. Indeed, the next theorem shows that we get similar results.
Let be weakly continuous and satisfyย (<ref>) with L < 1. Then, the following hold.
* โโ โ > 0 _(, ) = 0 has a unique solution.
* Let (_k)_k โ^ converge to and let (_k)_k satisfy _(_k, _k) = 0. Then, (_k)_k norm-converges to the unique solution of _(, ) = 0.
* Let _1, _2 โ, > 0 and _1, _2 satisfy _(_1, _1) = _(_2, _2)= 0. Then
* (_1 - _2)โค_1 - _2,
* (_1) - (_2), _1 - _2โค 1 / (2 ) _1 - _2^2,
* _1 - _2โคโ(1 / (2 (1-L)))_1 - _2.
* Let (_k)_k โ^ be a sequence converging to โ() with - _kโคฮด_k. Assume = (ฮด) is such that for _k = (ฮด_k) we have lim_k _k = lim_k ฮด_k^2 / _k = 0. For k โ, let _k be the unique solution of __k(, _k) = 0. Then, the sequence (_k)_k converges in norm to the unique solution of = with () โ()^โฅ.
* In the setting of <ref> with _k โฮด_k, the source condition -() โ(^*) is necessary and sufficient for the rates
* _k - _k = ๐ช(ฮด_k),
* (_k) - (), _k - = ๐ช(ฮด_k),
* _k - = ๐ช(โ(ฮด_k)).
The proof extends results derived above. To show <ref>, let , ฮฒ > 0, โ and define () - ฮฒ( ^* ( - ) + () ). For , z โ we have
() - (z) โค((1-ฮฒ) ๐ - ฮฒ^* ) ( - z) + ฮฒ( - ๐)() - ( - ๐)(z)
โค(1-ฮฒ) ๐ - ฮฒ^* - z + ฮฒ L - z .
With ฮฒโค 1 / (^2 + ) we have (1-ฮฒ) ๐ - ฮฒ^* โค (1 - ฮฒ) and hence is Lipschitz-continuous with constant ฮณโค (1 - ฮฒ) + L ฮฒ = 1 - ฮฒ (1 - L) < 1. Uniqueness and existence of a fixed-point of follows from Banach's fixed point theorem which is <ref>. Lemmaย <ref> shows (), - zโฅ (1-L)-z^2 + (z), - z and thus Conditionย <ref> is satisfied whenever L < 1. Theoremย <ref> thus gives weak stability. The stability estimates in <ref> are a consequence of Theoremย <ref> and Lemmaย <ref> and in particular, norm-stability stated in <ref> holds.
For proving <ref>, Theoremย <ref> gives the existence of a weakly convergent subsequence. We next show that the solution of the limiting problem (<ref>) is unique. To that end let satisfy = and -() โ()^โฅ. Then - โ(), () - () โ()^โฅ and thus () - (), - = 0. By Lemmaย <ref>, 0 = () - (), - โฅ (1-L) - ^2 and thus =. Following the proof of Theoremย <ref> we find 2 _k (_k), _k - โคฮด_k^2. By Lemmaย <ref>, (1-L) _k - ^2 โค(_k) - (), _k - โคฮด_k^2/(2 _k) + (), - _k.
Because (_k)_k converges weakly to we get the norm convergence which completes the proof of <ref>. Finally, <ref> follows with Theoremย <ref> and Lemmaย <ref>.
The stability estimates in Theoremย <ref> demonstrate that (<ref>) is stable independent of the noise-distribution, as empirically observed in <cit.>. More precisely, for > 0 fixed, the reconstruction operator defined byย (<ref>) is injective and Lipschitz-continuous. Further note that we derived stability with respect to the norm and the symmetric Bregman-distance. Opposed to the norm estimate, the estimate in the symmetric Bregman-distance is independent of the constant L. Hence, for L close to 1 the stability estimate in the norm has a potentially large constant, whereas the stability estimate for the symmetric Bregman-distance does not depend on this factor. Additionally, it suggests that for L = 1 the derivation of stability estimates requires further assumptions.
As we noted above, any satisfyingย (<ref>) is bijective. Hence the limiting problem (<ref>) can be rewritten as
Find with โ^-1(-()^โฅ) โฉ^-1 ( ) .
In particular, ^-1()^โฅโ is a parametrization of a manifold on which the solutions must lie. Taking the intersection of this manifold with the set of solution will then give the desired solution. Figureย <ref> provides an illustration of this.
Consider the form = ๐ - _() where _() is the projection on () and โ is Lipschitz. Thenย (<ref>) is satisfied if and only if ^*( - ) + _()^โฅ = 0 and _()( - ()) = 0. With the decomposition = _1 + _0 with _1 โ()^โฅ and _0 โ() we find that _1 = (^* + ๐)^-1^* is defined by classical Tikhonov regularization (variational regularization with the regularizer () = ^2/2). The component _0 is then implicitly defined by the equation 0 = _0 - _()(_1 + _0)
= (๐ - _()) _0
- _()_1, which has solution _0 = ^-1(_1) - _1. Hence, the solution operator defined by the equilibrium equation (<ref>) takes the form
_ = ^-1โ_ = ^-1โ ( (^* + ๐)^-1^* ) .
This is an instance of the nullspace networks proposed in <cit.>. In particular, the null space network is defined implicitly by solvingย (<ref>).
ยง.ยง Inexact regularization
We next study the case where the equilibrium equationย (<ref>) is only solved up to a certain accuracy. We analyze the specific case that the source condition is satisfied.
Let be weakly continuous and satisfyย (<ref>) with L < 1. Further, let (_k)_k โ^ converge to = with - _kโคฮด_k โ 0. Assume that (_k)_k satisfies __k(_k, _k)โคฮต_k for some ฮต_k > 0 and consider the parameter choice _k โฮด_k. Then, if the source condition -() โ(^*) holds, for constants C_1, C_2 > 0 we have
_k - โค C_1 โ(ฮด_k) + C_2 ฮต_k/ฮด_k (1-L) .
Denote _k = __k(ยท, _k) and let _k, _k^* satisfy _k(_k)โคฮต_k and _k(_k^*) = 0. By Lemmaย <ref>, ฮฑ_k (1-L) _k - _k^*โค_k(_1) - _k(_k^*)โคฮต_k. With the convergence rates in Theoremย <ref> this shows (<ref>).
Supposeย (<ref>) is solved up to accuracy ฮต > 0 independent of the noise-level ฮด. Lemmaย <ref> then gives the bound _k - โค C_1 โ(ฮด_k) + C_2 ฮต / ((1-L)ฮด_k). Hence, for fixed accuracy ฮต one cannot expect that (_k)_k converges to . Instead, one can expect some form of semi-convergence behavior. Note that the estimate includes the term ฮต / (1-L) which means that for L close to 1 the approximation quality might be significantly reduced.
Consider the setting of Lemmaย <ref> where __k(_k, _k)โคฮด_k ฮท_k with (ฮท_k)_k โ 0. Then, as k โโ,
* _k - = ๐ช(โ(ฮด_k) + ฮท_k)
* _k = ๐ช(ฮด_k + โ(ฮด_k)ฮท_k + ฮท_k^2)
* _k - _k = ๐ช(ฮด_k + ฮท_k โ(ฮด_k)).
Moreover, if ฮท_k = โ(ฮด_k) then _k can be constructed in ๐ช (log(ฮด_k) / ( (L-1) ฮด_k )) iterations using a fixed point iteration applied to _k()= - ฮฒ (^*( - _k) + ()).
The rates for _k - and for the symmetric Bregman-distance follow from Lemmasย andย <ref>. Further, with _k = __k(ยท, _k) we have _k(_k) - _k(_k^*), _k - _k^*โค C ฮด_k ฮท_k^2. By Lemmaย <ref>, (_k - _k^*)^2 โค C ฮด_k ฮท_k^2 and hence _k - _kโคC(ฮด_k + ฮท_k โ(ฮด_k)) as desired. To show the last claim we assume without loss of generality that = 1. According to the proof of Theoremย <ref> the mapping _k has contraction constant ฮณ_k = (1 + _k L ) / (1 + _k ). By Banach's fixed point theorem, the iterates _k^n _k (_k^n-1) satisfy _k^* - _k^nโค C ฮณ_k^n. It is thus sufficient to have C ฮณ_k^n โคโ(ฮท_k)ฮด_k. Rearranging this inequality we find that we need on the order of log(ฮด_k) / log(ฮณ_k) iterations. Since log(ฮณ_k) = log(1 + _k L) - log(1 + _k) โ (L-1) _k โ (L-1) ฮด_k, we thus get ๐ช (log(ฮด_k) / ( (L-1) ฮด_k )) iterations.
Note that in the context of Theoremย <ref> stability is clear, since in this case the reconstruction is given by a finite number of fixed point iterations for a Lipschitz-continuous mapping. However, using a finite number of iterations, the regularized solution depends on the initial value and thus we lose uniqueness. Further note that the same proof can be given for a sequence (ฮท_k)_k with ฮท_k โฅฮท_* > 0. In this case one might not obtain the solution as characterized by Theoremย <ref> but some element close to this solution where the closeness depends on ฮท_*. Additionally, the rates in the data-fidelity term are only of order โ(ฮด).
ยง.ยง Limitations
For the rest of this section we assume that ๐โ is a set of signals of interest. We assume further that is weakly continuous and satisfiesย (<ref>). The set ๐ could, for example, be a set of natural images, such as a set of natural landscapes in a deblurring task. The solution operator for (<ref>) is denoted by _โโฆ_() and defined by ^*( (_()) - ) + ฮฑ (_()) = 0. For โ๐ we refer to as noise-free data and โ with - โคฮด as noisy data. Further, for โ() we denote by = _0() the unique solution of the limiting problem (<ref>) (see Theoremย <ref>). For any closed subspace โ we denote by _ the orthogonal projection onto . The results so far positively answer the questions of stability and the convergence as ฮฑ = ฮฑ(ฮด) โ 0 of solutions of the equilibrium equation (<ref>). In this subsection we aim to address limitations on the reconstruction error for fixed ฮฑ. The derived lower bounds highlight the need for our asymptotic analysis and motivate constructing the regularization operator based on the limiting problem (<ref>) as will be done below.
In the context of equilibrium methods learning aims at fitting regularized solutions _() as close as possible to the underlying signal โ๐ that generated the data . Ideally, if exact data is available, the regularized solution should coincide with . Recall that according to Theoremย <ref> for any โ the following holds:
If - _kโคฮด_k and _k , ฮด_k, ฮด_k^2 / _k โ 0, then __k(_k) - _0 ()โ 0. This justifies the following definition.
An element โ is called -recoverable if _0 =. The signal-class ๐โ is called -recoverable if all elements in ๐ are -recoverable.
The following theorem gives a characterization of -recoverability and provides a necessary condition on the interplay of ๐ and () for this to be possible.
For any ๐โ the following hold.
* ๐ is -recoverable if and only if (๐) โ()^โฅ.
* (๐) โ()^โฅโ L_* := sup__1, _2 โ๐_()(_1 - _2) / _1 - _2 < 1.
* If is a closed linear subspace with ๐โ and _() P_ < 1, then there is a regularization operator with (<ref>) and (๐) โ()^โฅ.
* () โ()^โฅโ_0() - โฅ_()( ) / (1+L).
Item <ref> is an immediate consequence of Theoremย <ref>. If (๐) โ()^โฅ and _1, _2 โ๐, then L _1 - _2โฅ( - ๐)(_1) - ( - ๐)(_2)โฅ_()(_1 - _2) which gives <ref>. If ๐โ where is a closed subspace of with _() P_ < 1, then = ๐ - _()_ satisfies (<ref>) and (๐) โ()^โฅ, which gives <ref>. Finally <ref> follows from the Lipschitz-continuity of . It holds (L+1) _0() - โฅ (_0() ) - ()โฅ_()( ) because (_0()) โ()^โฅ.
Condition <ref> gives a lower bound on the contraction constant L โฅ L_*. For any satisfyingย (<ref>) with L < L_* there is at least one element โ๐ which is not recovered by the limiting problem _0. Notably, this constant solely depends on ๐ and () and thus can be estimated before choosing . While the subspace condition in Itemย <ref> is sufficient for the existence of a regularization operator for ๐ it may fail even in simple examples. Consider for example Figureย <ref>, where the set ๐โ^2 consists of only two points, but the smallest subspace containing ๐ is the whole space.
It is, however, clear that the condition _()(_1 - _2)โค L _1 - _2 can be satisfied, with reasonably small L. This example shows, that the assumption ๐โ for some closed subspace satisfying _()_ < 1 is in general too strict and suggests that non-linear operators should be chosen.
Itemย <ref> in Theoremย <ref> gives a bound on how well a non-recoverable signal can be approximated by solutions ofย (<ref>) for asymptotic case as ฮดโ 0. We next consider the non-asymptotic case.
Let โ and ฮฑ >0. Then, is a solution ofย (<ref>) with data = if and only if () = 0. In particular, at most one element โ๐ can be a solution of equationย (<ref>) with exact data.
The first part is clear. The second part follows, because, according to Lemmaย <ref>, is one-to-one.
Corollaryย <ref> shows that exact recovery usingย (<ref>) and noise-free data is possible for at most one element. If instead we consider the noisy case where = + z_ฮด, then the condition is given by ^* z_ฮด = - (). Clearly, this is only satisfied for specific values of z_ฮด and hence exact recovery is in general not possible for arbitrary noise. Instead, we will always incur some error depending on and .
For any โ and > 0 we have
- _ฮฑโฅ/^2 + (1+L)() .
The mapping โฆ_(, ) is Lipschitz with (_) โค^2 + (1+L). Hence, ( ^2 + (1+L) ) - _ฮฑโฅ_(_ฮฑ, ) - _(, ) = ().
Propositionย <ref> uses the assumption that regularized solutions solveย (<ref>) exactly. In practical applications one typically only has access to near-equilibrium points. Consequently, these lower bounds may be improved. A reasonable question is then how to choose the accuracy of nearly solving the equilibrium equation (<ref>) in order to get the best possible results. While we cannot answer this question at this point in time, we briefly discuss a simple heuristic for choosing the accuracy. If - โคฮด, then _(, )โค C(ฮด + ) which suggests that one can stop at an accuracy ๐ช(ฮด + ). Hence, with the choice โฮด (motivated by Theoremย <ref>) we find that an accuracy on the order of the noise-level ฮด should be chosen. Interestingly, this differs from the choice according to Theoremย <ref>.
ยง.ยง A novel loss-function
Next we address the issue of constructing a regularization operator . For that purpose we assume that there is some underlying distribution โผฯ_๐ supported some set ๐โ according to which the signal โ๐ of interest are drawn. Existing equilibrium methods take as (near) minimizers of
_() ๐ผ_, z_ฮด - _( + z_ฮด; )^2 ,
where the dependence of _ on the regularization operator is now made explicit. Below we discuss how the analysis in Sectionย <ref> guides the design of alternative loss-functions.
We have seen above that the limiting solutions _0() of the equilibrium points _(^ฮด) are uniquely determined. This means that in order to get regularized solutions close to a desired signal โ๐ it is sufficient to enforce _0() =. According to Theoremย <ref>, we have to guarantee that () โ()^โฅ. This in turn is equivalent to _()()^2 = 0. These considerations can be translated to a loss function
_0() ๐ผ__()()^2 ,
that is minimized over a class of weakly continuous mappings โ that satisfyย (<ref>).
As opposed to (<ref>), the proposed loss function (<ref>) does not depend on the regularized reconstruction _ and as a consequence avoids costly iterative algorithms for the fixed point computations. Additionally, it enforces that the equilibrium point method actually approximates desired solutions in ๐; compare Theoremย <ref>.
Finally, the loss-function is independent of the noise-model. This means, that we can avoid choosing a specific noise-model during training which for practical applications is typically not known anyway.
The loss functionย (<ref>) only restricts the behavior of _() but not _()^โฅ. In order to get appropriate behavior on the latter one can simply add a regularization term. In view of Lemmaย <ref> this could for example be done by restricting the norm of () thus arriving at
_ฮป() ๐ผ_[ _()()^2 + ฮป()^2 ] ,
where ฮปโฅ 0 is an appropriately chosen trade-off parameter.
We now want to briefly discuss the connection of (<ref>) and (<ref>) for โ๐. With the data = and - โคฮด we have
_(_(), ) - _(, ), _) -
โฅ(_()) - (), _() - โฅ (1-L) - _()^2 .
Using the Cauchy-Schwarz inequality and the Lipschitz-continuity of _(ยท, ) and the identity _(_(), ) = 0 we find _(_(), ) - _(, )โฅ (1-L) - _() and _(_(), ) - _(, ) = _(, ). Moreover, since = we get _(, ) = () and thus ()โฅ (1-L) - _(). Thus, we find that the loss inย (<ref>) is a combination of a term enforcing recoverability conditions and a term which acts as a surrogate for (<ref>) with exact data.
[Linear case]
Consider the form = ๐ - where is linear and bounded and ๐ satisfies ๐โ for a closed subspace with _()_ < 1. This guarantees -recoverability of ๐ by the explicit choice in Theoremย <ref>. Assume for the time being that we do not know about this explicit construction and consider minimizingย (<ref>). Then
๐ผ__() ()^2 = ๐ผ__()_ - _()^2 .
For this to be minimized we have _() = _()_ for all โ๐. Thus, the loss-functionย (<ref>) yields a construction similar to the one in Theoremย <ref>.
It might differ in the chosen subspace in the sense that a projection onto _0 โ might be selected, but its restriction to ๐ has to be given by _()_. This shows that the proposed loss-function indeed enforces recoverability of the set ๐; compare Theoremย <ref>.
In the context of Exampleย <ref> it is also interesting to note that = ๐ - _()_ is in general not self-adjoint, since the projections _() and _ in general do not commute. In particular, is not of gradient form and thus the methods presented in this paper are different from variational regularization even in the linear case usingย (<ref>).
ยง NUMERICAL SIMULATIONS
In this section we present numerical results supporting the theory derived in this paper using simple practical examples. While extensive numerical simulations, comparisons and further investigations of the proposed loss (<ref>) are clearly of interest, these are out of the scope of this paper.
ยง.ยง General setup
We derive ๐ from the MNIST dataset <cit.> which consists of 70000 gray-scale images of digits of size 28 ร 28. We choose 10000 of these digits for training and another 10000 for testing. The training-set is used exclusively for constructing the operator whereas the test-set is used to perform the numerical simulations.
Throughout we restrict to linear chosen to minimize the proposed lossย (<ref>) which is done by the explicit construction discussed in Exampleย <ref>. To enforce ๐โ we project the MNIST dataset ๐_0 onto an appropriate subspace and set ๐ = _ (๐_0). To this end, we perform a principal component analysis of the training-set and choose q โ such that the first q principal components span a subspace with _()_ < 1. We set = ๐ - _()_ where the evaluation of _() is discussed below for different choices of .
Choosing linear allows for easier testing of the theory. However, this restriction has a notable practical caveat. Following Remarkย <ref> and the discussion in Sectionย <ref>, an ideal choice of would essentially correspond to a parametrization of the given signal-class using ^-1. With linear we parametrize linear manifolds. It is, however, also clear that even simple signal-classes such as MNIST are highly non-linear and as a consequence any linear is in a sense sub-optimal. Nevertheless, as the examples below demonstrate, even linear operators perform well when properly trained.
Noise-free data are computed for โ๐ and noisy data are constructed by adding Gaussian noise with varying standard-deviations to get a desired noise-level ฮด = -. We choose (ฮด) = ฮด and solveย (<ref>) with the built-in PyTorch <cit.> function to obtain regularized solutions _ฮฑ^ฮด = _ฮฑ(ฮด)(). Due to practical limitations we solve equationย (<ref>) only up to a certain accuracy; compare Lemmaย <ref> and Remarkย <ref>. According to Theoremย <ref>, the limiting solution = _0() is uniquely determined by โ^-1(()^โฅ) โฉ^-1 (). Since any solution z โ^-1 () is given by z = + z_0 where z_0 โ(), we have that is the unique z_0 โ() with ( + z_0) โ()^โฅ. Clearly, this is equivalent to _()( +z_0) = 0 which is another linear equation that can be solved for z_0. In our numerical simulations is constructed in this way and convergence of _^ฮด = _() to this solution is tested. The forward operators are described below and closely follow <cit.>.
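To make the described pipeline concrete, the following simplified numpy sketch re-creates it with random stand-in data instead of MNIST and with the inpainting operator of the next subsection: it builds P_S from a principal component analysis, forms the linear regularization operator, and computes the limiting solution by solving the linear equation described above. All specific choices are illustrative assumptions and not the paper's actual experimental code.

```python
import numpy as np

d, q = 784, 256                                     # 28 x 28 images, q principal components
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, d))                  # stand-in for the training images

# inpainting forward operator: zero out the first 10 rows (280 pixels); here P_N(A) = I - A
mask = np.ones(d)
mask[:280] = 0.0
A = np.diag(mask)
P_N = np.eye(d) - A

# P_S: projection onto the span of the first q principal components of the training set
centered = train - train.mean(axis=0)
U = np.linalg.svd(centered.T @ centered)[0][:, :q]
P_S = U @ U.T
R = np.eye(d) - P_N @ P_S                           # linear regularization operator

# limiting solution for exact data y = A x_true: find z0 in N(A) with P_N R (x_true + z0) = 0
x_true = train[0]
M_mat = P_N @ R @ P_N                               # parametrize z0 = P_N w and solve M_mat w = rhs
rhs = -P_N @ R @ x_true
w = np.linalg.lstsq(M_mat, rhs, rcond=None)[0]
x_lim = x_true + P_N @ w
print(np.linalg.norm(A @ (x_lim - x_true)), np.linalg.norm(P_N @ R @ x_lim))   # both ~0
```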
ยง.ยง Inpainting
The first problem we consider is inpainting where the forward operator simply sets the first 10 rows of any image to 0. In this case, _() = ๐ -. To define we use the first q = 256 principal components arriving at a relatively large value _()_ = 0.99997. Simulations are performed with this choice and the results are shown in Figureย <ref> and Figureย <ref>. Figureย <ref> shows the original signal โ๐ (top left), the simulated noise-free data (top right), the limiting solution (bottom left) and an example of a regularized solution (bottom right). One can see that the original signal and the limiting solution are indeed the same; see the discussions in Sectionย <ref>.
Figureย <ref> shows numerical convergence rates for the symmetric Bregman-distance _ฮฑ^ฮด (left) and the residual _ฮฑ^ฮด - (right). In both images we display the reference line ฮดโฆฮด in black and ฮผ(ฮด) ยฑฯ(ฮด) where ฮผ(ฮด) is the average and ฯ(ฮด) the standard-deviation of these quantities calculated on the test-set (note that due to the small standard-deviation only the mean value can be seen). One can see that the residual closely follows the reference line as expected. The symmetric Bregman-distance, however, shows an even better convergence rate than the one given by Theoremย <ref>. Note that the symmetric Bregman-distance is essentially the square of a norm (see Exampleย <ref>) and hence we numerically find that convergence in this norm is of order ๐ช(ฮด), improving upon the theoretically proven rate ๐ช(ฮด^1/2).
ยง.ยง Deblurring
The second problem we consider is deblurring where, similar to <cit.>, we take the convolution operator to mimic a diagonal motion blur, for which we choose the 2D convolution kernel as the 5 ร 5 identity matrix. To get an approximation of _() we consider the singular-value decomposition of and define _() to be the projection onto the subspace spanned by the singular vectors corresponding to singular values below 10^-12. We define the subspace by the first q = 512 principal components which results in a value of _()_ = 0.8958.
Simulation results are shown in Figureย <ref> and Figureย <ref>. Figureย <ref> shows the original signal โ๐ (top left), the simulated noise-free data (top right), the limiting solution (bottom left) and an example of a regularized solution (bottom right). Similar to the inpainting example, the original signal and the limiting solution _ coincide. Figureย <ref> visualizes convergence rates in the symmetric Bregman-distance _ฮฑ (left) and the residual _ฮฑ - (right). As in the inpainting example we show a reference line ฮดโฆฮด in black and ฮดโฆฮผ(ฮด) ยฑฯ(ฮด) where ฮผ(ฮด) is the average distance measure and ฯ(ฮด) is the standard-deviation of these distances for the test-set.
One can see that the residual closely follows the reference line as expected. However, in the symmetric Bregman-distance we obtain a semi-convergent behavior. This might be due to the fact that we solve equationย (<ref>) with fixed finite accuracy only and hence convergence cannot be expected. It is again interesting to note that, numerically, in the convergent regime (ฮดโฅ 10^-8) a faster convergence rate than the one proven in Theoremย <ref> can be observed.
ยง CONCLUSION AND OUTLOOK
We presented a convergence analysis for solving inverse problems with the equilibrium equationย (<ref>), including the derivation of stability estimates and convergence rates. This analysis was strengthened in the case where the regularization operator satisfies (<ref>). In particular, we derived the limiting problem (<ref>) for ฮฑโ 0, which in particular led to a new loss function for training the regularization operator . We have further shown that, for finite ฮฑ, the equilibrium equation has limited performance on any given signal-class ๐. The results in our paper raise many new questions for future research. One such potential direction of research is to weaken (<ref>) in a way that does not suffer from the lower bound. Other directions could include the analysis of the novel loss-function provided in Sectionย <ref>, its extension and the comparison with existing ones. Deriving regularization methods different from (<ref>) based on (<ref>) is another promising research direction.
10
arridge2019solving
S.ย Arridge, P.ย Maass, O.ย รktem, and C.-B. Schรถnlieb.
Solving inverse problems using data-driven models.
Acta Numerica, 28:1โ174, 2019.
chen2023imaging
D.ย Chen, M.ย Davies, M.ย J. Ehrhardt, C.-B. Schรถnlieb, F.ย Sherry, and
J.ย Tachella.
Equivariant deep learning: From unrolled network design to fully
unsupervised learning.
IEEE Signal Processing Magazine, 40(1):134โ147, 2023.
ebner2022plug
A.ย Ebner and M.ย Haltmeier.
Plug-and-play image reconstruction is a convergent regularization
method.
arXiv:2212.06881, 2022.
EngHanNeu96
H.ย W. Engl, M.ย Hanke, and A.ย Neubauer.
Regularization of inverse problems, volume 375 of Mathematics and its Applications.
Kluwer Academic Publishers Group, Dordrecht, 1996.
gilton2021deep
D.ย Gilton, G.ย Ongie, and R.ย Willett.
Deep equilibrium architectures for inverse problems in imaging.
IEEE Transactions on Computational Imaging, 7:1123โ1133, 2021.
jin2017deep
K.ย H. Jin, M.ย T. McCann, E.ย Froustey, and M.ย Unser.
Deep convolutional neural network for inverse problems in imaging.
IEEE Transactions on Image Processing, 26(9):4509โ4522, 2017.
kamilov2023plug
U.ย S. Kamilov, C.ย A. Bouman, G.ย T. Buzzard, and B.ย Wohlberg.
Plug-and-play methods for integrating physical and learned models in
computational imaging: Theory, algorithms, and applications.
IEEE Signal Processing Magazine, 40(1):85โ97, 2023.
lecun1998gradient
Y.ย LeCun, L.ย Bottou, Y.ย Bengio, and P.ย Haffner.
Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278โ2324, 1998.
li2020nett
H.ย Li, J.ย Schwab, S.ย Antholzer, and M.ย Haltmeier.
NETT: Solving inverse problems with deep neural networks.
Inverse Probl., 36(6):065005, 2020.
lunz2018adversarial
S.ย Lunz, O.ย รktem, and C.-B. Schรถnlieb.
Adversarial regularizers in inverse problems.
In Advances in Neural Information Processing Systems, pages
8507โ8516, 2018.
mccann2017convolutional
M.ย T. McCann, K.ย H. Jin, and M.ย Unser.
Convolutional neural networks for inverse problems in imaging: A
review.
IEEE Signal Processing Magazine, 34(6):85โ95, 2017.
meinhardt2017learning
T.ย Meinhardt, M.ย Moller, C.ย Hazirbas, and D.ย Cremers.
Learning proximal operators: Using denoising networks for
regularizing inverse imaging problems.
In Proceedings of the IEEE International Conference on Computer
Vision, pages 1781โ1790, 2017.
obmann2023convergence
D.ย Obmann and M.ย Haltmeier.
Convergence rates for critical point regularization.
arXiv:2302.08830, 2023.
pytorch
A. Paszke, S. Gross, F. Massa, et al.
Pytorch: An imperative style, high-performance deep learning library.
In Advances in Neural Information Processing Systems 32, pages
8024โ8035. Curran Associates, Inc., 2019.
pramanik2022stable
A.ย Pramanik and M.ย Jacob.
Stable and memory-efficient image recovery using monotone operator
learning (MOL).
arXiv:2206.04797, 2022.
riccio2022regularization
D.ย Riccio, M.ย J. Ehrhardt, and M.ย Benning.
Regularization of inverse problems: Deep equilibrium models versus
bilevel learning.
arXiv:2206.13193, 2022.
rockafellar1976monotone
R.ย T. Rockafellar.
Monotone operators and the proximal point algorithm.
SIAM journal on control and optimization, 14(5):877โ898, 1976.
romano2017little
Y.ย Romano, M.ย Elad, and P.ย Milanfar.
The little engine that could: Regularization by denoising (red).
SIAM Journal on Imaging Sciences, 10(4):1804โ1844, 2017.
ryu2019plug
E.ย Ryu, J.ย Liu, S.ย Wang, X.ย Chen, Z.ย Wang, and W.ย Yin.
Plug-and-play methods provably converge with properly trained
denoisers.
In International Conference on Machine Learning, pages
5546โ5557. PMLR, 2019.
Ryu2015APO
E.ย K. Ryu and S.ย P. Boyd.
A primer on monotone operator methods.
2015.
scherzer2009variational
O.ย Scherzer, M.ย Grasmair, H.ย Grossauer, M.ย Haltmeier, and F.ย Lenzen.
Variational methods in imaging, volume 167 of Applied
Mathematical Sciences.
Springer, New York, 2009.
schwab2019deep
J.ย Schwab, S.ย Antholzer, and M.ย Haltmeier.
Deep null space learning for inverse problems: convergence analysis
and rates.
Inverse problems, 35(2):025008, 2019.
sun2019online
Y.ย Sun, B.ย Wohlberg, and U.ย S. Kamilov.
An online plug-and-play algorithm for regularized image
reconstruction.
IEEE Transactions on Computational Imaging, 5(3):395โ408,
2019.
tan2023provably
H.ย Y. Tan, S.ย Mukherjee, J.ย Tang, and C.-B. Schรถnlieb.
Provably convergent plug-and-play quasi-Newton methods.
arXiv:2303.07271, 2023.
|
http://arxiv.org/abs/2306.04528v2
|
20230607153700
|
PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
|
[
"Kaijie Zhu",
"Jindong Wang",
"Jiaheng Zhou",
"Zichen Wang",
"Hao Chen",
"Yidong Wang",
"Linyi Yang",
"Wei Ye",
"Neil Zhenqiang Gong",
"Yue Zhang",
"Xing Xie"
] |
cs.CL
|
[
"cs.CL",
"cs.CR",
"cs.LG"
] |
PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, Xing Xie
June 7, 2023
=====================================================================================
The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts.
In response to this vital need, we introduce PromptBench, a robustness benchmark designed to measure LLMs' resilience to adversarial prompts.
This study uses a plethora of adversarial textual attacks targeting prompts across multiple levels: character, word, sentence, and semantic.
These prompts are then employed in diverse tasks, such as sentiment analysis, natural language inference, reading comprehension, machine translation, and math problem-solving.
Our study generates 4,032 adversarial prompts, meticulously evaluated over 8 tasks and 13 datasets, with 567,084 test samples in total.
Our findings demonstrate that contemporary LLMs are vulnerable to adversarial prompts.
Furthermore, we present comprehensive analysis to understand the mystery behind prompt robustness and its transferability.
We then offer insightful robustness analysis and pragmatic recommendations for prompt composition, beneficial to both researchers and everyday users.
We make our code, prompts, and methodologies to generate adversarial prompts publicly accessible, thereby enabling and encouraging collaborative exploration in this pivotal field: <https://github.com/microsoft/promptbench>.
ยง INTRODUCTION
Large language models (LLMs) have gained increasing popularity owing to their unprecedented performance in various downstream tasks such as sentiment analysisย <cit.>, question answeringย <cit.>, and logical reasoningย <cit.>.
Promptsย <cit.>, serving as the bridge between humans and LLMs, enable in-context learningย <cit.> in an autoregressive manner.
However, LLMs are known to be highly sensitive to promptsย <cit.>: e.g., the order of few-shot examples, minor typos, or different expressions with the same semantic meaning can lead to qualitatively different results.
Given the popular adoption of LLMs in both academia and industry, particularly in safety-critical and decision-making domains, it becomes essential to examine the robustness of LLMs to prompts, understand the factors that contribute to their robustness (or lack thereof), and identify the key attributes of robust prompts.
Recent studies evaluated LLMs from various aspects including natural language processingย <cit.>, ethicsย <cit.>, robustnessย <cit.>, and educationย <cit.>.
Particularly for robustness evaluation, Wang et al.ย <cit.> evaluated ChatGPT and other LLMs from the adversarial and out-of-distribution (OOD) perspective using existing adversarial text benchmarks such as AdvGLUEย <cit.> and ANLIย <cit.>.
Zhuo et al.ย <cit.> evaluated the robustness on semantic parsing.
Yang et al.ย <cit.> evaluated OOD robustness by extending the GLUE <cit.> dataset.
Nevertheless, existing evaluations often overlook the robustness of prompts - the instructions provided to LLMs to guide in-context learning.
Since a single prompt can be applied to a multitude of tasks, its robustness is of pivotal importance to LLMs.[We delineate the input to LLMs as `prompt+sample', where prompt is the instruction and sample is the input test data. This paradigm is commonly employed in various LLM applications.
For further details, see Sec.ย <ref>.]
In this paper, we introduce PromptBench, a comprehensive benchmark designed for assessing the robustness of LLMs to adversarial prompts (ย <ref>).
PromptBench stands out with its ability to dynamically construct adversarial prompts, which are then combined with clean samples to generate adversarial inputs, with the key concepts prompt, sample, and input depicted in ย <ref>.
In particular, an adversarial prompt can be used together with many samples.
This approach, contrasting with the prevalent use of static, pre-computed adversarial samplesย <cit.>, ensures a broader and more diverse set of adversarial inputs for each LLM.
PromptBench consists of prompts, attacks, models, tasks, and datasets.
We evaluate 4 types of prompts: zero-shot (ZS), few-shot (FS), role-oriented, and task-oriented prompts.
We create 4 tiers of attacks: character-level, word-level, sentence-level, and semantic-level attacks by adopting adversarial attack approaches.
Note that they are not only attacks, but also serve as ideal testbeds for mimicking potential diverse prompts from real users.
Our analysis spans across 8 prevalent LLMs, ranging from smaller models such as Flan-T5-large <cit.> to larger ones like ChatGPT <cit.>.
We adopt 8 tasks for evaluation, namely, sentiment analysis (SST-2ย <cit.>), grammar correctness (CoLAย <cit.>), duplicate sentence detection (QQPย <cit.> and MRPCย <cit.>), natural language inference (MNLIย <cit.>, QNLIย <cit.>, RTEย <cit.>, and WNLIย <cit.>), multi-task knowledge (MMLUย <cit.>), reading comprehension (SQuAD V2ย <cit.>), translation (UN Multiย <cit.> and IWSLT 2017ย <cit.>), and math problem-solving (Mathematicsย <cit.>).
PromptBench is a flexible benchmark that supports all open-sourced and proprietary LLMs.
In total, we created 4,032 adversarial prompts and 567,084 test samples, representing diverse, practical, and challenging scenarios.
We carry out extensive experiments and analyses using PromptBench.
The results highlight a prevailing lack of robustness to adversarial prompts among current , with word-level attacks proving the most effective (33% performance drop).
We delve into the reasons behind this vulnerability by exploring LLMs' attention weights on each input word and the erroneous responses associated with clean and adversarial inputs.
Our findings reveal that adversarial prompts cause LLMs to shift their focus towards adversarial elements, thus responding with either wrong answers or meaningless sentences.
We also examine the transferability of adversarial prompts between models, and suggest a successful transferability of adversarial prompts from one LLM to another.
Furthermore, we analyze word frequency patterns to guide future research in improving robustness and to aid end-users in crafting more robust prompts.
We conclude by discussing potential strategies for robustness enhancement.
To concisely encapsulate, the contributions of this paper can be categorized into four main areas:
* We introduce PromptBench, a pioneering benchmark for evaluating the adversarial robustness of prompts in LLMs. The design of PromptBench ensures its versatility, enabling applications to novel tasks, models, and scenarios.
* Utilizing , we execute an exhaustive analysis of the robustness of LLMs against adversarial prompts, furnishing visual explanations for the observed vulnerabilities and assessing the transferability of adversarial prompts.
* Drawing on our analysis of word frequency, we offer practical guidance for downstream users and prompt engineers to craft more robust prompts.
* In an effort to stimulate future research on prompt robustness, we make our code, compiled prompts, and evaluation benchmark available to the public. Additionally, we build a visualization website (Appendixย <ref>)[<https://github.com/microsoft/promptbench/tree/main/adv_prompts>] to allow for easy exploration of adversarial prompts.
ยง PROMPTBENCH
We introduce the basic modules of PromptBench: prompts, models, tasks, datasets, and attacks.
ยง.ยง Types of prompts and models
We investigate four different types of prompts categorized based on their intended purpose and the amount of labeled samples they require.
Appendixย <ref> presents examples of these prompts.
Task-oriented prompts explicitly describe the task that the model is required to perform, encouraging the model to generate task-specific outputs based solely on its pre-training knowledge.
Role-oriented prompts, in contrast, typically frame the model as an entity with a specific role, such as an expert, advisor, or translator.
By incorporating role information, these prompts aim to implicitly convey the expected output format and behavior.
Each of the two categories of prompts is designed for both zero-shot (ZS) and few-shot (FS) learning scenarios. In the few-shot scenario, these prompts not only convey the task requirements but also demonstrate the expected output format and structure through several examples. In our experiments, we randomly select three examples from the training set of a task and append them to a prompt.
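As a concrete illustration, a few-shot prompt can be assembled as in the sketch below; the example template and all names are our own assumptions rather than the exact format used in the benchmark.

```python
import random

def build_few_shot_prompt(base_prompt, train_set, k=3, seed=0):
    """Append k randomly chosen labeled examples to a zero-shot prompt.

    train_set: list of (sentence, label) pairs. The "Sentence:/Answer:"
    template is illustrative, not the paper's verbatim format.
    """
    rng = random.Random(seed)
    demos = rng.sample(train_set, k)
    lines = [base_prompt.strip()]
    for sent, label in demos:
        lines.append(f"Sentence: {sent}\nAnswer: {label}")
    lines.append("Sentence: {input}\nAnswer:")
    return "\n\n".join(lines)

# Usage with a toy SST-2-style training set
sst2_train = [("a gorgeous, witty film.", "positive"),
              ("the plot is paper-thin.", "negative"),
              ("an utterly joyless experience.", "negative"),
              ("quietly moving from start to finish.", "positive")]
prompt = build_few_shot_prompt(
    "Classify the sentiment of the following sentence as 'positive' or 'negative':",
    sst2_train)
print(prompt.format(input="I am happy today"))
```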
In our evaluation, we include a diverse set of LLMs to comprehensively assess their performance across various tasks and domains. The models we consider are as follows: Flan-T5-large <cit.> (0.8B), Dolly-6B <cit.>, LLaMA-13B <cit.>, Vicuna-13B <cit.>, Cerebras-GPT-13B <cit.>, GPT-NEOX-20B <cit.>, Flan-UL2 (20B) <cit.>, and ChatGPT.[We did not receive the GPT-4 API during this research, and thus cannot perform evaluation using GPT-4.] By incorporating LLMs with different architectures and sizes, we aim to provide insights into their strengths, weaknesses, and robustness, ultimately informing the choice of model for a specific application or use case.
Details of these are in Appendixย <ref>.
ยง.ยง Tasks and datasets
consists of diverse tasks with public datasets (details are in Appendixย <ref>):
* Sentiment analysis: we adopt the ย <cit.> dataset from the GLUEย <cit.> dataset.
* Grammar correctness: we adopt the ย <cit.> dataset from the GLUE dataset.
* Duplicate sentence detection: we adopt the ย <cit.> and ย <cit.> datasets from GLUE.
* Natural language inference: ย <cit.>, ย <cit.>, ย <cit.>, and ย <cit.> from GLUE.
* Multi-task knowledge: we adopt the MMLU datasetย <cit.> which evaluates world knowledge and problem-solving abilities through 57 tasks with multiple-choice questions from diverse domains.
* Reading comprehension: we adopt the SQuAD V2 datasetย <cit.>. SQuAD V2 enhances the original SQuAD dataset for machine reading comprehension by introducing unanswerable questions.
* Translation: we adopt UN Multi<cit.> and IWSLT 2017ย <cit.> datasets. UN Multi evaluates ' ability to translate official documents, while IWSLT 2017 evaluates spoken language translation.
* Math problem-solving: we adopt the Mathย <cit.> dataset, which evaluates ' mathematical reasoning abilities across a diverse range of problems, such as algebra, arithmetic and comparison.
ยง.ยง Attacks
Textual adversarial attacks are popular in AI robustness researchย <cit.>.
Technically speaking, given a dataset 𝒟 = {(x_i, y_i)}_{i ∈ [N]}, where x represents the sample and y denotes the ground-truth label, a textual adversarial attack aims to attack an LLM f_θ by perturbing each sample x with δ under a certain budget 𝒞: max_{δ ∈ 𝒞} ℒ[f_θ(x+δ), y], where ℒ represents the loss function.
Prompt attack.
In this paper, our focus is to attack the prompts rather than samples.
This is due to the popularity of LLMs in different applications, which generate responses using in-context learning on prompts (i.e., instructions) and samples.
We define an input to an LLM to be the combination of a prompt P and a sample x: [P, x], where [ , ] denotes the concatenation operation.
For instance, in sentiment analysis, P could be โPlease classify the following sentence into either positive or negative: โ and x could be โI am happy todayโ.
We argue that various applications can be formulated in this manner where P is indispensable.[In fact, x can be missing but P is indispensable. For instance, in a story generation task, P = "Please write a story about country love" is necessary but x is not needed. This makes our study on prompt robustness more useful in real applications.]
Thus, a non-robust prompt could result in unexpected behaviours of LLMs, especially in safety-critical applications.
It is essential to investigate the robustness of prompts since they are common instructions to various tasks.
Given an LLM f_θ, a dataset 𝒟, and a clean prompt P, the objective of a prompt attack can be formulated as follows:
max_{δ ∈ 𝒞} E_{(x; y) ∈ 𝒟} ℒ[f_θ([P+δ, x]), y],
where δ is the textual perturbation added to the clean prompt P and 𝒞 is the allowable perturbation set, i.e., the perturbation constraint. This attack is analogous to universal adversarial perturbation (UAP) <cit.> and universal adversarial trigger (UAT) <cit.>, extending these concepts to the realm of prompts.
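In practice this objective is approached with black-box search rather than gradients; the schematic greedy loop below is one way to instantiate it, with `model`, the candidate perturbations, and the label-word constraint all as illustrative placeholders.

```python
def prompt_attack(model, dataset, clean_prompt, candidate_perturbations, label_words):
    """Greedy black-box instantiation of the prompt-attack objective (illustrative only).

    candidate_perturbations: callables mapping a prompt string to a perturbed prompt.
    label_words: task-essential words that must survive the perturbation
    (the LabelConstraint described in the appendix).
    """
    def accuracy(prompt):
        correct = 0
        for x, y in dataset:
            pred = model(f"{prompt} {x}")          # input [P, x]: prompt concatenated with sample
            correct += int(pred == y)
        return correct / len(dataset)

    clean_acc = accuracy(clean_prompt)
    best_prompt, best_acc = clean_prompt, clean_acc
    for perturb in candidate_perturbations:
        adv = perturb(clean_prompt)
        if any(w not in adv for w in label_words):  # keep task-essential words intact
            continue
        acc = accuracy(adv)
        if acc < best_acc:                          # maximizing the loss ~= minimizing accuracy
            best_prompt, best_acc = adv, acc
    return best_prompt, clean_acc, best_acc
```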
Different attacks.
We then modify existing black-box textual attacks to implement Eq.ย (<ref>), owing to their efficiency and lack of reliance on model gradients.
Our instantiations span four distinct levels, capturing a broad spectrum of complexities from simple character manipulations to sophisticated semantic alterations (detailed examples of each attack are in Appendixย <ref>; toy versions of two of these perturbations are sketched after the list):
* Character-level: We employ TextBuggerย <cit.> and DeepWordBugย <cit.>, which manipulate texts by introducing typos or errors to words, e.g., by adding, deleting, repeating, replacing, and permuting characters for certain words.
* Word-level: We utilize BertAttackย <cit.> and TextFoolerย <cit.>, which aim to replace words with synonyms or contextually similar words to deceive LLMs.
* Sentence-level: We implement StressTestย <cit.> and CheckListย <cit.>, which append irrelevant or extraneous sentences to the end of prompts, intending to distract LLMs. For instance, in the CheckList attack, we generate 50 random sequences of alphabets and digits.
* Semantic-level: We simulate the linguistic behavior of people from different countries by choosing six common languages (Chinese, French, Arabic, Spanish, Japanese, and Korean) and constructing ten prompts for each language per dataset. These prompts are then translated into English, introducing linguistic nuances and variations that could potentially impact LLMs.
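To make the levels concrete, here is a toy sketch of a character-level swap (in the spirit of DeepWordBug) and a word-level synonym substitution (in the spirit of TextFooler); the function names, the tiny synonym table, and the protected-word handling are our own stand-ins for the TextAttack recipes actually used.

```python
import random

def char_swap(prompt, protected=(), n_words=2, seed=0):
    """Toy character-level perturbation: swap two adjacent characters in a few non-protected words."""
    rng = random.Random(seed)
    words = prompt.split()
    candidates = [i for i, w in enumerate(words)
                  if len(w) > 3 and w.lower().strip("'\",.:") not in protected]
    for i in rng.sample(candidates, min(n_words, len(candidates))):
        w = words[i]
        j = rng.randrange(1, len(w) - 2)
        words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

def synonym_substitute(prompt, synonyms, protected=()):
    """Toy word-level perturbation: replace words using a small synonym table, skipping protected words."""
    out = []
    for w in prompt.split():
        key = w.lower().strip("'\",.:")
        out.append(synonyms.get(key, w) if key not in protected else w)
    return " ".join(out)

clean = "Classify the sentiment of the following sentence as 'positive' or 'negative':"
print(char_swap(clean, protected={"positive", "negative"}))
print(synonym_substitute(clean, {"classify": "categorize", "sentence": "statement"},
                         protected={"positive", "negative"}))
```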
Semantic-preserving of adversarial prompts.
Are adversarial prompts realistic?
We conducted a human test using constructed adversarial prompts in Appendixย <ref>.
The results in ย <ref> demonstrate that these generated prompts are at least 70% acceptable by humans, indicating that our attack is realistic and meaningful.
ยง EXPERIMENTS
Setup.
Generating a single adversarial prompt is computationally demanding, requiring on average about 100 passes over the entire dataset.
A full-dataset evaluation, particularly on ChatGPT, is therefore infeasible.
We thus adopt a sampling strategy that entails selecting a subset of samples from the validation or test sets across various datasets.
The detailed sampling strategy is presented in Appendixย <ref> and the statistics of each dataset and task are summarized in ย <ref>.[In ย <ref>, the last column denotes the total evaluation size for each dataset. For instance, there are 872 test samples in the SST-2 dataset and each sample goes through 4 prompt types, with 3 prompts per type and 7 adversarial attacks, thus the test size is 872 × 4 × 3 × 7 = 73,248.]
We initially assess the performance of all LLMs without prompt attacks to provide a performance baseline; this indicates that certain LLMs do not even demonstrate satisfactory performance with clean prompts, narrowing our selection to four: T5, Vicuna-13B, UL2, and ChatGPT.
Further details and discussions on clean prompt performance across all LLMs are available in Appendixย <ref> and <ref>.
Statistics of datasets used in this paper.
Task | Dataset | #Sample | #Class | #[Adv. prompt, sample]
Sentiment analysis | SST-2 | 872 | 2 | 73,248
Grammar correctness | CoLA | 1,000 | 2 | 84,000
Duplicate sentence detection | QQP | 1,000 | 2 | 84,000
Duplicate sentence detection | MRPC | 408 | 2 | 34,272
Natural language inference | MNLI | 1,000 | 3 | 84,000
Natural language inference | QNLI | 1,000 | 2 | 84,000
Natural language inference | RTE | 277 | 2 | 23,268
Natural language inference | WNLI | 71 | 2 | 5,964
Multi-task knowledge | MMLU | 564 | 4 | 47,376
Reading comprehension | SQuAD V2 | 200 | - | 16,800
Translation | UN Multi | 99 | - | 8,316
Translation | IWSLT 2017 | 100 | - | 8,400
Math reasoning | Math | 160 | - | 13,440
We generate 10 distinct prompts for both role-oriented and task-oriented categories.
Each prompt can be augmented with three examples, forming the few-shot prompts.
In total, we have 40 prompts for each dataset on each LLM.
For better efficiency and performance, we select the top 3 best-performing prompts of each type to conduct prompt attacks.
As a result, we evaluate the adversarial vulnerabilities of 4 LLMs across 13 datasets, encompassing a total of 4,032 prompts[4,032 = 3 × 4 × 4 × 13 × 7 − 336, where each number on the R.H.S. denotes #attacked prompts, #prompt types, #LLMs, #datasets, and #attacks, respectively. We did not conduct attacks on Vicuna on certain datasets because the outputs of Vicuna on these datasets are meaningless, so we subtract 336 prompts.] and their respective adversarial counterparts. This comprehensive evaluation allows us to gain valuable insights into the robustness and performance of LLMs across a wide range of scenarios and prompt styles.
With adversarial prompts on different samples in each dataset, we evaluate 567,084 samples in total.
Evaluation metrics.
Considering the diverse evaluation metrics across tasks and varying baseline performances across models and datasets, the absolute performance drop may not provide a meaningful comparison. Thus, we introduce a unified metric, the Performance Drop Rate (PDR). PDR quantifies the relative performance decline following a prompt attack, offering a contextually normalized measure for comparing different attacks, datasets, and models.
The PDR is given by:
PDR(A, P, f_θ, 𝒟) = 1 - ( Σ_{(x;y) ∈ 𝒟} ℳ[f_θ([A(P), x]), y] ) / ( Σ_{(x;y) ∈ 𝒟} ℳ[f_θ([P, x]), y] ),
where A is the adversarial attack applied to prompt P, and ℳ[·] is the evaluation function: for classification tasks, ℳ[·] is the indicator function 1[ŷ, y], which equals 1 when ŷ = y and 0 otherwise.
For instance, for the reading comprehension task, ℳ[·] is the F1 score; for translation tasks, ℳ[·] is the BLEU metric <cit.>. Note that a negative PDR implies that adversarial prompts can occasionally enhance the performance.
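A direct translation of this definition into code might look as follows; the `model` call, the prompt-sample concatenation, and `metric` are placeholders for the task-specific pieces described above.

```python
def pdr(attack, prompt, model, dataset, metric):
    """Performance Drop Rate: 1 - score(adversarial prompt) / score(clean prompt).

    metric(pred, y) should implement M[.] for the task at hand (exact match for
    classification, F1 for SQuAD V2, BLEU for translation).
    """
    adv_prompt = attack(prompt)
    clean = sum(metric(model(f"{prompt} {x}"), y) for x, y in dataset)
    adv = sum(metric(model(f"{adv_prompt} {x}"), y) for x, y in dataset)
    return 1.0 - adv / clean
```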
ยง ARE LLMS ROBUST TO PROMPT ATTACKS?
ยง.ยง Results across different attacks, LLMs, and prompts
We report and discuss the Average PDR (APDR) across different attacks, LLMs, and prompts.
Analysis on attacks.
ย <ref> summarizes the APDR of the 7 attacks on the 13 datasets. The APDR is calculated by
APDR_A(A, 𝒟) = (1/|𝒫|) (1/|ℱ|) Σ_{P ∈ 𝒫} Σ_{f_θ ∈ ℱ} PDR(A, P, f_θ, 𝒟),
where 𝒫 is the set of 4 types of prompts and ℱ is the set of 4 models.
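Aggregating PDR into APDR is then a plain average over prompts and models; a short sketch reusing the `pdr` helper above (again purely illustrative):

```python
import statistics

def apdr_per_attack(attack, prompts, models, dataset, metric):
    """Average PDR of one attack on one dataset, averaged over prompts and models."""
    scores = [pdr(attack, p, f, dataset, metric) for p in prompts for f in models]
    return statistics.mean(scores), statistics.stdev(scores)
```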
Our results offer several key insights. Firstly, attack effectiveness is highly variable, with word-level attacks proving the most potent, leading to an average performance decline of 33% across all datasets.
Character-level attacks rank the second, inducing a 20% performance dip across most datasets.
Notably, semantic-level attacks exhibit potency nearly commensurate with character-level attacks, emphasizing the profound impact of nuanced linguistic variations on ' performance. Conversely, sentence-level attacks pose less of a threat, suggesting adversarial interventions at this level have a diminished effect. Moreover, we observe notable variation across datasets, even within those concerning the same task, a facet worthy of further investigation. For instance, StressTest attacks on MMLU yield a mere 3% performance drop, while inflicting a 20% drop on MRPC. Interestingly, the same attack paradoxically bolsters model robustness in the case of QQP, a phenomenon we delve into in Sec.ย <ref>.
Lastly, we observe considerable variances in the outcomes of all attacks due to differing APDRs of models, creating significant discrepancies.
Note that while character-level attacks are detectable by grammar detection tools, word- and semantic-level attacks underscore the importance of robust semantic understanding and accurate task presentation/translation for . A comprehensive understanding of these nuances will inform a deeper comprehension of adversarial attacks on .
Analysis on LLMs.
ย <ref> summarizes the APDR of the 4 LLMs on the 13 datasets. The APDR is calculated by
APDR_{f_θ}(f_θ, 𝒟) = (1/|𝒜|) (1/|𝒫|) Σ_{A ∈ 𝒜} Σ_{P ∈ 𝒫} PDR(A, P, f_θ, 𝒟),
where 𝒫 is the set of 4 types of prompts and 𝒜 is the set of attacks.
Our analysis reveals that UL2 significantly outperforms other models in terms of robustness, followed by T5 and ChatGPT, with Vicuna presenting the least robustness. The robustness of UL2, T5, and ChatGPT varies across datasets, with UL2 and T5 showing less vulnerability to attacks on sentiment classification (SST-2), most NLI tasks, and reading comprehension (SQuAD V2). Specifically, UL2 excels in translation tasks, while ChatGPT displays robustness in certain NLI tasks. Vicuna, however, exhibits consistently high susceptibility to attacks across all tasks. Interestingly, there seems to be no clear correlation between model robustness and size. The observed differences in model robustness might stem from the specific fine-tuning techniques employed. For example, both UL2 and T5, fine-tuned on large datasets, and ChatGPT, fine-tuned via RLHFย <cit.>, exhibit better robustness than Vicuna. These findings encourage further investigation of fine-tuning strategies to enhance robustness.
Analysis on prompts.
ย <ref> summarizes the APDR of the 4 types of prompts on the 13 datasets. The APDR is calculated by
APDR_t(𝒟) = (1/|𝒜|) (1/|𝒫_t|) (1/|ℱ|) Σ_{A ∈ 𝒜} Σ_{P ∈ 𝒫_t} Σ_{f_θ ∈ ℱ} PDR(A, P, f_θ, 𝒟),
where 𝒫_t is the set of prompts of a certain type t, 𝒜 is the set of attacks, and ℱ is the set of 4 models.
In our analysis, few-shot prompts consistently demonstrate superior robustness to zero-shot prompts across all datasets.
Furthermore, while task-oriented prompts marginally outperform role-oriented prompts in overall robustness, both of them show varying strengths across different datasets and tasks.
Role-oriented prompts present increased robustness within the SST-2 and QQP datasets, whereas task-oriented prompts are more resilient within the MRPC, QNLI, SQuAD V2, and IWSLT datasets.
Insights into different effects of prompt types on model vulnerability can inform better prompt design and tuning strategies, enhancing robustness against adversarial attacks.
ยง.ยง Understanding the vulnerability of to adversarial prompts
We study the magic behind adversarial prompts.
Our erroneous response analysis (Appendixย <ref>) suggests that adversarial prompts can impact performance by inducing misclassification errors and hindering the model's ability to generate meaningful responses.
Thus, we conduct an attention experiment to investigate the influence of adversarial prompts on LLMs' focus on input words.
Attention visualization.
We propose two attention visualization techniques: 1) Attention by Gradient, which assigns an attention score to each word based on the gradient norm, and 2) Attention by Deletion, which assigns an attention score to each word by examining the absolute change in loss when the word is removed. Comprehensive details regarding these methods can be found in Appendixย <ref>. Both techniques produce similar results; hence, we focus on results from the Attention by Gradient method for simplicity. Our key findings, as demonstrated in ย <ref>, are as follows:
* Clean prompts: efficient attention allocation. LLMs predominantly concentrate on key terms within clean prompts, aiding in accurate classifications.
For instance, for the clean prompt of BertAttack in ย <ref>, LLMs mainly allocate attention to the term `lazy', correctly deducing a `Negative' sentiment.
* Adversarial prompts: attention divergence. Adversarial prompts can reroute LLMs' attention from integral text segments, causing misclassifications.
In some attacks like CheckList and StressTest, the model simultaneously concentrates on the target text and adversarial content, amplifying its susceptibility to adversarial perturbations.
For instance, introducing a random sequence `LKF0FZxMZ4' during a CheckList attack distracts the model, reducing focus on the critical word `good' for accurate classification.
In other attacks, such as BertAttack and DeepWordBug, the model's attention is entirely diverted from the text requiring classification towards adversarial prompts, leading to a significant shift in focus.
For example, in DeepWordBug attack, typos in specific words divert the model's attention from `awful' to the altered word `Qetermine'.
Why do sentence-level attacks enhance performance?
We further investigate the intriguing observation that proposed StressTest and CheckList attacks can augment model performance on particular datasets, as revealed by our attention analysis techniques. Attention distribution in ย <ref> illustrates that upon adding an irrelevant sequence such as `and true is true', focus intensifies on the `not_entailment' label, while maintaining attention on significant words like `minnow' and `duck', thus making a correct prediction.
ยง.ยง Transferability of adversarial prompts
ย <ref> displays the effectiveness of various attacks in transferring adversarial prompts between distinct LLMs.
For each dataset and prompt type, we selected the most vulnerable prompts generated by a source model (e.g., ChatGPT).
These prompts were then utilized to launch transfer attacks against the target models (e.g., T5).
The impact of these transfer attacks was quantified by calculating APDR_transfer(A, f_θ^target) = (1/|𝒫_source|) (1/|𝒯|) Σ_{P ∈ 𝒫_source} Σ_{𝒟 ∈ 𝒯} PDR(A, P, f_θ^target, 𝒟), where f_θ^target is the target model, 𝒫_source is the set of prompts selected from the source model, and 𝒯 is the set of all datasets.
In general, we observe that adversarial prompts exhibit some degree of transferability; however, it is marginal compared to the results in ย <ref> and <ref>.
Specifically, the APDR induced in the target model by adversarial prompts from the source model is small compared to the original APDR on the source model.
Furthermore, the standard deviation tends to be larger than the APDR, indicating that the transferability is inconsistent.
Some adversarial prompts can be successfully transferred, causing a performance drop, while others may unexpectedly improve the performance of the target model.
A prime example is the BertAttack transfer from UL2 to Vicuna, which resulted in a -0.70(3.18) value, suggesting an unanticipated enhancement in Vicuna's performance when subjected to these adversarial prompts.
These phenomena illustrate the complex robustness traits of different models.
The transferability to ChatGPT is better compared to T5 and UL2.
This suggests an avenue to generate adversarial prompts to attack black-box models such as ChatGPT by training on small models like T5, which could be used for future research on robustness.
ยง.ยง Which prompts are more robust? Analysis on word frequency
Identifying the frequent patterns in prompts that may affect robustness is essential to both researchers and end-users.
We perform an initial analysis on word frequency towards this study.
[Figure: Word frequency of adversarial prompts in the CoLA dataset.]
We divide prompts into two categories: Vulnerable prompts, causing a performance drop of over 10%, and Robust prompts, with a performance drop of 10% or less.
Our analysis uncovers words more susceptible or resilient to attacks. For example, on the CoLA task, prompts with `acting', `answering', and `detection' appear less susceptible, whereas prompts with words like `analyze', `answer', and `assess' seem more vulnerable (ย <ref>). However, identifying robust words is challenging in the MRPC dataset due to almost equal word frequencies (Appendixย <ref>, ย <ref>).
This study suggests that some words and linguistic patterns are more susceptible to adversarial perturbations, thus affecting the performance of .
Such results can inform future research on robustness, guide non-expert users to write better prompts, and help develop defenses against adversarial prompts.
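A sketch of how this split and the word counts can be computed from per-prompt PDR values; the punctuation cleanup and the 10% threshold follow the text, everything else is illustrative.

```python
from collections import Counter

def word_frequencies(prompt_pdrs, threshold=0.10):
    """Split prompts into robust / vulnerable by PDR and count word occurrences.

    prompt_pdrs: iterable of (prompt_string, pdr_value) pairs.
    """
    robust, vulnerable = Counter(), Counter()
    for prompt, drop in prompt_pdrs:
        target = vulnerable if drop > threshold else robust
        target.update(w.strip(".,:'\"").lower() for w in prompt.split())
    return robust, vulnerable

# Toy usage with made-up PDR values
robust, vulnerable = word_frequencies([
    ("Determine the overall sentiment of this sentence:", 0.02),
    ("Analyze this sentence and answer with positive or negative:", 0.35),
])
print(vulnerable.most_common(5))
```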
ยง.ยง Countermeasures and defenses
In light of prior insights, we discuss the potential countermeasures.
1) Input preprocessing: One approach involves directly detecting and addressing potential adversaries, such as detecting typos, irrelevant sequences, and enhancing clarity and conciseness of prompts.
2) Incorporate low-quality data in pre-training: Low-quality data can serve as potential adversaries, and explicitly including low-quality data during pre-training may develop a better understanding of diverse inputs and build resilience against adversaries.
3) Explore improved fine-tuning methods: Investigating alternative fine-tuning techniques could lead to enhanced robustness. As we demonstrated before, models such as T5 and UL2 exhibit greater robustness compared to , suggesting potential benefits of large-scale supervised fine-tuning.
ยง LIMITATIONS
We acknowledge several limitations that could be addressed in future research.
First, due to the required substantial computation, we did not perform evaluations on the full datasets but relied on sampling.
Future research may evaluate on the entire datasets to gain more comprehensive insights.
Second, our analysis did not include GPT-4 and some of the latest LLMs due to a lack of APIs and computation resources.
Including more LLMs in the future could provide a more diverse perspective.
Third, the tasks and datasets are limited.
Expanding them offers a broader understanding of vulnerabilities.
Fourth, we did not evaluate more advanced techniques of prompt engineering such as chain-of-thought (CoT) <cit.> and tree-of-thought (ToT) <cit.> since it is hard to perform automatic evaluation using these methods.
We believe more evaluations can be done on latest prompt engineering techniques.
ยง CONCLUSION
We thoroughly evaluated the robustness of LLMs to adversarial prompts using the proposed PromptBench.
The results show that current LLMs are not robust enough to adversarial prompts, and we further analyzed the reasons behind this using attention visualization.
Moreover, we analyzed frequent words to provide guidance for both experts and non-experts in developing better prompt engineering tools.
We hope that PromptBench can serve as a foundational tool for research on robust LLMs.
ยง DISCLAIMER
This paper leveraged adversarial attacks on prompts for evaluation, which might trigger potential misuse of LLMs.
We emphasize that all attacks conducted in this work are only to evaluate the robustness of LLMs to adversarial prompts, with the intention of facilitating more robust LLMs.
Additionally, we leveraged both open-source LLMs in Huggingface <cit.> and online APIs for evaluation.
As open-source LLMs and online APIs may continuously change, some results may not be reproducible.
However, our code and analysis framework remain versatile and can still be useful for future LLMs.
ยง CHECKLIST
* For all authors...
* Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
* Did you describe the limitations of your work?
* Did you discuss any potential negative societal impacts of your work?
* Have you read the ethics review guidelines and ensured that your paper conforms to them?
* If you are including theoretical results...
* Did you state the full set of assumptions of all theoretical results?
* Did you include complete proofs of all theoretical results?
* If you ran experiments (e.g. for benchmarks)...
* Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
See the code webpage.
* Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
* Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
* Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
* If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
* If your work uses existing assets, did you cite the creators?
* Did you mention the license of the assets?
* Did you include any new assets either in the supplemental material or as a URL?
* Did you discuss whether and how consent was obtained from people whose data you're using/curating?
* Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
* If you used crowdsourcing or conducted research with human subjects...
* Did you include the full text of instructions given to participants and screenshots, if applicable?
* Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
* Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
ยง PROMPTS
ย <ref> shows examples of different types of prompts.
ยง ENVIROMENTS, MODELS AND DATASETS
ยง.ยง Models
For more details about Vicuna, please refer to its Github repository[<https://github.com/lm-sys/FastChat>].
For the other LLMs, please refer to the Huggingface Transformers repository <cit.>.
* Flan-T5-large <cit.>: Flan-T5-large is a derivative of the Text-to-Text Transfer Transformer (T5) model, developed by Google.
* Dolly-6B <cit.>: The Dolly-v1-6b model is a 6-billion parameter causal language model developed by Databricks. It originates from EleutherAIโs GPT-J <cit.> and has been fine-tuned on the Stanford Alpaca <cit.> corpus, which comprises roughly 52K question/answer pairs.
* Vicuna-13B <cit.>: Vicuna-13B, fine-tuned from the LLaMA-13B base model, was developed using approximately 70K user-shared conversations collected from ShareGPT.com via public APIs.
* Cerebras-13B <cit.>: Cerebras-13B is based on the GPT-3 style architecture. All models in the Cerebras-GPT series have been trained according to Chinchilla scaling laws <cit.>, which optimize compute efficiency by maintaining a ratio of 20 tokens per model parameter.
* LLaMa-13B <cit.>: The LLaMA-13B model, developed by the FAIR team at Meta AI, is an autoregressive language model that employs the transformer architecture.
* GPT-NEOX-20B <cit.>: GPT-NEOX-20B is a large-scale implementation of GPT-based models, with NEOX-20B specifically referring to a variant of this series comprising 20 billion parameters.
* Flan-UL2 <cit.>: Flan-UL2 is an encoder decoder model based on the T5 architecture. It uses the same configuration as the UL2 model. It was fine tuned using the "Flan" prompt tuning and dataset collection.
* ChatGPT <cit.>: Developed by OpenAI, ChatGPT is a large language model trained to generate human-like text based on the prompt it's given. It uses the GPT-3 architecture and has been fine-tuned for more interactive and conversational tasks.
ยง.ยง Datasets
GLUE The GLUE dataset (General Language Understanding Evaluation)ย <cit.> is a collection of resources designed to assess and benchmark the performance of natural language processing (NLP) models across various language understanding tasks. In this study, we selected 8 tasks, including Sentiment Analysis (SST-2ย <cit.>), Grammar Correctness (CoLAย <cit.>), Duplicate Sentence Detection (QQPย <cit.>, MRPCย <cit.>), and Natural Language Inference (MNLIย <cit.>, QNLIย <cit.>, RTEย <cit.>, and WNLIย <cit.>).
MMLU <cit.> To evaluate the extensive world knowledge and problem-solving abilities of large language models, the MMLU dataset encompasses 57 tasks consisting of multiple-choice questions from diverse domains, such as mathematics, history, computer science, law, and more. This dataset serves as a massive multitask test.
SQuAD V2 <cit.> SQuAD v2 is a widely-used dataset for training and evaluating natural language processing models in the domain of machine reading comprehension. SQuAD v2 enhances the original SQuAD dataset (SQuAD v1) by introducing unanswerable questions, increasing the challenge for models. For each question, the model must either: (1) identify the correct answer span within the passage (if the question is answerable), or (2) predict that the question is unanswerable (if there is no answer span within the passage).
UN Multi <cit.> The Multi UN dataset is a large parallel corpus of text gathered from official United Nations documents. It comprises texts in six official languages of the United Nations: Arabic, Chinese, English, French, Russian, and Spanish. The Multi UN dataset primarily contains formal texts, which may limit its applicability to more informal language domains or conversational applications.
IWSLT 2017 <cit.> The IWSLT 2017 dataset (International Workshop on Spoken Language Translation 2017) is a collection of multilingual, multi-domain parallel text data specifically designed for evaluating spoken language translation systems. The translation tasks include data from the TED Talks Open Translation Project, featuring parallel text data for multiple language pairs such as English-German, English-French, English-Chinese, and English-Czech. The dataset consists of both spoken language transcriptions and their corresponding translations.
Math <cit.> DeepMind Mathematics Dataset is a collection of math problems aimed at evaluating the mathematical reasoning abilities of artificial intelligence models. The dataset challenges AI models to solve a diverse range of mathematical problems, spanning from algebra to calculus, and tests their ability to comprehend and reason via complex mathematical concepts.
ยง.ยง Enviroments
To reproduce the computational environment used in this study, an environment file, env.yml, is provided in our repository. This YAML file lists all the dependencies and their specific versions used in the study. Users can create an identical Conda environment using the command conda env create -f environment.yml.
The computational experiments were conducted on machines equipped with NVIDIA A100 (80GB GPU memory each), NVIDIA Tesla V100 GPUs (16GB of GPU memory each) and NVIDIA Tesla P40 GPUs (24GB of GPU memory each).
ยง ATTACKS
Attacks on samples.
Note that while this paper only focuses on attacking the prompts, the input samples can also be attacked in the same manner.
We only attack prompts since they are versatile across all tasks, whereas samples are not always necessary.
Attacking both is doable and would yield even worse results, but it is more time-consuming.
ยง.ยง Details of attacks
The majority of our prompt attacks have been developed by adapting and revising strategies from TextAttack[<https://github.com/QData/TextAttack>]ย <cit.>. For the detailed settings of each attack, please refer to our code.
Our proposed adversarial attacks incorporate a critical feature, referred to as LabelConstraint. This feature imposes restrictions on the perturbations, specifically prohibiting alterations to certain task-essential words. For instance, in translation tasks, the word `translation' is preserved, while in the context of sentiment classification tasks, pivotal sentiment indicators such as `positive' and `negative' remain untouched. Moreover, in the few-shot learning scenario, the few-shot examples are also exempt from adversarial attacks. Such constraint preservation ensures the validity of prompts while exploring the model's vulnerability to adversarial manipulation.
Character Level: Techniques such as TextBugger and DeepWordBug manipulate text at the character level by introducing typos or errors within words through insertions, deletions, replacements, and replications. These methods capitalize on the model's vulnerability to minor perturbations in individual characters, frequently resulting in misclassification or erroneous interpretations.
We primarily adopt the settings from TextAttack for TextBugger and DeepWordBug, such as the repeat constraint which prohibits modifying words that have already been altered. Additionally, For TextBugger, TextAttack enforces a constraint on the overall similarity between the sentence encodings of clean and adversarial prompts, utilizing the Universal Sentence Encoder <cit.> to generate text embeddings. In our study, we set this minimum similarity threshold to 0.8. For DeepWordBug, TextAttack set constraint on edit distance (Levenshtein Distance) as 30.
Word Level: In this study, we employ BertAttack and TextFooler for word-level attacks. These approaches focus on replacing words within the text with synonyms or contextually similar words. By making ostensibly minor alterations to the input text, these attacks can deceive large language models into producing incorrect outputs or substantially modifying their predictions. We meticulously fine-tune the hyperparameters of BertAttack and TextFooler to obtain more appropriate synonyms.
For TextFooler, we set the minimum embedding cosine similarity between word and its synonyms as 0.6, and the minimum Universal Sentence Encoder similarity is 0.84. For BertAttack, the minimum Universal Sentence Encoder similarity is 0.8.
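A minimal sketch of how such a similarity constraint can be checked; `encode` is a placeholder for the Universal Sentence Encoder (or any sentence embedder), and the thresholds mirror the values quoted above.

```python
import numpy as np

def passes_similarity_constraint(clean_prompt, adv_prompt, encode, min_sim=0.84):
    """Accept an adversarial prompt only if its sentence embedding stays close to the
    clean prompt's embedding (0.84 for TextFooler, 0.8 for BertAttack in our settings).

    encode: any callable mapping a string to a 1-D embedding vector; it stands in
    here for the Universal Sentence Encoder used by TextAttack.
    """
    u, v = np.asarray(encode(clean_prompt)), np.asarray(encode(adv_prompt))
    cos = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return cos >= min_sim
```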
Sentence Level: StressTest and CheckList serve as examples of sentence-level attacks, wherein adversaries attempt to distract the model by adding irrelevant or extraneous sentences to the input text. By incorporating misleading information into the text, these methods can potentially cause the model to lose focus on the primary context, leading to inaccurate results. For the StressTest attack, we adopt similar settings to those in <cit.>, appending "and true is true," "and false is not true," or "and true is true" for five times to the end of a prompt. For the CheckList attack, we generate 50 random sequences consisting of alphabets and digits, each with a length of 10, and append this random sequences into the end of a prompt.
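The two sentence-level perturbations essentially reduce to string appends; a sketch following the settings quoted above (the suffix strings come from the text, everything else is illustrative):

```python
import random
import string

def stresstest_prompt(prompt, suffix="and true is true", repeats=5):
    """Append an irrelevant tautology several times, as in the StressTest attack."""
    return prompt + " " + " ".join([suffix] * repeats)

def checklist_prompts(prompt, n_sequences=50, length=10, seed=0):
    """Append each of n random alphanumeric sequences to the prompt (CheckList attack)."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits
    seqs = ["".join(rng.choices(alphabet, k=length)) for _ in range(n_sequences)]
    return [f"{prompt} {s}" for s in seqs]
```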
Human Level: At the human level, adversaries can construct prompts using various languages, such as Chinese, French, Arabic, Spanish, Japanese, and Korean, subsequently translating these prompts into English. By exploiting the nuances and idiosyncrasies of different languages during translation, it can introduce subtle ambiguities, grammatical errors, or inconsistencies in the input prompt. This poses a formidable challenge for NLP models in generating accurate and coherent responses.
For each language, we first construct 10 prompts based on an English prompt using GPT-4 <cit.>, and then translate them back to English with Google Translate.
ยง.ยง Samples of adversarial prompts
ย <ref> presents examples of adversarial prompts generated by the 7 attacks.
Note that we have generated 4,032 adversarial prompts in total. For other examples, please refer to our code and the visualization website (Appendixย <ref>).
ยง.ยง Semantic preserving of adversarial prompts
The objective is to maintain the semantic integrity of adversarial prompts to ensure that they are acceptable (or imperceptible) to human comprehension.
Therefore, it is of paramount importance that our adversarially engineered prompts retain coherence and realism. The ultimate goal is to create prompts that, despite their adversarial nature, could conceivably be constructed by a human interactor, thereby ensuring a practical relevance to our research in the context of real-world language model applications.
We have predominantly focused on word-level attacks, specifically those generated by BertAttack and TextFooler. These present unique challenges, such as the potential for erroneous synonym replacement, leading them to produce the most unacceptable adversarial prompts as per our expectations. On the other hand, we have disregarded sentence-level attacks in this study as they are designed to be inherently imperceptible to humans.
To address the challenges associated with word-level attacks, we have diligently fine-tuned the hyperparameters of each attack, thus striving to maintain semantic continuity. We validated our approach through a questionnaire-based study[Note that our user study does not introduce ethics concerns to the volunteers.]. Here, we provided 48 samples from each word-level attack to five volunteers, who were tasked with determining whether the adversarial prompts retained the same semantic meaning as their clean counterparts. The results are presented in ย <ref>.
Our overall findings demonstrate that our refined attack strategies have been successful in preserving semantic meaning in adversarial prompts, despite the inherently difficult nature of word-level attacks.
ยง EXPERIMENTS
ยง.ยง Details on test sets sampling
For the GLUE datasets, we sample 1,000 instances when the validation set exceeds this size; otherwise, we utilize the entire validation set. With respect to , we adopt a smaller sample size of 200 instances for computational efficiency. For the dataset, we select 10 instances for each of the 57 tasks if the validation set exceeds this size; if not, the entire validation set is used. For the dataset, we randomly select 200 validation instances. Regarding and , we focus on three languagesโEnglish, French, and German, which are primarily supported by T5 and UL2. We select a total of 100 validation instances, evenly distributed among all possible translation pairs, e.g., English to French. For the Math dataset, we select 20 types of math problems, choosing either 5 or 10 instances per type, resulting in a total of 160 instances.
This sampling strategy ensures the formation of a manageable and representative evaluation set for each dataset, thereby enabling an effective assessment of the performance and robustness of across various tasks and domains.
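A sketch of this capping strategy (illustrative only; the helper names and seeds are ours):

```python
import random

def sample_split(examples, cap=1000, seed=42):
    """Use the full validation split if it has at most `cap` instances,
    otherwise draw a random subset of size `cap`."""
    rng = random.Random(seed)
    examples = list(examples)
    return examples if len(examples) <= cap else rng.sample(examples, cap)

def sample_mmlu(task_to_examples, per_task=10, seed=42):
    """Sample up to `per_task` instances from each of the 57 MMLU tasks."""
    out = []
    for task, examples in task_to_examples.items():
        out.extend(sample_split(examples, cap=per_task, seed=seed))
    return out
```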
ยง.ยง Results of clean prompts on all
ย <ref> showcases the performance of different models across various datasets when using clean prompts. Certain LLMs, including Dolly, Cerebras-GPT, LLaMA, and GPT-NeoX, encounter difficulties with some datasets. For instance, Dolly's accuracy for the QQP dataset is merely 0.53%, a stark contrast to T5's accuracy of 86.67%. Consequently, we focus our attack study on models that demonstrate superior performance, namely T5, Vicuna, UL2, and ChatGPT.
ยง.ยง Do really understand prompts?
ยง.ยง.ยง Challenges in prompt understanding
We delve into the challenges that some large language models () encounter when attempting to effectively understand and respond to prompts.
Through this analysis, we aim to uncover the limitations of in prompt understanding and guide the development of more effective capable of eliciting accurate and contextually relevant responses.
ย <ref> presents the performance of different models on different datasets. It is evident that models like Dolly, LLaMA, Vicuna, Cerebras-GPT, and GPT-NeoX struggle to genuinely understand the prompts and generate appropriate responses.
In contrast, T5, Flan-UL2, and ChatGPT demonstrate a more accurate understanding of the prompts, resulting in significantly higher performance.
This phenomenon raises concerns about the comprehension and reasoning abilities of large models. It is crucial to explore prompt engineering techniques and model fine-tuning strategies that can enhance the LLM's ability to comprehend and respond accurately to various prompts. Here we analyze this phenomenon from two perspectives:
* Training data memorization: are trained on extensive text corpora and can inadvertently memorize certain phrases, patterns, or links. In some cases, appear to recall a memorized pattern instead of comprehending the task at hand.
* Fine-tuning and adaptation: Some may benefit from fine-tuning or additional training on task-specific data to enhance their understanding of specific prompts and improve performance. Customizing models to better comprehend and respond to certain types of prompts can help mitigate this issue.
ยง.ยง.ยง Effectiveness of different prompt types
ย <ref> and ย <ref> presents typical results from our analysis. By examining the performance of various prompt types, we arrive at the following observations:
* As evidenced in ย <ref>, there exists a substantial disparity in performance among different prompts within the same dataset and model, indicative of considerable variability. In some instances, the performance discrepancy between two prompts of identical types can be nearly 15%, accentuating the critical role of prompt selection and design in enhancing model performance. For instance, within the QNLI dataset, the standard deviation of distinct prompts reaches up to 7.48% for ChatGPT and 5.57% for T5.
* As per ย <ref>, few-shot prompts boost the performance of for certain datasets, such as QQP, MRPC, and translation tasks. By presenting a finite set of examples, the model attains a superior understanding of the task's context and requirements, culminating in more precise and consistent translations. Contrarily, few-shot prompts may impede performance for other datasets, such as SST-2 and CoLA.
* A definitive superiority between task-oriented prompts and role-oriented prompts is elusive, as both exhibit nearly equivalent performance.
* Despite their extraordinary abilities, LLMs continue to grapple with challenges in carrying out mathematical computations, particularly those involving floating-point numbers. This difficulty underscores the importance of continued research and development in the field of mathematical reasoning and computation for large language models.
ยง.ยง Additional experiment results for Falcon-Instruction-40B
Falcon-40B <cit.>, a decoder-only model boasting 40 billion parameters, was engineered by TII and trained on an extensive corpus of 1,000 billion tokens from RefinedWeb <cit.>, enriched with meticulously curated corpora. With its publication on May 25, we embarked on an empirical investigation to scrutinize its susceptibility to adversarial attacks. Specifically, we conducted a series of seven distinct prompt attacks on Falcon-40B, leveraging the SST-2 dataset in a zero-shot setting for both task-oriented and role-oriented prompts.
The outcomes of these adversarial prompts are comprehensively detailed in ย <ref>. Remarkably, it became evident that Falcon-40B exhibits significant vulnerability to adversarial perturbations at varying levelsโcharacter, word, and semantic. For certain prompts, the effectiveness of these attacks is so pronounced that it induces a precipitous performance plunge, resulting in an accuracy drop rate that reaches unity.
Additionally, we analyze the performance across ten disparate prompts, as illustrated in ย <ref>. Intriguingly, even with clean promptsโthe model exhibits a considerable degree of variability in its performance. For instance, the prompt Determine the overall sentiment of this sentence, categorizing it as `positive' or `negative': yields an impressive accuracy of 90.83%. Conversely, the prompt Review this statement and decide whether it has a `positive' or `negative' sentiment: registers an accuracy of 0.00%, a stark contrast that underscores the model's inconsistency in prompt handling.
ยง ERRONEOUS OUTPUT ANALYSIS
In ย <ref>, our analysis reveals two primary modifications introduced by adversarial prompts:
* Induced Misclassification: As exemplified by BertAttack, CheckList, and Translation attacks, adversarial prompts can lead the model to erroneous classifications. For instance, the sentiment prediction may shift from positive to negative due to the influence of the adversarial prompt. This instance validates the efficacy of adversarial attacks in manipulating the model's decision-making processes.
* Generation of Incoherent Responses: In the case of the DeepWordBug attack, the adversarial prompt results in the model generating incoherent or nonsensical sentences. For example, the response "None of the above choices" does not align with any positive or negative sentiment classification, thereby demonstrating that the model fails to comprehend the intended input. This observation emphasizes the susceptibility of Large Language Models (LLMs) to adversarial perturbations that can potentially hamper their natural language understanding capabilities.
ยง ATTENTION VISUALIZATION TECHNIQUES
ยง.ยง Attention by Gradient
Consider an input x = [t_1^(1), t_2^(1), ..., t_n^(k)] comprised of k words and n tokens, where t_i^(j) represents the i-th token belonging to word w_j, and let y be the corresponding label. The LLM f_θ first decomposes each word into tokens, so gradients of tokens that correspond to the same word need to be aggregated; let M denote the token-to-word mapping, i.e., w_j = M(t_i^(j)). We first compute the gradient of each token according to
g_{t_i^(j)} = ∂ℒ[f_θ(x), y] / ∂t_i^(j).
Once we obtain the gradients, we compute the word-level gradient by summing the token-level gradients corresponding to each word:
g_{w_j} = Σ_{i ∈ {1, ..., n}} g_{t_i^(j)}  s.t.  M(t_i^(j)) = w_j.
Finally, we calculate the ℓ_2 norm of each word's gradient, followed by min-max normalization to produce a score s_{w_j} for each word:
s_{w_j} = ( ||g_{w_j}||_2 - min_i ||g_{w_i}||_2 ) / ( max_i ||g_{w_i}||_2 - min_i ||g_{w_i}||_2 ).
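A sketch of this procedure for an encoder-style classification model is given below; it assumes a HuggingFace-like interface (`get_input_embeddings`, a `.logits` output, and `word_ids` from a fast tokenizer supplied by the caller) and is illustrative rather than the exact implementation used in the paper.

```python
import torch

def attention_by_gradient(model, tokenizer, text, label_id, word_ids):
    """Gradient-based word attention scores.

    word_ids: for every token position, the index of the word it belongs to
    (None for special tokens), e.g. from a fast tokenizer's word_ids().
    """
    enc = tokenizer(text, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
    out = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"])
    loss = torch.nn.functional.cross_entropy(out.logits, torch.tensor([label_id]))
    loss.backward()
    grads = embeds.grad[0]                        # (n_tokens, hidden): token-level gradients

    n_words = max(w for w in word_ids if w is not None) + 1
    word_grads = torch.zeros(n_words, grads.shape[1])
    for tok, w in enumerate(word_ids):            # sum token gradients belonging to each word
        if w is not None:
            word_grads[w] += grads[tok]
    norms = word_grads.norm(dim=1)                # l2 norm per word
    return (norms - norms.min()) / (norms.max() - norms.min() + 1e-12)  # min-max normalized
```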
ยง.ยง Attention by Deletion
Attention by deletion is a prevalent method used in black-box textual attacks to determine the significance of each word in the input. Given an input x with the i-th word w_i deleted, denoted as xฬ^(i), the importance score of w_i can be computed by taking the absolute difference of the loss function โ evaluated at the complete input x and the altered input xฬ^(i):
s_w_j = | โ[f_ฮธ(x), y] - โ[f_ฮธ(xฬ^(i)). y] |
This raw score is then normalized using min-max normalization, yielding a final score for each word:
s_{w_i} \leftarrow \frac{s_{w_i} - \min_j s_{w_j}}{\max_j s_{w_j} - \min_j s_{w_j}}.
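A minimal sketch of this deletion-based procedure is shown below. It treats the target model as a black-box loss function, which is an assumption made for illustration and not the exact interface used in the experiments.

def word_deletion_scores(words, label, loss_fn):
    # words: list of words in the input; label: gold label y.
    # loss_fn: callable (text, label) -> scalar loss of the target model.
    base = loss_fn(" ".join(words), label)
    raw = []
    for i in range(len(words)):
        deleted = " ".join(words[:i] + words[i + 1:])   # input with the i-th word removed
        raw.append(abs(base - loss_fn(deleted, label)))
    lo, hi = min(raw), max(raw)
    return [(s - lo) / (hi - lo + 1e-12) for s in raw]  # min-max normalize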
ยง WORD FREQUENCY ANALYSIS
ย <ref> illustrates the distribution of word frequency as processed by the T5 model within the MRPC dataset. This distribution reveals a significant overlap in the occurrence of words in both robust and vulnerable prompts. This commonality suggests that the robustness or vulnerability of a prompt is not solely determined by the presence of specific words but potentially more related to their contextual use or placement within the prompt. It underscores the complexity and challenge of discerning between robust and vulnerable words based solely on frequency counts. This observation motivates the need for further investigation into additional factors, such as semantic coherence or syntactic structures, that may contribute to the robustness of prompts. For more granular analysis on word frequency, please refer to our detailed reports available in our Github repository[<https://github.com/microsoft/promptbench>].
ยง VISUALIZATION WEBSITE FOR ADVERSARIAL PROMPTS
In order to provide an interactive and user-friendly platform for visualizing and exploring adversarial prompts, we developed a web-based application using Streamlit hosted by Hugging Face: <https://huggingface.co/spaces/March07/PromptBench>.
The visualization website, as shown in <ref>, enables users to select from a variety of models (T5, Vicuna, UL2, ChatGPT), datasets (SST-2, CoLA, QQP, MRPC, MNLI, QNLI, RTE, WNLI, MMLU, SQuAD V2, IWSLT 2017, UN Multi, Math), prompt types (zeroshot-task, zeroshot-role, fewshot-task, and fewshot-role), and attacks (TextBugger, DeepWordBug, BertAttack, TextFooler, CheckList, StressTest, and Semantic). Based on the user's selection, the application generates adversarial prompts tailored to the chosen model, dataset, prompt type and attack.
|
http://arxiv.org/abs/2306.08178v1
|
20230613235329
|
ChatGPT vs. Lightweight Security: First Work Implementing the NIST Cryptographic Standard ASCON
|
[
"Alvaro Cintas-Canto",
"Jasmin Kaur",
"Mehran Mozaffari-Kermani",
"Reza Azarderakhsh"
] |
cs.CR
|
[
"cs.CR"
] |
A. Cintas-Canto is with the School of Technology and Innovation, Marymount University, Virginia, VA 22207, USA (e-mail: [email protected]). J. Kaur and M. Mozaffari-Kermani are with the Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA (e-mail: [email protected], [email protected]). R. Azarderakhsh is with the Department of Computer and Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA (e-mail: [email protected]).
ChatGPT vs. Lightweight Security: First Work Implementing the
NIST Cryptographic Standard ASCON
Alvaro Cintas-Canto, Jasmin Kaur, Mehran Mozaffari-Kermani, and Reza
Azarderakhsh
July 31, 2023
===============================================================================================
This study, to the best of our knowledge, is the first to explore
the intersection between lightweight cryptography (LWC) and advanced
artificial intelligence (AI) language models. LWC, in particular the
ASCON algorithm which has been selected as the LWC standard by the
National Institute of Standards and Technology (NIST) in Feb. 2023,
has become increasingly significant for preserving data security given
the quick expansion and resource limitations of Internet of Things
(IoT) devices. On the other hand, OpenAI's large language model (LLM)
ChatGPT has demonstrated significant potential in producing complex,
human-like text. This paper offers a novel method for implementing
the NIST LWC standard, ASCON, using the GPT-4 model. Moreover, this
paper details the design and functionality of ASCON, the procedures
and actual Python implementation of ASCON using ChatGPT, and a discussion
of the outcomes. The results contribute valuable insights into the
efficient application of advanced AI language models in cryptography,
particularly in constrained environments. Source code can be found
at: https://github.com/DrCintas/ASCON-with-ChatGPT.
Artificial Intelligence, ChatGPT, Cryptographic Implementation, Lightweight
Cryptography, Security, Software Implementation.
ยง INTRODUCTION
Lightweight cryptography (LWC) has emerged as an approach to secure
resource-constrained devices such as Internet of Things (IoT) devices,
a category that has seen exponential growth over the last few years.
IoT devices, given their severe resource constraints, demand cryptographic
solutions that are capable of working efficiently with such limitations.
This includes limited processing power, small memory sizes, constrained
energy supply, and low-bandwidth communication. Despite these, IoT
devices are expected to maintain the confidentiality, integrity, and
availability of data. Therefore, LWC is designed to provide robust
security for such limited-resource devices. Due to the need to adopt an LWC standard, the National Institute of Standards and Technology (NIST) recently concluded its multi-year standardization effort, in which
ASCON [1] emerged as the winner among the top 10 NIST LWC finalists
[2] in Feb. 2023 [3]. ASCON is an authenticated encryption with associated data (AEAD) scheme designed to provide strong security, high efficiency, and simplicity. Its reduced gate count and its proven security make it an ideal solution for IoT devices.
With the release of ChatGPT, a large language model (LLM) announced
by OpenAI, the field of deep learning and language models has been
revolutionized. Its implications in academic writing, search engines,
and coding are significant, with particularly impressive results shown
when designing computer systems to satisfy particular user requirements
[4]. As of now, ChatGPT has two different deep learning-based
language models: GPT-3.5 and GPT-4. The term "GPT" stands for
generative pre-trained transformer, a powerful artificial intelligence
language model that uses deep learning to generate human-like text
based on provided input. The GPT-4 model stands out for its neural network design, which comprises substantially more parameters than previous versions.
This increase in parameter volume is essential for improving the system's
capacity to produce sophisticated and contextually rich text, making
it more useful and efficient even in contexts with limited resources.
However, it is yet to be seen how well ChatGPT performs in writing
the complex code required for cryptographic algorithms. Kwon et al. [5] devise methodologies to implement the current symmetric-key
cryptography block ciphers Advanced Encryption Standard (AES) and
CHAM using ChatGPT; the implementation was shown to be straightforward
and precise, where the generated code compiled without errors. This
paper, for the first time, devises an approach to efficiently utilize
the sophisticated OpenAI LLM ChatGPT to implement the NIST LWC standard
ASCON. Throughout this paper, the model version used to implement
ASCON is GPT-4.
ยง PRELIMINARIES
ASCON is an LWC cipher suite with a collection of authenticated encryption
designs and a family of hash and extensible output functions. However,
throughout this work, we are interested in authenticated encryption.
Therefore, for the sake of brevity, we only discuss ASCON authenticated
encryption. For those readers interested in more details about ASCON,
please refer to [1]. ASCON uses a duplex-based mode of operation,
where ASCON's 320-bit permutation iteratively applies a substitution-permutation
network (SPN) to encrypt or decrypt data. The process takes place
in a bit-slice fashion, a feature that makes it scalable to 8-, 16-,
32-, and 64-bit platforms while remaining lightweight.
ASCON has two different variants for different message lengths: ASCON-128
and ASCON-128a. ASCON-128 uses a message length of 64 bits while ASCON-128a
uses a message length of 128 bits [1]. The parameter sets for
the different ASCON variants are shown in Table I. As shown, the major
difference is the sizes of the data block and the number of rounds
(p^b). The encryption process is depicted in Fig. 1 and consists
of four stages: 1) Initialization operation, 2) Associated data processing,
3) Plaintext processing, and 4) Finalization operation for tag generation.
These stages are updated using two 320-bit permutations, denoted as
p^a and p^b, where a and b represent the number of
rounds. The 320-bit internal state is organized as five 64-bit register words, on which the permutations operate in a bit-sliced fashion. In a complete 12-round ASCON,
the permutations iteratively apply an SPN-based round transformation,
which involves adding round constants, applying the substitution layer,
and employing the linear layer for diffusion within the internal state.
The substitution layer incorporates a compact and lightweight 5-bit
S-box as shown in Fig. 2. This S-box is applied in parallel 64 times
to update each bit-slice of the internal state. The 5-bit S-box is
designed using Boolean logic, enabling efficient implementation on
both ASIC and FPGA hardware platforms. The linear layer of ASCON updates
each 64-bit word of the internal state by rotating register words
with different shift values and performing a modulo-2 addition on
the shifted word values.
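For concreteness, the following is a minimal Python sketch of the round transformation just described (round constant injection, bit-sliced S-box, and linear diffusion), written from the public ASCON v1.2 specification; it is not the code produced by ChatGPT in the experiments below, and the variable names are illustrative.

MASK64 = 0xFFFFFFFFFFFFFFFF
ROUND_CONSTANTS = [0xf0, 0xe1, 0xd2, 0xc3, 0xb4, 0xa5,
                   0x96, 0x87, 0x78, 0x69, 0x5a, 0x4b]

def rotr(x, n):
    # 64-bit right rotation.
    return ((x >> n) | (x << (64 - n))) & MASK64

def ASCON_permutation(S, rounds=12):
    # S is the 320-bit state held as five 64-bit words; rounds is 12 (p^a) or 6 (p^b).
    for c in ROUND_CONSTANTS[-rounds:]:
        x0, x1, x2, x3, x4 = S
        x2 ^= c                                    # round constant injection
        # Substitution layer: the 5-bit S-box applied bit-sliced, 64 slices in parallel.
        x0 ^= x4; x4 ^= x3; x2 ^= x1
        t = [(~x0) & x1, (~x1) & x2, (~x2) & x3, (~x3) & x4, (~x4) & x0]
        x0 ^= t[1]; x1 ^= t[2]; x2 ^= t[3]; x3 ^= t[4]; x4 ^= t[0]
        x1 ^= x0; x0 ^= x4; x3 ^= x2; x2 = (~x2) & MASK64
        # Linear diffusion layer: rotate each word and XOR the shifted values back in.
        x0 ^= rotr(x0, 19) ^ rotr(x0, 28)
        x1 ^= rotr(x1, 61) ^ rotr(x1, 39)
        x2 ^= rotr(x2, 1) ^ rotr(x2, 6)
        x3 ^= rotr(x3, 10) ^ rotr(x3, 17)
        x4 ^= rotr(x4, 7) ^ rotr(x4, 41)
        S = [x0 & MASK64, x1 & MASK64, x2 & MASK64, x3 & MASK64, x4 & MASK64]
    return S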
The decryption process is slightly different than the encryption process.
Instead of plaintext processing, there is a ciphertext processing
function, where the plaintext blocks are computed by XORing the ciphertext
block.
ยง IMPLEMENTATION OF THE CRYPTOGRAPHIC STANDARD ASCON USING CHATGPT
The task of implementing cryptographic algorithms poses significant
challenges, primarily due to their complex designs and strong security
requirements. The continuous advancements of AI have opened a new
venue for implementing these algorithms, especially through ChatGPT.
This section is divided into two major parts. The first one provides
a detailed methodology for implementing a cryptographic algorithm
through ChatGPT, while the second one focuses on the actual implementation
of ASCON in Python using ChatGPT.
ยง.ยง Methodology Followed to Implement a Cryptographic Algorithm using
ChatGPT
The process, outlined in Fig. 3, is categorized into five stages,
each of which contributes to the successful implementation and testing
of ASCON-128.
Stage 1 - Algorithm Recognition: The initial stage of the
process is to identify the cryptographic algorithm, ASCON, and to
ensure ChatGPT is familiar with it. This step primarily involves the
determination of whether ChatGPT possesses a sufficient understanding
of the algorithm and its main functions and specifications. This understanding
includes a detailed knowledge of the algorithm's specifications
and parameter sets. If ChatGPT does possess a sufficient understanding
of the algorithm, the process advances to Stage 3; if not; it goes
through Stage 2 before moving to the third stage.
Stage 2 - Algorithm Education: If ChatGPT is unfamiliar with
the ASCON algorithm, the second stage involves educating ChatGPT about
the algorithm specifics. Using the capabilities of ChatGPT as a sophisticated
conversational model, the algorithm is taught through comprehensive
textual descriptions, mathematical explanations, and examples. The
detailed specifications of ASCON's functions are communicated
to ChatGPT during this step to ensure a thorough understanding and
prepare it for the following stages.
Stage 3 - Algorithm Implementation: In the third stage, we
request ChatGPT to translate the recognized or learned algorithm into
executable Python code. This step consists of asking ChatGPT for the
implementation of ASCON-128 and since it has memory limitations, it
is expected that several prompts may be required for the completion
of this step. Additionally, in this step, a test function from ChatGPT
is requested to validate the functionality of the implemented code.
This allows us to move to the fourth stage: The execution and testing
of the code given by ChatGPT.
Stage 4 - Code Execution and Testing: The fourth stage involves
executing the Python code and testing its functionality. This two-part
process initially includes basic testing using the test function provided
by ChatGPT. This function needs to compare the original plaintext
with the plaintext derived after decrypting the ciphertext and verifying
the generated tag. Subsequently, the outputs from our Python code
are compared with the results of the original ASCON-128 implementation
provided by its authors. In case of any discrepancies in the output
during either part of the testing, we proceed to the next stage.
Stage 5 - Code Review and Debugging: If there are any errors
or discrepancies, we examine each of the functions in the context
of ASCON-128's specifications. This involves ensuring
that the code matches the correct specifications of ASCON, as well
as adding test values in different functions to verify their correct
operation. The functions are debugged in the following order: The
initialization function, which should invoke the permutation function;
a process data function, which can operate for both plaintext and
ciphertext or can be divided into two functions; and the finalization
function. Since the process of encryption is carried out first and
then the decryption of the ciphertext takes place, these functions
are revised twice. Also, for simplicity, associated data processing
may be implemented once the version without associated data is operational.
The steps outlined above encompass the overall process of implementing
ASCON-128 in Python using ChatGPT. With a detailed explanation of
each stage, a meticulous approach is adopted to ensure the successful
implementation and verification of ASCON with the assistance of AI.
ยง.ยง Implementation of ASCON in Python Using ChatGPT
As mentioned above, the first step is querying ChatGPT regarding ASCON
and its specific characteristics. As demonstrated in Fig. 4, ChatGPT
accurately outputs ASCON's algorithm and primary operations, such
as initialization, associated data processing, plaintext processing,
finalization, decryption, and verification, using its knowledge base
up to September 2021. It also recognizes both versions ASCON-128 and
ASCON-128a and their associated parameters. However, it misses mentioning
the number of permutation rounds during the associated data and plaintext
processing phases for ASCON-128a, which uses 8 rounds instead of the 6 used in ASCON-128. Moreover, ChatGPT highlights the potential
risks of implementing cryptographic primitives from scratch, recommending
the use of well-vetted libraries.
Due to ChatGPT's understanding of ASCON, we progress to Stage 3 โ
the implementation of ASCON-128. When asked to provide the entire
code base with the functions shown in Fig. 4, ChatGPT decides to break
down the code, noting that presenting the complete implementation
would be rather extensive.
As pictured in Fig. 5, it initiates with the R function,
which is an abbreviation for rotation and is needed for the linear
diffusion layer of the ASCON_permutation function, also
provided in the same output. The ASCON_permutation function
contains the round constants and the operations executed during the
permutation process: round constant injection, substitution layer,
and linear diffusion layer. Post these implementations, ChatGPT outlines
the other functions needed. Additionally, it recommends again consulting
open-source projects or engaging with a cryptographic expert for guidance.
The next phase requires the implementation of functions to perform
initialization, associated data processing, plaintext processing,
and finalization, pertaining specifically to the ASCON-128 version.
As presented in Fig. 6, before integrating those functions, ChatGPT
suggests and implements the following helper functions:
- to_bytes function: This function converts integers to strings of bytes.
- from_bytes function: This function converts strings of bytes to integers.
- pad function: This function pads associated data and plaintext
to the correct size.
- xor_bytes function: This function XORs two byte strings of the same length.
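As an illustration of the four helpers listed above, a minimal Python sketch could look as follows; the exact signatures ChatGPT produced differ across the figures, so these are representative rather than verbatim.

def to_bytes(x, length=8):
    # Convert an integer to a big-endian byte string of the given length.
    return x.to_bytes(length, "big")

def from_bytes(b):
    # Convert a big-endian byte string to an integer.
    return int.from_bytes(b, "big")

def pad(data, rate=8):
    # 10* padding: append 0x80 and then zero bytes up to a multiple of the rate.
    zeros = rate - (len(data) % rate) - 1
    return data + b"\x80" + b"\x00" * zeros

def xor_bytes(a, b):
    # XOR two byte strings of the same length.
    return bytes(x ^ y for x, y in zip(a, b))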
Upon executing these functions and as shown in Fig. 7, ChatGPT progresses
to implement the initialize function, initiating the state
and running the initial permutation process. It follows up with the
process_associated_data function, processing each 8-byte
block of associated data via the permutation function and the process_plaintext
function that, akin to the previous function, processes each 8-byte
block of data through the permutation function to generate blocks
of ciphertext. The finalize function then generates the authentication
tag.
The concluding functions needed are to encrypt the plaintext, decrypt
the ciphertext, validate the resultant plaintext post-decryption against
the original plaintext, and verify the tag. Since the process_plaintext
function closely resembles the process_ciphertext function,
ChatGPT adapts the process_plaintext into process_data
to account for both encryption and decryption. This is shown in Figs.
8 and 9, along with the implementation of the encrypt function,
which invokes the initialize function, process_associated_data,
process_data, and finalize function to yield the
ciphertext and the tag; the decrypt function, which invokes
the same functions as encrypt to verify the tag and yield
the plaintext; and the test function, which ensures the cryptosystem
functions as expected by invoking the encrypt and decrypt
functions and using assertions.
After the provision of the entire code, we advance to Stage 4 โ
testing. Upon execution, the test fails. Given the multitude of issues
identified in the code, we opt to work first on a simplified version
without associated data and, once the simplified version works as
intended, incorporate the associated data part. As outlined earlier,
each function is tested independently by incorporating print statements
to validate the function outputs. If incorrect, we evaluate if the
code aligns with the ASCON specifications. For the sake of brevity,
some parts will be cropped as many interactions are needed to implement
the entire ASCON cryptosystem.
The initialize function, invoking the ASCON_permutation
function, is the first to be tested. The main issues with the ASCON_permutation
function are:
- Incorrect constant values.
- Substitution layer not following the specifications. Even after
providing ChatGPT with the specifications, it remains incorrect because it does not account for the need to perform bitwise operations in an unsigned 64-bit integer context.
- The R function, which is invoked in the linear diffusion layer, performs left rotation instead of right rotation.
- States are not being XORed in the linear diffusion layer.
- The number of rounds is always 12 but it should be 12 or 6 depending
on the process.
Moreover, the main issues with the initialize function are:
- Incorrect initial state values and structure.
- Helper to_bytes function can be modified to always convert
to 8 bytes.
- When calling the ASCON_permutation function, it should
provide the number of rounds.
- After the permutation, the internal state is not being XORed with
0^* || K.
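After these corrections, the initialization for ASCON-128 can be summarized by the following sketch, reusing the from_bytes helper and ASCON_permutation sketched earlier; it reflects the specification rather than ChatGPT's exact output, and assumes the state is held as five 64-bit integers.

ASCON_128_IV = 0x80400c0600000000  # encodes k=128, r=64, a=12, b=6

def initialize(key, nonce):
    # State = IV || K || N, as five 64-bit words.
    S = [ASCON_128_IV,
         from_bytes(key[:8]), from_bytes(key[8:]),
         from_bytes(nonce[:8]), from_bytes(nonce[8:])]
    S = ASCON_permutation(S, rounds=12)
    # XOR the internal state with 0^* || K.
    S[3] ^= from_bytes(key[:8])
    S[4] ^= from_bytes(key[8:])
    return S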
The majority of the corrective prompts to accomplish the right implementation
of both permutation and initialize functions are
depicted in Fig. 10 (located in the Appendix). However, despite the
rectifications, the test still fails, indicating the need to revise
the next major function, the process_data function.
While we initially focus on the encryption process to validate the
implementation's production of the correct ciphertext, we also identify
the main issues in the process_data function for both encrypting
and decrypting:
- Non-existent padding, which does not allow to accommodate input
data of arbitrary length.
- Incorrect XOR operations.
- Missing XORing in the decryption process. Moreover, it should perform
an additional XOR operation with 1 || 0 for the last block.
- Improper trimming of padding.
- Extra permutation for the last block.
Fig. 11 (located in Appendix) displays majority of the prompts needed
to correct the process_data function.
Post validation of the process_data function, we transition
to the finalize function. This function has a single line;
therefore, many additions are required:
- XORing the internal state with 0^r || K || 0^c-k.
- Executing a 12-round permutation.
- XORing the permutation result with the key to produce a 16-byte
tag.
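Put together, a corrected finalize for ASCON-128 can be sketched as follows, again assuming five 64-bit state words and the helpers from the earlier sketches; this mirrors the specification, not ChatGPT's literal output.

def finalize(S, key):
    # XOR the state with 0^r || K || 0^(c-k): with r = 64, the key lands in words 1 and 2.
    k_hi, k_lo = from_bytes(key[:8]), from_bytes(key[8:])
    S[1] ^= k_hi
    S[2] ^= k_lo
    # Execute a 12-round permutation.
    S = ASCON_permutation(S, rounds=12)
    # XOR the result with the key to produce the 16-byte tag.
    tag = to_bytes(S[3] ^ k_hi) + to_bytes(S[4] ^ k_lo)
    return S, tag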
As depicted in Fig. 12 (located in Appendix), most of the prompts
educate ChatGPT about the specifications, rather than indicating the
errors. The encrypt and decrypt are slightly modified
from the initial version by ChatGPT, taking into account the lack
of associated data. Post validation of the simplified implementation,
the first draft of the functions from Listing 3 only needs to account
for the domain separator S[4] =0x1, which should
be executed after the initialization. The process_associated_data
function, provided by ChatGPT after the simplified version worked,
only contained a single error: the absence of a permutation after
the final block. The final code can be found at: https://github.com/DrCintas/ASCON-with-ChatGPT.
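A sketch of the corrected associated-data processing, including the permutation after the final block and the domain separator, is given below; as before, it assumes the helpers defined in the earlier sketches and follows the specification rather than reproducing ChatGPT's exact code.

def process_associated_data(S, associated_data, rate=8):
    if associated_data:
        padded = pad(associated_data, rate)
        for i in range(0, len(padded), rate):
            S[0] ^= from_bytes(padded[i:i + rate])   # absorb one block into the rate word
            S = ASCON_permutation(S, rounds=6)       # permutation after every block, incl. the last
    # Domain separation: XOR 1 into the last state word.
    S[4] ^= 1
    return S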
ยง CONCLUSION
To the best of our knowledge, this is the first work implementing the fairly recent NIST standard (Feb. 2023) using ChatGPT.
As it has been shown throughout this paper, the results prove that
the implementation of a NIST standard lightweight cryptographic algorithm
using its specifications and ChatGPT is feasible. By using the methodology proposed in this paper, any individual with a deep understanding of the specifications can implement cryptographic algorithms through several ChatGPT prompts. There is no need to explain the fundamentals of ASCON to ChatGPT, as it recognizes the cryptographic
algorithm and its main functions. However, prompting with deeper mathematical specifications is needed, as the first code provided does not produce the intended results. Although it is feasible not to supply any code in the prompts, as we have done, it is sometimes easier to point ChatGPT to the exact line of code that is wrong rather than specifying the details of the entire function, since ChatGPT may otherwise modify parts that are already correct.
The resulting code from the ChatGPT interactions does not have any
errors and it is able to match all the test vectors when compared
to the original implementation. However, this does not mean that one
should implement their own cryptosystem and rely on ChatGPT to do
so. Furthermore, ChatGPT reminds the users of this and recommends
them to consult open-source projects or engage with a cryptographic
expert for guidance.
ยง ACKNOWLEDGEMENTS
This work was supported by the U.S. federal agency award 60NANB20D013
granted from National Institute of Standards and Technology (NIST).
[1] C. Dobraunig, M. Eichlseder, F. Mendel, and M. Schläffer, "ASCON v1.2: Lightweight Authenticated Encryption and Hashing," Journal of Cryptology, vol. 34, pp. 1-42, 2021.
[2] NIST, "Finalists," Computer Security Division, Information Technology Laboratory, 2021. Available [Online]: https://csrc.nist.gov/Projects/lightweight-cryptography/finalists.
[3] NIST, "Lightweight Cryptography Standardization Process: NIST Selects Ascon," Computer Security Division, Information Technology Laboratory, 2023. Available [Online]: https://csrc.nist.gov/News/2023/lightweight-cryptography-nist-selects-ascon.
[4] M. Aljanabi, M. Ghazi, A. H. Ali, and S. A. Abed, "ChatGPT: Open possibilities," Iraqi Journal For Computer Science and Mathematics, vol. 4, no. 1, pp. 62-64, 2023.
[5] H. Kwon, M. Sim, G. Song, M. Lee, and H. Seo, "Novel approach to cryptography implementation using ChatGPT," Cryptology ePrint Archive, Report 2023/606, 2023.
The following figures contain the prompts used to correct the implementation of ASCON in Python using ChatGPT.
|
http://arxiv.org/abs/2306.11662v2
|
20230620163256
|
Expressive Machine Dubbing Through Phrase-level Cross-lingual Prosody Transfer
|
[
"Jakub Swiatkowski",
"Duo Wang",
"Mikolaj Babianski",
"Giuseppe Coccia",
"Patrick Lumban Tobing",
"Ravichander Vipperla",
"Viacheslav Klimkov",
"Vincent Pollet"
] |
eess.AS
|
[
"eess.AS"
] |
Expressive Machine Dubbing Through Phrase-level Cross-lingual Prosody Transfer
Jakub Swiatkowski, Duo Wang, Mikolaj Babianski, Giuseppe Coccia, Patrick Lumban Tobing, Ravichander Vipperla, Viacheslav Klimkov, Vincent Pollet
July 31, 2023
==========================================
Speech generation for machine dubbing adds complexity to conventional Text-To-Speech solutions as the generated output is required to match the expressiveness, emotion and speaking rate of the source content.
Capturing and transferring details and variations in prosody is a challenge. We introduce phrase-level cross-lingual prosody transfer for expressive multi-lingual machine dubbing. The proposed phrase-level prosody transfer delivers a significant 6.2% MUSHRA score increase over a baseline with utterance-level global prosody transfer, thereby closing the gap between the baseline and expressive human dubbing by 23.2%, while preserving intelligibility of the synthesised speech.
Index Terms: speech synthesis, cross-lingual, prosody transfer, multi-lingual, end-to-end, machine dubbing
ยง INTRODUCTION
Prosody transfer is the ability to transfer speaking style variations and vocal performances disentangled from the spoken content and speaker identity <cit.>. Many recently proposed methods utilize global-level prosody transfer. A single embedding per utterance is used to capture prosody and to condition the models to generate speech with the target prosody. These global embeddings are either explicitly learned from ground truth labels such as emotions <cit.> or implicitly learned from a reference audio signal using a reference prosody encoder <cit.>, or a combination of both <cit.>.
In this work, we focus on prosody transfer for cross-lingual machine dubbing. We aim to generate speech for a translated text in a target language with expressiveness and emotion of speech from multimedia content in source language.
Exploration of cross-lingual prosody transfer is scarce.
<cit.> studied cross-lingual style transfer based on categorical labels, but this limits transfer of a wide range of expressions and emotions present in multimedia content. Until recently, existing work on machine dubbing has generated speech from translated text only, without transferring prosodyย <cit.>. To our best knowledge, VIPT (Variational Inference for Prosody Transfer)ย <cit.> is the only known work tackling cross-lingual prosody transfer for machine dubbing. VIPT introduces learning of cross-lingual prosody transfer without parallel datasets using a VITS-based systemย <cit.> with a global reference encoder to capture vocal performance. One limitation in using a global reference embedding is that only utterance-level prosody variations are captured, while detailed local prosody variations
cannot be properly encoded. This potentially impacts the transfer of prosody for generating long-form utterances. Transferring local prosody variations such as syntactic phrasing, topic emphasis and marked tonicity is important for expressive machine dubbingย <cit.>.
In this work, we tackle the aforementioned drawback by exploring more fine-grained cross-lingual prosody transfer. Intra-lingual fine-grained prosody control has been exploredย <cit.> by training predictors of prosody components at phoneme or at word level. However, prosody transfer across different utterances at word level suffers due to word mismatch. This applies as well to cross-lingual prosody transfer for machine dubbing, as it is unlikely to have one-to-one alignment between words in the original and translated text. Several recent worksย <cit.> have proposed machine translation techniques for machine dubbing allowing to generate monotonic alignments between translated texts at the level of a prosodic phrase, where a prosodic phrase is defined as a continuous segment of speech separated by silence regions. Therefore, in this work we explore phrase-level cross-lingual prosody transfer for machine dubbing.
Prosody delivery varies across languages, however the prosody of speech expressing the same emotions is correlated in related languages, as discussed in Section 4.6 inย <cit.>. In our work, we explore these cross-lingual correlations for the purpose of prosody transfer. Our study is limited to European languages comprising English, German, French, Italian and Spanish, and focused on English-Spanish prosody transfer as a common dubbing language pair. We anticipate that more distant language pairs such as English-Japanese exhibit less correlated prosody features.
Our solution follows VIPTย <cit.> in combining a VAE (Variational Auto-Encoder) prosody encoder with VITS and trains on multimedia data without mining of parallel utterances with matching text across different locales. We propose to capture and to transfer phrase-level variations of prosody in a cross-lingual setting. To achieve that, we have devised a new phrase-level reference encoder that learns to condition the phrases of the input text with prosody embeddings extracted from corresponding parts of the reference speech waveform. We have also introduced a novel regularization applied on prosody embeddings based on phrase length, to reduce content leakage from short phrases. We discuss the details in Sectionย <ref>.
We evaluate our proposed method with both MUSHRAย <cit.> subjective perceptual test and objective metrics including Word Error Rate (WER) and conditional Frรฉchet DeepSpeech Distance (cFDSD)ย <cit.>. We compare our method against VIPTย <cit.>, a strong baseline for cross-lingual performance transfer. Both subjective and objective metrics suggest that our method improves expressiveness without compromising on intelligibility. We also show the importance of phrase-level conditioning in training, by comparing against a VIPT variant trained with global-level conditioning, but transferring prosody at phrase-level during inference. We demonstrate a significant 6.2% MUSHRA score increase over VIPT, which closes the gap between machine dubbing and expressive human dubbing by 23.2%. To summarize, our contributions are:
* We present a new method capable of cross-lingual phrase-level prosody transfer for expressive multi-lingual machine dubbing. Robust and more fine-grained transfer compared to global-level prosody transfer improves the quality.
* We propose a length-based regularization method for fine-grained prosody representations.
ยง METHOD
This work extends the VIPTย <cit.> architecture by enabling modelling and cross-lingual transfer of prosody at phrase level. Figureย <ref> provides an overview of the proposed method. This section summarizes the baseline VIPT architecture and describes the extensions that enable the modelling and cross-lingual prosody transfer at the phrase level.
ยง.ยง VIPT architecture
The VIPT architecture is based on conditional variational autoencoder with adversarial learning for end-to-end text-to-speech (VITS)ย <cit.>. VIPT makes a number of architectural changes to VITS. Most notably, VIPT combines VITS with an audio reference encoder that enables cross-lingual prosody transfer at global level. Consequently, VIPT is able to learn the cross-lingual prosody transfer from non-parallel data. This is possible as VIPT learns prosody representations that are agnostic to speakers and languages. Such representations can be transplanted from a reference audio in source language spoken by a source speaker to generate speech in the target language with the voice characteristics of a target speaker.
Furthermore, VIPT proposes a noise modelling variant of the reference encoder that allows clean speech synthesis even when training with noisy data and transferring prosody from noisy reference audio, which is common in multimedia data recorded in the field.
The noise modelling variant consists of two separate reference encoders for denoised and noise streams obtained from the reference audio using a denoiser component <cit.>. The reference encoders produce global-level denoised prosody and noise embeddings. These embeddings are then concatenated and broadcasted to phoneme-level in a prior encoder module. The VIPT architecture is analogous to ours shown in Figureย <ref>, but with a single global-level embedding per reference encoder instead of phrase-level embeddings.
Finally, VIPT adapts a number of additional changes from the literature to the base VITS model. First, VIPT replaces VITS's monotonic alignment search algorithm with an explicit duration predictor and extends the prior encoder module with a frame prior network as in <cit.>. Second, VIPT adds speaker embeddings and language embeddings as in <cit.>. Lastly, VIPT replaces HiFiGAN decoder <cit.> with a BigVGAN-based decoder <cit.>. We keep these components as in VIPT.
ยง.ยง Phrase-level cross-lingual prosody transfer
We extend VIPT by enabling cross-lingual transfer of prosody at phrase level. The global-level reference encoder in VIPT may not sufficiently transfer the prosody variations present in expressive multimedia speech, especially when a dialogue line consists of several short phrases. Capturing prosody variations consequently requires more fine-grained representations. Word-level prosody representations are challenging to transfer in a cross-lingual setting due to lack of monotonic word correspondence between translated texts.
Instead, we propose prosodic phrases as a level of granularity for cross-lingual prosody transfer.
We show experimentally that prosodic phrases are able to capture local variations in prosody which can be robustly transferred between speech in different languages.
At the same time, prosodic phrases can be automatically aligned across translated texts using recently developed prosodic alignment techniques for machine dubbingย <cit.>.
ยง.ยง.ยง Phrase-level reference encoder
We adopt the definition of prosodic phrases fromย <cit.> as continuous speech segments separated by silences.
The silences are extracted by force aligning reference audio and text using an external aligner, such as the Gaussian Mixture Model (GMM) based Kaldi Speech Recognition Toolkitย <cit.> used in our experiments.
We treat the silence regions as part of preceding speech phrases. Each speech phrase is encoded into a single prosody embedding using a reference encoder described below. See Figureย <ref> for an illustration of this approach.
We propose a phrase-level reference encoder architecture that extracts frame-level embeddings from a linear spectrogram and downsamples the frame-level embeddings to the phrase-level.
We base our reference encoder architecture on that from VIPT, but with changes to keep one-to-one frame-embedding correspondence before downsampling to the phrase-level. Namely, our architecture consists of five convolutional layers with channel size of 512, a kernel size of three and stride of one, followed by one bi-directional LSTM layer with channel size of 512. The frame-level outputs of the bi-directional LSTM are then downsampled by selecting the middle embedding per phrase. We experimented with other forms of downsampling (e.g. mean of frames per phrase), but did not observe significant differences. The phrase-level embeddings are then further processed by a fully connected layer that outputs a parameterization of a 32-dimensional diagonal Gaussian distribution, which is regularized using Kullback-Leibler Divergence (KLD) with a standard Gaussian ๐ฉ(0, I). The final phrase embeddings are sampled from this Gaussian.
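The following PyTorch sketch illustrates the architecture just described (five convolutions, a bi-directional LSTM, middle-frame downsampling per phrase, and a Gaussian projection). Layer sizes, the per-direction LSTM width, and the phrase-boundary interface are assumptions for illustration and may differ from the actual implementation.

import torch
import torch.nn as nn

class PhraseReferenceEncoder(nn.Module):
    """Frame-level encoder downsampled to one prosody embedding per prosodic phrase."""

    def __init__(self, spec_dim=513, hidden=512, z_dim=32):
        super().__init__()
        layers, in_ch = [], spec_dim
        for _ in range(5):
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=3, stride=1, padding=1), nn.ReLU()]
            in_ch = hidden
        self.convs = nn.Sequential(*layers)
        self.lstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.to_gaussian = nn.Linear(hidden, 2 * z_dim)   # mean and log-variance

    def forward(self, spec, phrase_bounds):
        # spec: [B, T, spec_dim] linear spectrogram; phrase_bounds: list of (start, end) frames.
        h = self.convs(spec.transpose(1, 2)).transpose(1, 2)   # [B, T, hidden]
        h, _ = self.lstm(h)                                    # one output per frame
        mids = torch.tensor([(s + e) // 2 for s, e in phrase_bounds], device=spec.device)
        phrase_h = h[:, mids]                                  # middle frame per phrase: [B, K, hidden]
        mean, logvar = self.to_gaussian(phrase_h).chunk(2, dim=-1)
        z = mean + torch.randn_like(mean) * torch.exp(0.5 * logvar)   # reparameterized sample
        return z, mean, logvar

Selecting the middle frame is one simple downsampling choice; as noted above, mean-pooling the frames of each phrase behaved similarly in our experiments.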
ยง.ยง.ยง Length-based regularization
We propose a length-based regularization of phrase-level prosody embeddings to reduce content leakage when transplanting prosody embeddings across languages.
As we have observed content leakage for short phrases, we added a length-based regularization as following:
\mathcal{L}_{\mathrm{KLD}_\beta} = \frac{1}{K} \sum_{k \in [1, K]} e^{-\beta L_k} \, \mathrm{KLD}\big(h_k, \mathcal{N}(0, I)\big)
where K is the total number of phrases in one utterance, L_k is the length of phrase k defined as its number of phonemes, β is a constant hyper-parameter controlling how much L_k affects the scaling factor e^{-β L_k}, and h_k is the prosody embedding distribution of the k-th phrase.
This formulation applies stronger regularization to the embeddings of short phrases, thus preventing them from carrying content information that should only be obtained from the text prior. We show experimentally that the proposed regularization improves the intelligibility of synthesised speech for short phrases in Tableย <ref>.
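In code, the length-scaled KLD term can be sketched as below, using the closed-form KL divergence between a diagonal Gaussian and N(0, I); the tensor shapes are assumptions for illustration.

import torch

def length_scaled_kld(mean, logvar, phrase_lengths, beta=0.08):
    # mean, logvar: [K, z_dim] Gaussian parameters of the K phrase embeddings.
    # phrase_lengths: [K] number of phonemes in each phrase.
    kld = 0.5 * torch.sum(mean.pow(2) + logvar.exp() - 1.0 - logvar, dim=-1)   # KL per phrase
    scale = torch.exp(-beta * phrase_lengths.float())                          # e^{-beta * L_k}
    return torch.mean(scale * kld)                                             # average over phrases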
ยง.ยง.ยง Noise modelling at phrase-level
We apply the noise modelling approach from VIPT in the context of the phrase-level reference encoder. Specifically, the reference audio is passed to a denoising componentย <cit.> to extract denoised and noise streams. The two streams are then used as inputs to two separate phrase-level reference encoders with the architecture described in subsectionย <ref>. The reference encoders output phrase-level denoised prosody and noise embeddings that are concatenated, upsampled to the phoneme level and used as conditioning in the text encoder.
During inference, we extract a clean noise embedding from a static clean audio similarly to VIPT. However, we make sure to use a clean audio, which contains exactly one phrase, so that it is feasible to upsample this single clean noise embedding to match the number of the per-phrase prosody embeddings extracted from the denoised reference audio.
ยง.ยง.ยง Alignment of phrase-level audio reference embeddings to target text phonemes
We aim to transfer prosody from phrases of reference speech to corresponding phrases in the translated target text to be synthesised.
More precisely, we concatenate phrase embeddings extracted from the reference audio with encoded phonemes corresponding to a given phrase in the target text. To achieve this, we need a mapping between reference audio phrases and target text phonemes. See the illustration in Figureย <ref>.
During training, the reference audio and the text to be synthesised correspond to each other. This allows us to force align the audio and the text phonemes to compute frame-phoneme correspondences.
During inference, when performing cross-lingual prosody transfer, the reference audio contains speech in a language different from the translated text to be synthesised. Therefore, in such case we cannot align the audio and the text as in training. Instead, we need to insert phrase breaks into the translated text to obtain a monotonic one-to-one alignment between the phrases in the audio and the text.
Such alignments can be automatically generated using recently developed prosodic alignment techniques for machine dubbingย <cit.>. However, the focus of this work is on evaluating the quality of prosody transfer, and thus we assume the prosodic alignment is given.
ยง EXPERIMENTS
ยง.ยง Training setup
Our training setup largely follows that of VIPTย <cit.> by updating the KLD regularization for prosody โ_ProsodyKLD and noise โ_NoiseKLD reference encoders with the phrase-level formulation and the length-based scaling coefficients ฮฒ described in Sectionย <ref>. The final loss can be formulated as:
\mathcal{L} = \mathcal{L}_{\mathrm{VITS}} + \alpha_1 \mathcal{L}_{\mathrm{ProsodyKLD}_{\beta_1}} + \alpha_2 \mathcal{L}_{\mathrm{NoiseKLD}_{\beta_2}}
where โ_VITS represents the VITS loss terms with replaced adversarial components as in BigVGANย <cit.>.
We performed nine runs of hyperparameter search and set the KLD loss coefficient ฮฑ and the length-based KLD loss scaling coefficient ฮฒ for both the prosody and the noise reference encoder as ฮฑ_1=ฮฑ_2=0.04 (other tested values: 0.02 and 0.08) and ฮฒ_1=ฮฒ_2=0.08 (other tested values: 0.02 and 0.04) respectively. We trained using mixed precision on 8 NVIDIA V100 GPUs, with a batch size of 30 per GPU, and used AdamW optimizerย <cit.>. We trained the model for 600 epochs.
The generative part of our proposed model and discriminators have 100 million and 47 million parameters respectively.
ยง.ยง Data
We used an internal multimedia dataset from which we extract multi-speaker multi-lingual dialogues resulting in 598 hours of speech from 134 female and 162 male speakers in 5 different locales; namely US English, Castilian Spanish, French, German and Italian. Speaker age groups range from children to elderly.
The speech data is resampled to 24 kHz and normalized in terms of loudness. Silences longer than 2 seconds are trimmed. We split the dialogues into separate phrases based on silences of at least 50 milliseconds.
ยง.ยง Evaluated systems
We evaluated the proposed method against human Spanish dubs and two baseline models. We denote our method as Variational Inference for Prosody Transfer with Noise Modelling and Phrase-level Variational Auto-Encoder (VIPT-NM-PVAE). VIPT-NM-GVAE is a strong baseline for cross-lingual performance transfer with global-level reference encoder (corresponding to VIPT-NM-Transfer model from <cit.>). Additionally, we introduce a second baseline named VIPT-NM-GVAE-PP, which uses the same model architecture as VIPT-NM-GVAE during training, but at inference time it computes prosody embeddings per phrase (PP). Namely, during inference, the VIPT-NM-GVAE-PP model passes parts of the source audio corresponding to each of the K phrases separately through the global-level reference encoder to extract the K phrase-level embeddings. We include this baseline to evaluate the importance of training phrase-level embeddings.
ยง.ยง Subjective Evaluation
For the evaluation of cross-lingual prosody transfer, we performed a MUSHRA test on a held-out subset of 100 parallel utterances between US English and Castilian Spanish.
To provide testers context for the assessment of prosody match, all audio samples were overlaid on the corresponding videos. 25 bi-lingual test subjects native in Castilian Spanish and fluent in English were presented with the video samples in a random order side-by-side. The test subjects were tasked to "Rate the vocal performance in the Spanish video dubbing samples with respect to the English reference video". Each test case was scored by all 25 testers independently.
Evaluation results are summarized in Figureย <ref> and show that VIPT-NM-PVAE achieved a statistically significant 6.2% MUSHRA score increase over VIPT-NM-GVAE baseline system, which closes the gap to human dubbing by 23.2%.
Inspection of evaluators' ratings suggests that the improvement in VIPT-NM-PVAE results from increased expressiveness of generated speech and more accurate prosody transfer. There is a significant difference in MUSHRA scores between VIPT-NM-PVAE and both baseline models for multi-phrase utterances, and for single-phrase utterances (Table <ref>). We hypothesize that training with the proposed phrase-level reference encoder may lead to increased sensitivity to the prosody embedding.
ยง.ยง Objective Metrics
To quantify stability of tested systems and intelligibility of synthesised speech we conducted Word Error Rate (WER) analysis. The results are reported in Table <ref> for a held-out test set of 1200 parallel utterances. First, all generated audio files were transcribed with a Whisper Large <cit.> ASR model. Then, WER scores were computed between sentence texts and corresponding transcriptions. We have observed no significant stability issues with the VIPT-NM-PVAE model, which backs up our conclusion that phrase-level modelling allows for more expressive and accurate cross-lingual prosody transfer without compromising intelligibility.
For all tested systems we also computed the conditional Frรฉchet DeepSpeech Distance (cFDSD) <cit.>, an objective metric measuring the quality of synthesised audio samples based on their distance to a reference set. We closely follow <cit.> in our implementation of the cFDSD metric, only differing in using XLSR-53 Large <cit.> as a backbone network, which was trained on multi-lingual speech data. All tested systems are compared to human Spanish dubs. We observe that VIPT-NM-PVAE has a significantly lower distance to the human dubs (Table <ref>) compared to all other models. This result is inline with the MUSHRA subjective evaluation scores.
Finally, as an ablation study, we trained a VIPT-NM-PVAE model without the length-based regularization described in Section <ref>. This resulted in a significant WER increase for short utterances (Table <ref>), while at the same time cFDSD distance to recordings also increased. We conclude that applying regularization dependent on phrase lengths is crucial to find a good balance between expressivity and stability of our system.
ยง CONCLUSIONS
We have presented a novel solution that enables phrase-level cross-lingual, cross-speaker prosody transfer for expressive machine dubbing. The proposed method can learn to model prosody information at phrase-level, and transfer the phrase prosody embeddings from a source to a target language for translated text. In subjective evaluations, our system outperforms a strong baseline that transfers prosody at global-level. In future work, we plan to extend our evaluation to include wider range of languages and further close the gap between synthesised speech and expressive human dialogues by exploring duration modelling, hierarchical prosody modelling and the usage of parallel data.
|
http://arxiv.org/abs/2306.05788v1
|
20230609100249
|
Cross-Consensus Measurement of Individual-level Decentralization in Blockchains
|
[
"Chao Li",
"Balaji Palanisamy",
"Runhua Xu",
"Li Duan"
] |
cs.CR
|
[
"cs.CR"
] |
Cross-Consensus Measurement of Individual-level Decentralization in Blockchains
Chaoย Li1,
Balajiย Palanisamy2,
Runhuaย Xu3 and
Liย Duan1
1School of Computer and Information Technology, Beijing Jiaotong University, China
2School of Computing and Information, University of Pittsburgh, USA
3School of Computer Science and Engineering, Beihang University, China
[email protected], [email protected], [email protected], [email protected]
Received XX; accepted XX, 2023
=============================================================================================================================================================================================================================================================================================================================================================================================================================
Decentralization is widely recognized as a crucial characteristic of blockchains that enables them to resist malicious attacks such as the 51% attack and the takeover attack.
Prior research has primarily examined decentralization in blockchains employing the same consensus protocol or at the level of block producers.
This paper presents the first individual-level measurement study comparing the decentralization of blockchains employing different consensus protocols.
To facilitate cross-consensus evaluation, we present a two-level comparison framework and a new metric.
We apply the proposed methods to Ethereum and Steem, two representative blockchains for which decentralization has garnered considerable interest.
Our findings dive deeper into the level of decentralization, suggest the existence of centralization risk at the individual level in Steem, and provide novel insights into the cross-consensus comparison of decentralization in blockchains.
Delegated-Proof-of-Stake (DPoS) blockchains have demonstrated how to distribute authority and power across the network for social web applications, where traditional Proof-of-Work (PoW) blockchains experience the scalability conundrum.
However, the limited number of delegates in DPoS blockchains has given rise to centralization risk concerns.
Previous discussions and studies mostly focus on the elected delegates, neglecting the underlying stake distribution across the entire shareholder community.
In this paper, we present the first large-scale longitudinal study of the decentralization in Steem, a prominent DPoS blockchain that has served over one million social media users since 2016.
We analyze the decentralization at two different levels, the pool level where elected delegates produce blocks, and the deeper individual level where Steem coin holders cast stake-weighted votes to elect delegates.
We further characterize coin holders on their loyalty and independence and analyze how these characteristics may affect decentralization.
The results are benchmarked against the mining power decentralization in Ethereum, a leading PoW blockchain.
Our study concludes that, compared with Ethereum, Steem tends to be more decentralized at the pool level but less decentralized at the individual level.
Our results dive deeper into the level of decentralization DPoS blockchain achieves on the web, corroborate the existence of centralization risk and provide novel insights into the comparison of decentralization across DPoS and PoW blockchains.
Blockchain, Decentralization, PoW, DPoS, Measurement Study.
ยง INTRODUCTION
Advances in blockchain technologies are fueling the emergence of various consensus protocols, including the Proof-of-Work (PoW) consensus protocolย <cit.>
and the Delegated Proof-of-Stake (DPoS) consensus protocolย <cit.>.
The PoW consensus protocol requires network-wide consensus to be reached across thousands of nodes, making it challenging to support high transaction throughput.
In contrast, the DPoS consensus protocol only requires a small committee of dozens of members to establish consensus, allowing DPoS blockchains to suit the practical needs of a broad range of decentralized applications.
In a DPoS blockchain, committee members generate blocks in rotation and jointly make decisions such as updating global parameters and restricting specific accounts.
Periodically, committee members are elected by coin holders who acquire and hold blockchain-issued coins (i.e., cryptocurrencies) and vote for their chosen candidates.
While the effectiveness of the underlying coin-based voting system has spawned successful DPoS blockchains such as Steemย <cit.> and EOSIOย <cit.>, the limited number of committee members in DPoS blockchains has raised concerns regarding centralization risk.
Decentralization is widely recognized as a crucial characteristic of blockchains that enables them to resist malicious attacks such as the 51% attackย <cit.> for PoW blockchains and the takeover attackย <cit.> for DPoS blockchains.
For most public blockchains, the distribution of resources that determine who generates blocks is the key metric for evaluating decentralizationย <cit.>. This in turn facilitates further understanding of both security and scalability in blockchainย <cit.>.
Intuitively, having only a few parties possess the majority of the resources suggests a more centralized control of a blockchain, which is potentially less secure.
This is because the collusion of these few parties could jeopardize the immutability of blockchains by enabling the falsification of historical data that should have been confirmed.
Over the past few years,
there has been an ongoing debate in the blockchain community with regard to the degree of decentralization in DPoS and PoW blockchains, resulting in two opposite positions:
* Position I (PoW is more decentralizedย <cit.>):
Theoretically, compared to the vast number of miners in PoW blockchains, the extremely small number of committee members in DPoS blockchains indicates a lower degree of decentralization.
* Position II (DPoS is more decentralizedย <cit.>):
In practice, recent research has demonstrated that Bitcoin and Ethereum have a tendency towards centralizationย <cit.>, therefore the design of rotating block production across committee members makes DPoS blockchains less centralized than the current PoW blockchains, which are dominated by a few mining pools.
In this paper, we present a large-scale longitudinal measurement study comparing the decentralization of PoW and DPoS blockchains.
Our goals are twofold:
to establish a finer-grained framework for cross-consensus comparisons of decentralization in blockchains,
and to develop a common methodology for gaining a deeper understanding of decentralization in DPoS blockchains.
First, we notice that existing comparisons of decentralization between DPoS and PoW lack a multi-level framework capable of differentiating measurement granularity.
For instance, Position II compares the committee-level decentralization in DPoS with the pool-level decentralization in PoW.
In contrast, Position I compares the same committee-level decentralization in DPoS with the finer-grained pool-participant-level decentralization in PoW.
In this paper, to facilitate cross-consensus evaluation, we propose a two-level comparison framework for decentralization between DPoS and PoW blockchains that can distinguish and match measurements of different granularities.
Concretely, the proposed framework divides decentralization in DPoS into committee level and coin-holder level, and decentralization in PoW into pool level and pool-participant level.
The framework then benchmarks the committee level and coin-holder level in DPoS against the pool level and pool-participant level in PoW, respectively.
This is due to the fact that both DPoS and PoW blockchains are witnessing the pooling of resources from individuals (coin holders and pool participants) to representatives (committee members and mining pools), leading to two different levels of resource concentration.
Second, existing studies on blockchain decentralization mainly focus on the decentralization of mining power in PoW blockchainsย <cit.>.
Among them, the state-of-the-art study suggests that decentralization assessed at a deeper level (i.e., across participants of mining pools) could be substantially distinct from that measured across mining poolsย <cit.>.
Inspired by prior research, we argue that both of the two opposite positions (I&II) mainly focus on decentralization at the level of representatives, namely mining pools in PoW and committee members in DPoS.
Neither of these positions goes deeper into the finer-grained level of individuals.
To obtain a more comprehensive understanding of decentralization in blockchains, our analysis is performed at two different levels, the representative level, where committee members in DPoS and mining pools in PoW produce blocks, and the deeper individual level, where coin holders contribute voting power to committee members in DPoS and pool participants contribute mining power to mining pools in PoW.
We apply the proposed framework and methodologies to Ethereum[The Ethereum community has recently launched Ethereum 2.0 that adopts Proof-of-stake (PoS) in Sep. 2022.
Following recent works on the measurement of decentralization in Ethereum, this paper focuses on a four-year period of Ethereum 1.0, which employs PoW. In the rest of this paper, unless explicitly stated, we use Ethereum to refer to Ethereum 1.0.] and Steem, two representative blockchains for which decentralization has garnered considerable interest <cit.>.
In Ethereum, inspired by prior research, we estimate the mining power possessed by a pool participant based on the amount of reward the participant received from mining pools.
In Steem, we reconstruct the historical election snapshots on a daily basis for a period of four years.
Our study suggests that, based on the proposed methods, Steem tends to be less decentralized than Ethereum at the individual level and more vulnerable to malicious attacks such as takeovers.
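For illustration, the reward-based estimation of individual mining power can be sketched as follows; the data layout is an assumption made for this sketch, since the actual payout-extraction pipeline is described in the data collection section.

from collections import defaultdict

def estimate_mining_power_shares(payouts):
    # payouts: iterable of (participant_address, reward_amount) pairs parsed from
    # mining-pool payout transactions over the measurement window.
    # Assumption (following prior research): rewards are roughly proportional to the
    # mining power each participant contributed.
    totals = defaultdict(float)
    for address, reward in payouts:
        totals[address] += reward
    grand_total = sum(totals.values())
    return {addr: amount / grand_total for addr, amount in totals.items()}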
Contributions
In a nutshell, this paper makes the following key contributions:
* To the best of our knowledge, our work presents the first individual-level measurement study comparing the decentralization of PoW and DPoS blockchains.
* Our work dives deeper into the level of decentralization, suggests the existence of centralization risk at the individual level in Steem, and provides novel insights into the comparison of decentralization between PoW and DPoS blockchains.
ยง RELATED WORK
Over the past few years, many studies have measured and evaluated the decentralization of the two major PoW blockchains, Bitcoin and Ethereum, from multiple perspectives.
Inย <cit.>, Gervais et al. showed that some fundamental system modules in Bitcoin, such as decision-making and mining, were not decentralized.
They showed that a limited number of entities dominated these system modules in Bitcoin.
Later, inย <cit.>, Miller et al. developed AddressProbe, a tool to discover links in the underlying peer-to-peer network of Bitcoin and construct the live topology.
Based on the measured topology, their results found that, in contrast to the idealistic vision of distributing mining power across nodes, several prevalent and hidden mining pools were controlling the majority of mining power.
In 2018, Gencer et al. provided a measurement analysis of a variety of decentralization metrics for Bitcoin and Ethereumย <cit.>. Their research evaluated the network resources of nodes and their connectivity, as well as the attack resilience of the two blockchains.
Their results suggested that there was no significant difference between decentralization in Bitcoin and in Ethereum.
In 2021, with different metrics and granularities, Lin et al. revealed that the degree of decentralization in Bitcoin is higher, while the degree
of decentralization in Ethereum is more stableย <cit.>.
The state-of-the-art studyย <cit.> measured the decentralization in Ethereum at the level of mining pool participants. The results indicated that decentralization measured at a deeper level could be quite different from that measured across mining pools.
With the rapid development of DPoS blockchains, several recent studies have evaluated the decentralization in DPoS blockchains.
Inย <cit.>, Kwon et al. quantified the decentralization in dozens of PoW, PoS and DPoS blockchains. Their work helps build a broad understanding of the representative-level decentralization of blockchains adopting different consensus protocols.
However, their results may have some limitations because they only collected blocks generated in Oct. 2018.
Later, inย <cit.>, Li et al. leveraged the Shannon entropy to quantify the decentralization in Steem and Bitcoin. Their results revealed that, compared with Steem, Bitcoin tends to be more decentralized among top miners but less decentralized in general.
However, without reconstructing the historical election snapshots, their study only measured a one-month snapshot of decentralization in Steem.
To sum up, as illustrated in TABLEย <ref>, to the best of our knowledge, our work presents the first cross-consensus measurement study of individual-level decentralization in blockchains.
We organize the rest of this paper as follows:
We first introduce the background in Sectionย <ref>.
In Sectionย <ref>, we present a two-level comparison framework and a new metric.
In Sectionย <ref>, we propose a DPoS Individual-Level Decentralization Quantification approach named DPoS-ILDQ.
After describing data collection in Sectionย <ref>, we measure decentralization at both the representative level and the individual level for Ethereum and Steem in Sectionย <ref>.
Finally, we conclude in Sectionย <ref>.
ยง BACKGROUND
In this section, we introduce the background about the Ethereum blockchainย <cit.> and the Steem blockchainย <cit.>, including their ecosystem in general, their implementation of PoW and DPoS consensus protocols, the main attacks they face and the correlation between these attacks and the degree of decentralization.
ยง.ยง Ethereum
Ethereum is the representative of the second-generation blockchain, and its market capitalization is perennially in the top two.
The underlying Ethereum Virtual Machine (EVM) enables the development of powerful smart contracts and has supported thousands of decentralized applications, ranging from exchanges to games.
Besides, Ethereum issues a cryptocurrency named Ether (ETH) to support the establishment of its decentralized financial ecosystem.
Consensus protocol in Ethereum 1.0.
In Ethereum, a user needs to create an account to interact with the blockchain.
Users may perform three categories of transactions, namely transferring funds, creating smart contracts and invoking functions within deployed smart contracts.
A user builds a transaction locally and broadcasts it to Ethereum's underlying peer-to-peer (P2P) network for the transaction to be executed.
When miners within the P2P network receive a transaction, they cache it in a pool.
In the meantime, miners continue to work on a mathematical puzzle.
After solving the puzzle, miners package the cached transactions into a block.
The block is then broadcast, validated by other miners and finally attached to the blockchain.
51% attack and decentralization.
In PoW blockchains such as Bitcoin and Ethereum, mining power refers to a miner's ability to solve the mathematical puzzle.
If a single miner's mining power is larger than the total of all other miners' mining power, that miner will be able to control the production of blocks and falsify confirmed transactions, such as double spending cryptocurrency.
Consequently, the decentralization of mining power in Ethereum has recently garnered a great deal of interestย <cit.>.
ยง.ยง Steem
Steem serves over one million users and has recorded around one billion operations.
Similar to Ethereum, Steem is a blockchain built to serve a wide variety of applications.
For instance, the most popular application, named Steemitย <cit.>, is comparable to a decentralized Reddit, where users can upvote and downvote blog posts published by other users.
There have been around thirty different sorts of operations in Steem.
In Figureย <ref>, we present four representative categories of operations, namely social post operations, committee election operations, voting power operations and coin transfer operations.
Like most blockchains, Steem issues a cryptocurrency known as STEEM.
Consensus protocol in Steem.
On the basis of DPoS, Steem encourages coin holders to cast up to 30 votes for committee candidates.
The top 20 candidates then form a committee charged with maintaining high-quality servers that generate blocks periodically.
Due to the small size of the committee, the transaction throughput could be greatly enhanced.
In committee elections, coin holders' votes are weighted according to the voting power derived from their coins.
Specifically, coin holders must freeze coins (i.e., STEEM) to obtain voting power so that they can support the candidates of their choice at the expense of freezing the liquidity of their coins.
Coin holders may unfreeze their frozen coins at any time, but they will immediately lose the corresponding amount of voting power, and the frozen coins will be automatically divided into thirteen parts and withdrawn in thirteen weeks.
In addition to personally voting for candidates, a coin holder may appoint another coin holder as her proxy to vote on her behalf.
The use of proxies complicates the reconstruction of historical election snapshots, as the proxy appointed by a coin holder may have also appointed a proxy, leading to a proxy chain.
Takeover attack and Decentralization.
In DPoS blockchains such as Steemย <cit.> and EOSIOย <cit.>, committees undertake on-chain governance in addition to producing blocks.
Specifically, committee members may decide on proposals such as updating blockchain parameters and even banning certain accounts.
A proposal must have at least 17 (15) approvals from Steem (EOSIO) committee members in order to be implemented.
After receiving sufficient approvals, a proposal takes immediate effect at the code level.
As can be seen, if a single coin holder possesses a significant amount of voting power, the coin holder may be able to use the 30 votes to directly determine the 20 committee seats and gain entire control of the committee with 20 controlled accounts.
This powerful coin holder could then implement arbitrary proposals, such as banning accounts that resist the takeover or even reversing confirmed transactions.
Steem was the victim of the first de facto blockchain takeover.
On March 2, 2020, all Steem committee members were suddenly replaced with those controlled by a single coin holder, who then immediately blacklisted some of the original committee members, causing the Steem community to split.
Consequently, the decentralization of committee election in Steem has garnered considerable attentionย <cit.>.
Next, based on the background introduced in this section, we present a two-level comparison framework and a new metric in Sectionย <ref>.
ยง COMPARISON FRAMEWORK AND METRIC
In this section, we present a two-level framework and also a new metric for comparing decentralization between DPoS and PoW blockchains.
ยง.ยง Two-level comparison framework
In the absence of a multi-level framework capable of discriminating measurement granularity, we propose a two-level comparison framework that can differentiate and match measurements of different granularities.
The framework is shown in Fig.ย <ref>.
The design of the framework is inspired by the observation that both DPoS and PoW blockchains include two levels of resource concentration as a result of the pooling of resources from individuals to representatives.
Representative level.
In PoW blockchains, almost all blocks are now produced by mining pools.
In DPoS blockchains, only committee members are permitted to generate blocks.
Together, we can see that in both DPoS and PoW blockchains, despite the difference in consensus protocol, a small number of entities hold the rights to produce blocks.
In this paper, mining pools and committee members are considered to be representatives of pool participants and coin holders, respectively.
Concretely, mining pools in PoW acquire mining power from individual pool participants to compete with each other.
Similarly, candidates in DPoS request voting power from individual coin holders to enter the top 20.
In other words, pool participants and coin holders utilize their individual power to select representatives who will generate blocks on their behalf.
The difference is that the selection of representatives is voluntary in PoW but mandatory in DPoS.
Therefore, the framework benchmarks the committee-level decentralization in DPoS against the pool-level decentralization in PoW.
Specifically, committee-level decentralization in DPoS could be assessed by analyzing the distribution of the number of blocks produced by each committee member.
Similarly, pool-level decentralization in PoW refers to the distribution of the number of blocks produced by different mining pools.
Individual level.
Most prior discussions and research have focused on the decentralization of blockchains at the representative level.
In this paper, we argue that the root causes of the 51% attack and the takeover attack lie with the individuals, not the representatives.
First, representatives have no control over the power granted to them by individuals.
For instance, if several representatives planned to collude and launch an attack, individuals would be able to withdraw their power from these representatives to stop the attack.
Second, an individual may control multiple representatives.
Recall that during the Steem takeover, a single coin holder controlled the whole committee.
This coin holder's dominance may be observed at the individual level but not at the representative level.
Therefore, this paper focuses primarily on decentralization at the individual level.
The proposed framework benchmarks the coin-holder-level decentralization in DPoS against the pool-participant-level decentralization in PoW.
ยง.ยง Metric
Before presenting the proposed metric, we will first define the term โindividual impactโ.
Individual impact.
In this paper, we estimate individual impacts in block-production competitions based on a data-driven approach.
Specifically, in Ethereum, we estimate the per-day impact of a pool participant based on the total rewards received by the participant from all mining pools.
In Steem, we first extract blocks produced by committee members and then allocate these blocks to coin holders based on daily election snapshots.
That is, the impact of a particular coin holder refers to the sum of blocks allocated from all committee members who have received votes from either the coin holder or the coin holder's proxy.
In the rest of this paper, we will denote the individuals as an n-dimensional vector 𝐬, and the impacts of individuals as an n × m matrix 𝐀, where the entry a_ij denotes the impact of the i^th individual on the j^th day.
Top-k Normalized Entropy (NE) coefficient.
Whales are individuals with enormous resources and are usually the most influential players in block-creation competitions.
Therefore, an important perspective of decentralization is the evolution of whales and the distribution of impacts among them.
We propose the top-k NE coefficient to quantify entropy-based decentralization among all time top-k whales
on a daily basis by computing normalized entropy for j^th day as:
e_j = - ( ∑_i=1^o_j p_ij log_2 p_ij ) / log_2 o_j
where o_j denotes the number of active top-k whales with non-zero impacts on the j^th day,
and p_ij denotes the share of the i^th whale's impact among the active top-k whales on the j^th day.
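For concreteness, a minimal Python sketch of the top-k NE coefficient for a single day is given below; the dictionary-based input layout and the function name are illustrative assumptions rather than part of our released code.

```python
import numpy as np

def top_k_ne_coefficient(impacts_day, top_k_whales):
    """Top-k Normalized Entropy (NE) coefficient e_j for one day.

    impacts_day   -- dict mapping individual -> impact on this day
                     (allocated blocks in Steem, pool rewards in Ethereum)
    top_k_whales  -- iterable of the all-time top-k whales
    """
    impacts = np.array([impacts_day.get(w, 0.0) for w in top_k_whales])
    impacts = impacts[impacts > 0]        # keep the o_j active whales
    o_j = len(impacts)
    if o_j <= 1:
        return 0.0                        # entropy is degenerate with fewer than 2 active whales
    p = impacts / impacts.sum()           # p_ij
    return float(-(p * np.log2(p)).sum() / np.log2(o_j))
```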
Minimum Threshold (MT) coefficient.
Previous research has extensively employed the Nakamoto coefficientย <cit.> to quantify the decentralization of mining power in PoW blockchains.
The Nakamoto coefficient is defined as the minimum number of mining pools whose combined mining power exceeds fifty percent of the total mining power of the entire blockchain.
In this paper, inspired by the Nakamoto coefficient, we propose the Minimum Threshold (MT) coefficient to quantify daily threshold-based decentralization by computing the minimum number of individuals whose combined power exceeds a particular threshold as:
f_j = min(𝐬, > t · sum(𝐀_*,j))
where sum(𝐀_*,j) represents the sum of individual impacts on the j^th day, t represents the threshold,
and min(𝐬, condition) represents the minimum number of individuals whose combined power meets the condition.
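A corresponding Python sketch of the MT coefficient, under the same assumed input layout, is shown below; sorting impacts in descending order yields the minimum number of individuals needed to pass the threshold.

```python
def mt_coefficient(impacts_day, t=0.5):
    """Minimum Threshold (MT) coefficient f_j for one day: the minimum number
    of individuals whose combined impact exceeds a fraction t of the total."""
    impacts = sorted(impacts_day.values(), reverse=True)  # take the largest impacts first
    total = sum(impacts)
    if total == 0:
        return 0
    covered, count = 0.0, 0
    for v in impacts:
        covered += v
        count += 1
        if covered > t * total:
            break
    return count
```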
The suggested metrics are compatible with both DPoS and PoW blockchains, thus facilitating cross-consensus comparisons.
ยง QUANTIFICATION OF INDIVIDUAL-LEVEL DECENTRALIZATION IN DPOS
In this section, to quantify the individual-level decentralization of DPoS blockchains, we propose an algorithm named DPoS Individual-Level Decentralization Quantification (DPoS-ILDQ).
Intuitively, the algorithm extracts daily election snapshots from raw data, computes the individual impacts from historical election snapshots and outputs the daily minimum threshold (MT) coefficient.
We show a pseudocode for DPoS-ILDQ as Algorithmย <ref>.
There are three main phases in DPoS-ILDQ:
* Preparation:
The algorithm first parses raw data to obtain the committee members 𝐜 who have produced at least one block, the coin holders 𝐡 who have ever voted for committee members or set proxies, the number of blocks produced per producer 𝐛 and the daily election snapshots 𝐞𝐬 (line 1).
It then initializes the individual impacts of the coin holders in 𝐡 as 𝐀 (line 2).
* Block allocation:
For each day d, given that coin holders may have set proxies based on the proxy information recorded in 𝐞𝐬, the algorithm first extracts the direct voter for each coin holder in 𝐡 (i.e., the coin holder herself or the end proxy of the proxy chain involving the coin holder), which yields 𝐯 (line 4).
Then, for each committee member c_i on day d, the algorithm implements three steps:
(1) based on the direct voters 𝐯 and the election snapshots 𝐞𝐬, the algorithm extracts all supportive coin holders 𝐡̃ who have ever cast votes for c_i directly or through a proxy (line 6);
(2) based on 𝐡̃ and 𝐞𝐬, the algorithm computes the percentage of contribution (i.e., voting power) of each supportive coin holder in 𝐡̃ among all supportive coin holders in 𝐡̃, which gives 𝐩̃ (line 7);
(3) based on 𝐩̃, 𝐡, 𝐡̃ and 𝐛, the blocks produced by committee member c_i are distributed among the coin holders who have contributed voting power to c_i (lines 8-11).
* Decentralization quantification:
Finally, based on Equationย <ref>, the algorithm computes the minimum threshold (MT) coefficient (line 14).
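The core of the block-allocation phase can be sketched in Python as follows. The sketch covers a single day, and the dictionary-based data layout and helper names are illustrative assumptions; Algorithmย <ref> additionally loops over days and handles snapshot reconstruction.

```python
from collections import defaultdict

def resolve_direct_voter(holder, proxy_of):
    """Follow a holder's proxy chain until reaching an account with no proxy,
    i.e., the account that actually casts the votes."""
    while holder in proxy_of:
        holder = proxy_of[holder]
    return holder

def allocate_blocks(power, proxy_of, votes_of, blocks_produced):
    """Allocate blocks produced by committee members back to coin holders,
    proportionally to the voting power each holder contributes.

    power           -- {holder: voting power on this day}
    proxy_of        -- {holder: account it appointed as proxy}
    votes_of        -- {direct voter: set of committee members it votes for}
    blocks_produced -- {committee member: blocks produced on this day}
    """
    supporters = defaultdict(list)          # committee member -> supportive holders
    for holder in power:
        direct = resolve_direct_voter(holder, proxy_of)
        for member in votes_of.get(direct, ()):
            supporters[member].append(holder)

    allocated = defaultdict(float)          # holder -> allocated blocks
    for member, blocks in blocks_produced.items():
        total_power = sum(power[h] for h in supporters[member])
        if total_power == 0:
            continue                        # no voting power to allocate to
        for h in supporters[member]:
            allocated[h] += blocks * power[h] / total_power
    return dict(allocated)
```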
Example.
Figureย <ref> illustrates an example of DPoS-ILDQ usage.
The example involves twelve accounts, A to L, at a particular election snapshot.
The election snapshot is as follows:
{K,L} are committee members, {H,I,J} are casting votes directly to {K,L}, and {A to G} have set proxies.
In addition, {A,B,C,I} have one unit of voting power, {G,J} have two units of voting power, {D,E} have three units of voting power, and the rest of them have no voting power.
Next, we show how to allocate blocks from committee members to coin holders using DPoS-ILDQ.
First, we could extract direct voters for the coin holders.
For example, A's direct voter is H since A designates D as a proxy, D designates H as a proxy, and H casts votes directly for K.
In another case, {B,C,E,F} share the same direct voter I, who casts votes for both K and L, therefore they contribute to both K and L.
Then, we could identify supportive coin holders for each committee member.
In this example, we could see that {A,B,C,D,E,F,H,I} are supportive coin holders for K and {B,C,E,F,G,I,J} are supportive coin holders for L.
Finally, we could allocate blocks produced by a committee member to the supportive coin holders based on the distribution of their voting power.
For instance, K produced 10 blocks.
Its supportive coin holders have 10 units of voting power in total, so each unit of voting power could be assigned one block.
Therefore, from committee member K, {A,B,C,I} could be allocated one block, {D,E} could be allocated three blocks.
Similarly, from committee member L, {B,C,I} could be allocated one block, {G,J} could be allocated two blocks, E could be allocated three blocks.
To sum up, coin holders A to L could be allocated {1,2,2,3,6,0,2,0,2,2,0,0} blocks, respectively.
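Encoding this snapshot with the allocation sketch above reproduces the numbers in the example; the exact proxy settings of B, C, E, F and G, and the assumption that L also produced 10 blocks, are inferred from the figure rather than stated explicitly.

```python
power = {"A": 1, "B": 1, "C": 1, "D": 3, "E": 3, "F": 0,
         "G": 2, "H": 0, "I": 1, "J": 2, "K": 0, "L": 0}
proxy_of = {"A": "D", "D": "H",                 # A -> D -> H, so A's direct voter is H
            "B": "I", "C": "I", "E": "I", "F": "I",
            "G": "J"}
votes_of = {"H": {"K"}, "I": {"K", "L"}, "J": {"L"}}
blocks_produced = {"K": 10, "L": 10}            # L's count assumed equal to K's

print(allocate_blocks(power, proxy_of, votes_of, blocks_produced))
# Allocations: A:1, B:2, C:2, D:3, E:6, F:0, G:2, H:0, I:2, J:2 -- matching the example
```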
After presenting the framework, metric and methodology, in the next two sections, we apply the proposed methods to Steem and Ethereum by introducing data collection in Sectionย <ref> and evaluate the measurement findings in Sectionย <ref>.
ยง DATA COLLECTION
In this section, we outline our process for collecting data.
The Steem blockchain provides developers and researchers with an Interactive Application Programming Interface (API) at <cit.> to collect and parse blockchain data.
To analyze the decentralization of Steem prior to the takeover on 2020-03-02, we collected over 41 million blocks generated between Steem's birthday (2016-03-24) and the day of the takeover (2020-03-02).
For Ethereum, since we are only interested in its degree of decentralization before the Steem takeover on 2020-03-02, we collected block data from 2016-03 to 2020-02 using the Ethereum official API atย <cit.>.
To explore decentralization at the individual level, we extract the Ethereum participant reward data (i.e., rewards paid from mining pools to pool participants) from the dataset published by the recent workย <cit.>.
ยง MEASURING DECENTRALIZATION
In this section, we measure and compare both the representative-level decentralization and the individual-level decentralization between Steem and Ethereum.
Methodology
In Steem, the stake, namely the amount of vested shares (or VESTS), held by a shareholder at the moment of a certain block height h could be computed as:
v_h = ∑_i=1^h (λ · v^t2v_i + v^r_i - v^fvw_i)
namely ∑_i=1^h λ · v^t2v_i, the total amount of VESTS purchased using the token STEEM (recorded in operations transfer_to_vesting in blocks), plus ∑_i=1^h v^r_i, the total amount of VESTS rewarded by the platform (recorded in four types of operations, e.g., curation_reward and author_reward), minus ∑_i=1^h v^fvw_i, the total amount of VESTS withdrawn from the platform (recorded in operations fill_vesting_withdraw).
It is not difficult to extract historical v^t2v_i, v^r_i and v^fvw_i from blockchain raw data.
However, an operation transfer_to_vesting only discloses the amount of invested STEEM, but reveals no information about the amount of gained VESTS.
In other words, the VESTS/STEEM exchange rate λ is missing in raw data collected fromย <cit.>.
After careful investigation, we found two helpful Steemit blogs [https://steemit.com/steemdev/@jesta/historical-rates-for-vests-and-steem.][https://steemit.com/steem/@crokkon/historic-rates-for-steem-per-vests-2018-2019ytd.]
discussing the missing parameter λ.
Both the blogs indicated that parameter λ could be computed through operations fill_vesting_withdraw, which are used by coin holders to withdraw VESTS.
An operation fill_vesting_withdraw provides both the amount of withdrawn VESTS and the corresponding amount of deposited STEEM, so the ratio of the two values gives λ.
However, the first fill_vesting_withdraw operation was performed at block height 479,660, while the first transfer_to_vesting operation was performed at block height 28,361.
Unfortunately, a value for λ estimated at a certain block height could only reflect the exchange rate at that moment.
As a result, we do not know the values for λ before block 479,600, namely the period between 2016-03-24 and 2016-04-10, and we could not accurately estimate historical shares for any shareholder who has performed operation transfer_to_vesting before day 2016-04-10.
To overcome this challenge, we continue to investigate other approaches.
Luckily, we found that the Hive blockchain has recently introduced a new type of operation named transfer_to_vesting_completedย [https://gitlab.syncad.com/hive/hive/-/issues/111.]
in 2021, which accompanies transfer_to_vesting to provide supplementary information about the amount of gained VESTS, enabling the estimation for λ before day 2016-04-10.
Finally, the results of the very fine-grained estimation for λ are shown in Figureย <ref>.
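A simplified Python sketch of this reconstruction is shown below; the operation field names (type, day, steem, vests) and the full set of reward operation types are assumptions about how the parsed raw data is organized.

```python
from collections import defaultdict

# Assumed set of reward operation types; the paper names curation_reward and
# author_reward explicitly, the remaining two are illustrative guesses.
REWARD_OPS = {"curation_reward", "author_reward",
              "producer_reward", "comment_benefactor_reward"}

def estimate_lambda_by_day(withdraw_ops):
    """Estimate the daily VESTS/STEEM rate lambda from fill_vesting_withdraw
    operations, each reporting withdrawn VESTS and the STEEM deposited."""
    samples = defaultdict(list)
    for op in withdraw_ops:
        if op["steem"] > 0:
            samples[op["day"]].append(op["vests"] / op["steem"])
    return {day: sum(v) / len(v) for day, v in samples.items()}

def reconstruct_vests(holder_ops, lam_by_day):
    """Accumulate a holder's VESTS following
    v_h = sum_i (lambda_i * v^t2v_i + v^r_i - v^fvw_i)."""
    vests = 0.0
    for op in holder_ops:                       # the holder's operations in block order
        if op["type"] == "transfer_to_vesting":
            vests += lam_by_day[op["day"]] * op["steem"]
        elif op["type"] in REWARD_OPS:
            vests += op["vests"]
        elif op["type"] == "fill_vesting_withdraw":
            vests -= op["vests"]
    return vests
```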
Observation
Figureย <ref> shows significant changes in the value of the rate λ.
During the first week, namely 2016-03-24 to 2016-03-31, we surprisingly observe a huge value of λ, namely 1,000,000.
In contrast, from 2016-12-06 to now, each STEEM could only purchase around 2,000 VESTS.
From 2016-04-01 to 2016-12-05, the value of λ dropped by 99.8%.
To illustrate how significant the impact of λ would be in reconstructing historical shares, we present a real example in Figureย <ref> between two whales, say A and B.
In this example, whale A purchased VESTS mainly during the first week (before 2016-03-31), while whale B purchased VESTS many times, but only after 2016-03-31.
We can see that, even though whale B gradually invested over 10 times
the amount of STEEM invested by A, in month 48, A held about 39 times the amount of VESTS possessed by B.
To investigate the impact of when coin holders first purchase VESTS, Figureย <ref> shows a scatter plot, where each point refers to a shareholder who has cast at least one vote for committee members.
A point presents three variables corresponding to a shareholder, including the total amount of invested STEEM, the total amount of purchased VESTS and the first date of investment.
The results show that most points lie on a hyperplane, which may suggest that most coin holders purchased their VESTS not in the early days and thus have a ratio of total VESTS to total STEEM close to 2,000.
However, we do observe a group of outliers located far from the hyperplane.
Nearly all these outliers performed their first transfer_to_vesting operation during the first week and have very high VESTS/STEEM ratios.
Insights
The surprisingly complex change of the VESTS/STEEM exchange rate λ may suggest that coin holders could benefit more from earlier investments.
Interestingly, the model is similar to the business investment model, where angel investors provide capital for business start-ups.
However, for decentralization, the results may suggest that it becomes extremely difficult for second comers to compete with first comers in Steem.
In contrast, in Ethereum, second comers could always compete on the same level with first comers, as the price for purchasing mining power may not show much difference.
ยง.ยง Representative-level Decentralization
Methodology.
There are four key steps.
First, we extract block producers (representatives) from raw data.
Then, we calculate the number of blocks produced per producer per month, order the producers by the total number of blocks produced by each producer during the 48 months from 2016-03 to 2020-02, and identify the top-l producers of all time.
After that, for the top-l producers, we create an l × 48 matrix 𝐁, where the entry b_ij represents the number of blocks produced by the i^th top-l producer in the j^th month.
We compute the normalized block creation rates as an l × 48 matrix 𝐂, where
c_ij = b_ij / max(𝐁)
namely the ratio between the number of blocks created by the i^th producer during the j^th month and the maximum number of blocks produced by any producer in any month.
Finally, we visualize the results as a heatmap, where the cell (i,j) corresponds to the entry c_ij.
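A compact Python sketch of this procedure is given below; the per-producer monthly block counts are assumed to be pre-computed from the raw block data.

```python
import numpy as np

def normalized_block_rates(blocks_per_month, top_l):
    """Build the l x 48 matrix C of normalized block creation rates.

    blocks_per_month -- {producer: list of 48 monthly block counts}
    """
    top = sorted(blocks_per_month,
                 key=lambda p: sum(blocks_per_month[p]), reverse=True)[:top_l]
    B = np.array([blocks_per_month[p] for p in top], dtype=float)  # b_ij
    return top, B / B.max()                                        # c_ij = b_ij / max(B)
```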
Results and insights.
In Figureย <ref>, we observe that Steem has many more entries c_ij close to 1 than Ethereum.
This is because the top Steem producers create blocks in a rotational manner, whereas the top Ethereum producers follow a winner-takes-all model.
Consequently, most Steem producers have their monthly produced blocks b_ij close to the theoretical maximum[In Steem, during a month that has thirty days, there should be 30*24*60*60/3= 864,000 blocks generated because blocks are generated every three seconds.
For every 21 blocks, the 21 elected committee members are shuffled to determine their order for generating the next 21 blocks, so a committee member can at most produce 864,000/21 ≈ 41,142 blocks in a thirty-day month.], while the Ethereum mining competition is dominated by a few leading mining pools.
Next, by longitudinally observing the change of c_ij, we could find interesting similarities between Steem and Ethereum, such as the fact that first-tier producers do not always change, whereas second-tier producers have experienced at least two generations.
Overall, our results suggest that the Steem blockchain tends to be more decentralized than Ethereum at the representative level.
In other words, by looking only at the producers who directly create blocks, the blocks in Steem are more evenly distributed across producers.
ยง.ยง Individual-level Decentralization
After selecting the top-100 coin holders, our next goal is to quantify the decentralization across them.
Here, we recognize one challenge.
That is, a proxy set by a shareholder may have also set a proxy, forming a chain of proxies.
For example, Figureย <ref> illustrates the proxy chain associated with the top-100 coin holders.
Specifically, during the 48 months, some members of the top-100 coin holders have been directly set as proxies by 419 other coin holders.
Meanwhile, some members have directly set 36 other coin holders as their proxies.
Moreover, we could split the chain into two parts: a 5-layer principal sub-chain, where the 8 coin holders at the tail have indirectly set some members of the top-100 coin holders as proxies via 5 successive direct proxy settings, and a 3-layer proxy sub-chain, where the single coin holder at the head has indirectly been set as a proxy by some members of the top-100 coin holders via 3 successive direct proxy settings.
Due to the existence of such a proxy chain, we need to scan each election snapshot, search for direct and indirect proxies for the top-100 coin holders, assign votes cast by their proxies also to them and reproduce a new set of election snapshots S' (line 9).
After that, for the k^th top-100 shareholder, we compute its historical accumulated shares a_k^j on the j^th day by multiplying the historical shares v_k^j by the number of votes (directly or indirectly) cast by the shareholder at the j^th snapshot (line 10).
Finally, we could quantify decentralization on a daily basis by computing normalized entropy for j^th day as:
h^j = - ( ∑_i=1^n p_i log_2 p_i ) / log_2 n
where n denotes the number of active top-100 coin holders that were participating in the election on the j^th day, and p_i = a_i^j / ∑_i=1^n a_i^j.
Next, we quantify individual-level decentralization in Ethereum using a more straightforward methodology.
We first select the top-100 pool participants by sorting them based on the overall rewards they received from mining pools.
Then, decentralization can also be quantified using Equationย <ref>, where p_i = r_i^j / ∑_i=1^n r_i^j and r_i^j denotes the amount of rewards received by the i^th pool participant on the j^th day.
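Concretely, the daily NE series for pool participants can be obtained by reusing the top_k_ne_coefficient sketch above; the reward-matrix layout below is an assumption about how the extracted reward data is stored.

```python
from collections import defaultdict

def daily_ne_series(rewards_by_day, top_k=100):
    """Compute the daily top-k NE coefficient for pool participants.

    rewards_by_day -- {day: {participant: reward received on that day}}
    """
    totals = defaultdict(float)
    for day_rewards in rewards_by_day.values():
        for who, r in day_rewards.items():
            totals[who] += r
    whales = sorted(totals, key=totals.get, reverse=True)[:top_k]  # all-time top-k
    return {day: top_k_ne_coefficient(day_rewards, whales)
            for day, day_rewards in sorted(rewards_by_day.items())}
```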
Methodology.
In Ethereum, we estimate the individual impact of a pool participant based on the amount of reward the participant received from all mining pools.
In Steem, we estimate the individual impact of a coin holder based on the historical election snapshots reconstructed on a daily basis.
Then, based on the estimated individual impacts, we could allocate blocks produced by representatives to individuals and modify the methodology presented in Sectionย <ref> by replacing blocks produced by committee members and mining pools with blocks allocated to coin holders and pool participants.
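Chaining the earlier sketches together, the individual-level heatmap input can be assembled as follows; the day-to-month mapping and the reuse of normalized_block_rates from the representative-level sketch are assumptions made for illustration.

```python
def individual_level_matrix(daily_allocations, month_of, top_l=100, n_months=48):
    """Aggregate per-day block allocations of individuals into monthly totals,
    then normalize them exactly as at the representative level.

    daily_allocations -- {day: {individual: blocks allocated on that day}}
    month_of          -- {day: month index in [0, n_months)}
    """
    monthly = {}
    for day, alloc in daily_allocations.items():
        m = month_of[day]
        for who, blocks in alloc.items():
            monthly.setdefault(who, [0.0] * n_months)[m] += blocks
    return normalized_block_rates(monthly, top_l)
```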
Results and insights
Figureย <ref> shows the heatmap measured using the above methodology.
We could see that the normalized block creation rates of top individuals are relatively lower than those of the top producers displayed in Figureย <ref>, in both Steem and Ethereum.
Compared with Ethereum, Steem tends to have more active top individuals, especially after month 20.
Figureย <ref> shows the results of measurements for top-100 normalized entropy (NE) coefficient.
We can see that, during the early months 2 to 15, Ethereum always has a higher NE coefficient than Steem.
However, after month 15, the two blockchains are getting comparable.
Specifically, in Ethereum, its NE coefficient keeps dropping from month 2 to month 25. After a sudden surge, its NE coefficient stabilizes at around 0.85.
In Steem, its NE coefficient tends to be more stable and is also close to 0.85, on average.
Figureย <ref> presents the results of measurements for the 50% minimum threshold (MT) coefficient.
Compared to Ethereum, Steem has substantially lower MT coefficients over all months.
Specifically, the minimum, maximum, and average MT coefficients for Ethereum are 363, 1916, and 1140, respectively.
In comparison, the minimum, maximum and average MT coefficients for Steem are 1, 49 and 25, respectively.
Despite the vast disparity, we see that Steem's MT coefficient is gradually increasing while Ethereum's MT coefficient is slowly decreasing.
After the takeover (Steem vs. Hive)
Figureย <ref> shows the heatmap measured among top coin holders in Steem and Hive respectively.
Similar to results displayed in Figureย <ref>, we could also observe a clearer demarcation line in Steem.
Specifically, we could see that most whales left Steem within one or two months after the takeover month (month 0). In contrast, only a small percentage of whales have stopped electing committee members in Hive.
It is worth noting that both the 1^st (@steemit) and 15^th coin holders in Steem are accounts restricted by the Fork 0.22.2.
Both these two accounts start to receive allocated blocks in the takeover month 0 and stop receiving allocated blocks in month 5.
It is also surprising that @steemit won first place despite taking part in the committee member election for only around six months.
Figureย <ref> measures the top-100 normalized entropy (NE) coefficient for Steem and Hive between 2019-03 and 2021-03.
We could observe that the NE coefficient dropped sharply from 0.8+ to 0.4- around the takeover day 2020-03-02.
Then, the NE coefficient recovered quickly to reach 0.6.
Later, on the Hive fork day 2020-03-20, we could see an upsurge in Hive's NE coefficient while an abrupt reduction in Steem's NE coefficient.
However, after around five months, the NE coefficients of Steem and Hive become comparable again.
Figureย <ref> measures the 50% minimum threshold (MT) coefficient for Steem and Hive between 2019-03 and 2021-03.
Similar to the trends shown in Figureย <ref>, the MT coefficient also dropped sharply around the takeover day, from 49 to 5 on 2020-03-02 and to 2 on 2020-03-03.
After that, the MT coefficient recovered to around 13.
However, on the Hive fork day 2020-03-20, Steem's MT coefficient dropped back to 5 from 14, while Hive's MT coefficient increased to 29 from 14.
After the Hive fork, compared with the changes of NE coefficients shown in Figureย <ref>, the changes of MT coefficients are quite different.
Specifically, we could see that the gap between Hive's MT coefficient and Steem's MT coefficient remains enormous, even one year after the takeover.
Overall, the results suggest that the Steem blockchain tends to be less decentralized than Ethereum at the individual level.
That is, by diving deeper into the individual level, resources, such as accumulated voting power in Steem or mining power in Ethereum, tend to be dispersed more evenly in Ethereum.
The longitudinal observation reveals that Steem shows a significant disadvantage with regard to MT coefficients, which may suggest that Steem is more vulnerable to malicious attacks such as takeovers.
Attacks.
Researchers have proposed several attacks for PoW blockchains.
Eyal et al. proposed a new type of attack termed selfish mining in 2014ย <cit.>.
It is a mining strategy applicable to Bitcoin, Ethereum and other blockchains employing the PoW consensus protocol.
In selfish mining, miners do not release blocks immediately after these blocks are mined but instead continue mining and gradually release mined blocks based on the progress of their competitors.
According to Eyal's analysis, this strategy would actually slow down the verification speed of blocks in a blockchain network and also reduce the profit of honest miners.
One year later, Eyal et al. proposed a theory named miner's dilemmaย <cit.>.
That is, mining pools face a prisoner's dilemma. Any mining pool can enhance its profit by attacking other pools, but if they all choose to attack each other, they will receive less profit than if none of them attack.
After that, Sapirshtein et al. further improved selfish mining using a Markov decision process. They found out the optimal mining strategy and demonstrated that miners could further lower the necessary mining power for an attack using their strategyย <cit.>.
Recently, according to the findings of Feng et al. inย <cit.>, selfish mining in Ethereum is also profitable when certain conditions are met. They also pointed out that selfish mining attacks in Ethereum are easier to launch than those in Bitcoin since selfish mining attacks in Ethereum require fewer resources.
These studies demonstrate that the amount of resources required for an attack is reducing as more sophisticated strategies are adopted. In other words, the degree of decentralization necessary to resist attacks in blockchains is growing.
ยง CONCLUSION
In this paper, we present the first double-level longitudinal measurement study comparing the decentralization of PoW and DPoS blockchains.
To facilitate cross-consensus comparison, we present a two-level comparison framework and a new metric named MT coefficient.
We apply the proposed methods to Ethereum and Steem.
Our results suggest that, compared with Ethereum, Steem tends to be more decentralized at the representative level but less decentralized at the individual level.
Our results also suggest that Steem may be more vulnerable to malicious attacks such as takeovers.
We believe the methods proposed in this work, including the double perspective on decentralization and the methodology and metrics, could facilitate future works on drawing a more generic comparison over different consensus protocols.
a large-scale longitudinal measure-
ment study comparing the decentralization of PoW and DPoS
blockchains.
present the first large-scale longitudinal study of the decentralization in DPoS-powered Steem blockchain.
We analyze the decentralization at both the pool level and the deeper individual level, and further characterize shareholders on their loyalty and independence.
The results are benchmarked against the mining power decentralization in Ethereum.
Our key conclusion is that, compared with Ethereum, Steem tends to be more decentralized at the representative level but less decentralized at the individual level.
Our results dive deeper into the level of decentralization DPoS blockchain achieves on the web, and provide novel insights into the comparison of decentralization across DPoS and PoW blockchains.
Our study invites similar comparative studies at either pool level or individual level of other blockchains serving web applications or other major blockchains that adopt PoW, DPoS or even other consensus protocols.
Specifically, to benchmark our results against the state-of-the-art, we propose a two-level measurement model for comparing decentralization across DPoS and PoW.
To facilitate measurements from different perspectives, we propose a measurement algorithm DPoS-ILDQ and two new metrics, the top-k normalized entropy (NE) coefficient and the minimum threshold (MT) coefficient.
Besides, to dive deeper into the individual level in Steem, we fix missing system parameters and reconstruct the historical stake snapshots on a daily basis for a period of five years.
To develop a fine-grained understanding of the major players and detailed steps of the takeover process, we perform clustering analysis across the coin holders.
Specifically, we first extract distinct voting strategies performed by coin holders, namely the various combinations of committee members that any shareholder has chosen in the history of committee member elections.
Then, for each shareholder, we track his or her history of switching voting strategies.
The results reveal interesting insights about the existence of clusters of coin holders who simultaneously change their voting strategies in the same way.
On the takeover day, we identify a cluster of eight players who jointly confirmed TRON founder's victory.
Our study demonstrates the decisive role of three major exchanges (i.e., binance, huobi and poloniex) among the eight players in the battle.
The exchanges gave up their neutral stance maintained for nearly four years and invested $7,469,573 worth of tokens within 22 minutes to release accounts restricted by the Fork 0.22.2.
The released accounts then immediately cast votes weighted by their pre-mined tokens, making TRON's founder undefeated in the battle.
Contributions. In a nutshell, this paper makes the following key contributions:
* To the best of our knowledge, our work presents the first data-driven analysis of the Steem Takeover, the first de facto takeover of a blockchain by centralized capital.
* Our work dives deeper into the level of decentralization Steem achieves and provides novel insights into the comparison of decentralization across DPoS and PoW blockchains.
We believe that the individual-level decentralization of any blockchain other than Steem and Ethereum remains unstudied. However, we believe the insights of this work, including the double perspective on decentralization and the methodology and metrics, could facilitate future works on drawing a more generic comparison over different consensus protocols.
* Our work provides novel insights into the correlation between decentralization and takeover.
Our study suggests that the targeted blockchain of the first de facto takeover is relatively less decentralized than the well-studied Ethereum blockchain.
Our study also demonstrates the long-term damage to the decentralization of both Steem and Hive after the takeover, which may suggest that no one won the battle from this perspective.
* Our work corroborates the existence of the Sword of Damocles over DPoS. That is, the neutral actors (e.g., accounts holding pre-mined tokens, exchanges holding users' tokens) may one day collude to defeat the rest users and takeover the blockchain.
To understand decentralization and its correlation with takeover,
we perform measurements across Steem, Hive and Ethereum and compare the results measured before and after the takeover.
Inspired by previous research, to obtain a more comprehensive understanding of decentralization in DPoS blockchains, our analysis is performed at two different levels: the pool level, where elected committee members produce blocks, and the deeper individual level, where Steem coin holders cast stake-weighted votes to elect committee members.
Our study concludes that, before the takeover, Steem tends to be less decentralized than Ethereum at the individual level.
During the takeover, Steem became extremely centralized.
After the takeover, Hive tends to be more decentralized than Steem but less decentralized than before.
In this paper, we present the first large-scale longitudinal study of the Steem Takeover.
To understand the decentralization,
we perform measurements at both the pool level and the deeper individual level of Steem, Hive and Ethereum using two metrics.
Our study concludes that, before the takeover, Steem tends to be less decentralized than Ethereum.
During the takeover, Steem became extremely centralized.
After the takeover, Hive tends to be more decentralized than Steem.
To understand the takeover process,
we identify clusters of major players and perform block-by-block analysis.
Our study demonstrates the decisive role of three major exchanges, who gave up their neutral stance and invested $7,469,573 worth of tokens to support TRON founder.
Our work dives deeper into the level of decentralization Steem achieves and provides novel insights into the correlation between decentralization and takeover.
Our work also corroborates the existence of the Sword of Damocles over DPoS. That is, the `neutral' actors may one day collude to takeover blockchains.
ยง REBUTTAL
ยง.ยง comment-1
@R1: not reasonable to compare a generic platform like Ethereum with a specific application of DPoS (i.e., Steem) and generalize the result to DPoS protocol
@R2: Why select Steem and Ethereum? Can you generalize the results and insights to PoS vs. PoW blockchains?
@R2: not fit a Web-focused conference
Response:
<more details on Steem>
We agree that Steem is not as generic as Ethereum. Instead, Steem is designed to be a social blockchain supporting a wide range of social applications for social users [1]. There are 324 Steem-based apps [2]. Among them, Steemit (decentralized Reddit) is the first application. Another application is QTube (decentralized YouTube) [3]. The relation between Steem and Steemit is briefly presented in Section 2 โโฆincluding its key social web application Steemitโฆโ. We will add the aforementioned details in Section 2.
<why Steem, relevance>
We have written the paper while keeping in mind that our primary audience is the web community. As described in the Introduction, our first goal is โto understand decentralization in DPoS blockchain on the web at a deeper levelโ. To figure it out, we believe the object of study should be a blockchain targeting web applications and web users, rather than more generic blockchains. To the best of our knowledge, with over 1 million social users and around 1 billion recorded operations, Steem is the most appropriate blockchain for this study, so we choose it. As described in the Introduction, our results โdive deeper into the level of decentralization DPoS blockchain achieves on the web, corroborate the existence of centralization riskโ, which for the first time introduce and demonstrate the existence of centralization risk in emerging blockchain-based web applications for the web community. We believe our work fits the topic โBlockchains and distributed ledgers in the context of Web securityโ included in the โsecurity, privacy and trustโ track of the conference CFP. We will incorporate the discussion in the Introduction.
<why Ethereum >
As presented in related work and section 3, โthe state-of-the-art study [4] measured โฆ Ethereum โฆโ, so we decide to โobtain the Ethereum participant reward data from โฆ [4]โ. The selection of Ethereum could help us focus on our main object, Steem, and compare the results with the state-of-the-art.
<generalize to PoS vs. PoW>
This study demonstrates the existence of centralization risk in a specific social blockchain running DPoS. We do not generalize the results to all DPoS blockchains but rather argue that DPoS is not free from issues raised by centralization. In other words, this study indicates that DPoS "can also" face these issues. We also believe that the individual-level decentralization of any blockchain other than Steem and Ethereum remains unstudied. However, we believe the insights of this work, including the double perspective on decentralization and the methodology of measuring DPoS decentralization, could facilitate future works. We will incorporate the clarification in the Conclusion.
ยง.ยง comment-2
@R2: consider more PoW and PoS blockchains, draw a more generic comparison over PoW and PoS
@R1: [5] studies various DPoS-based systems
Response:
Thank you for the suggestion and we truly agree with it! In fact, we believe the study of decentralization is developing in three directions, from pool-level to individual-level, from PoW to other consensus protocols, and from snapshot study to longitudinal study. For instance, [5] performed pool-level, snapshot study over various blockchains; [4] performed individual-level, longitudinal study over Ethereum; our study performed double-level, longitudinal study over Steem and Ethereum. We believe the next step is to expand double-level, longitudinal studies to more blockchains. However, this is non-trivial. For instance, [4] experienced difficulties in collecting individual-level data in Ethereum, and our study experienced difficulties in reconstructing historical stake snapshots in Steem. As described in Conclusion, โour study invites similar comparative studies at either pool level or individual level of other blockchainsโฆโ, just like how [4] inspired us, we believe our work would potentially facilitate more future works. We will incorporate the discussion in the Conclusion.
ยง DECENTRALIZATION IN STEEMIT
As a blockchain-based social media platform, Steemit distinguishes itself from traditional social sites through the decentralization brought by the blockchain.
In this section, we investigate the actual level of decentralization in Steemit by analyzing the committee member election process of DPoS in detail.
ยง.ยง Decentralized platform operation
Centralization and decentralization.
In traditional social media platforms, such as Reddit and Quora, a single entity (i.e., Reddit, Inc. and Quora, Inc.) owns the complete data generated by users and operates the websites.
In contrast, Steemit not only open sources its front-end, Condenser and the back-end Steem-blockchainย <cit.>, but also makes all its data in the blockchain available for public accessย <cit.>.
Rather than functioning as a single entity, the Steemit platform is operated by a group of 21 committee members elected by its users.
Any user in Steemit can run a server, install the Steem-blockchain and synchronize the blockchain data to the latest block.
Then, by sending a committee member_update operation to the network, the user can become a committee member and have a chance to operate the website and earn producer rewards if he or she can gather enough support from the electors to join the 21-member committee member group.
Steemit-2 and STEEM-2.
As anyone can copy the code and data of Steemit, one may naturally worry that an adversary, say Bob, could build a `fake' Steemit platform, Steemit-2, that has exactly the same functionality and historical data as Steemit.
To distinguish Steemit-2 from Steemit, we name the cryptocurrency issued by Steemit-2 as STEEM-2 (and also SBD-2).
Here, a natural question that arises is what makes people believe that Steemit, rather than Steemit-2, is the `real' one?
In the decentralized network, the opinion of `which one is real' is determined by the consensus of Steemit users or, to be more precise, the coin holders.
With the DPoS consensus protocol, each block storing data of Steemit is signed by a top committee member elected by the coin holders, which may represent the consensus among the coin holders.
Therefore, unless most coin holders switch to Steemit-2, the new blocks generated in Steemit-2 will be signed by committee members elected by only a few coin holders and will not be recognized by the entire community.
Factors affecting decentralization of Steemit.
In our work, we study the characteristics of decentralization in Steemit by analyzing the committee member election process.
Concretely, we aim at answering the following three key questions:
(1) Do the members of the 21-member committee member group operating the platform have a high update rate, or are the people sitting in the 21 seats rarely changed?
(2) What is the power of big coin holders? Is it possible that a single big shareholder can determine who to join the 21-member committee member group?
(3) Are there any value-transfer operations among the committee member electors and electees? Is it possible that the election involves trading activity such as bribery?
In general, we consider Steemit to have an ideal level of decentralization if members of the 21-member committee member group are frequently updated and if these members all have different interests.
We also consider Steemit to have a relatively high decentralization closer to the ideal level if it allows more people to join the 21-member committee member group, if the power of big coin holders is not decisive and if the election is not heavily correlated with value-transfer operations.
We investigate these aspects in the following subsection.
ยง.ยง Analyzing committee member election
Update of the 21-member committee member group.
To investigate the update of the 21-member committee member group, we extract the producer of each block from block 1 to block 25,563,499 and plot the results as a heatmap in Figure <ref>.
We first compute the number of blocks produced by each committee member and sort the committee members by their total number of produced blocks. For the top-30 committee members with the highest number of produced blocks, we plot their attendance rate in the 21-member committee member group during the thirty months.
During a month that has thirty days, there should be 30*24*60*60/3= 864,000 blocks generated because blocks are generated every three seconds.
For every 21 blocks (63 seconds), the 21 elected committee members are shuffled to determine their order for generating the next 21 blocks.
Therefore, if a committee member has a 100% attendance rate in the elected group, it can produce at most 864,000/21 ≈ 41,142 blocks in a thirty-day month.
In Figure <ref>, we find that most of the top-30 committee members, once entering the 21-member committee member group, maintained a high attendance rate close to 100% for a long time.
From month 12 to month 30, namely one and a half years, the top-12 committee members firmly held at least 10 seats, nearly half of the positions in the 21-member committee member group.
From month 23 to month 30, 17 seats were held by 18 committee members.
From committee member 13 to committee member 30, we can observe a transition period, namely month 15 to month 20, during which the places of nine old committee members were gradually taken by nine new committee members.
Overall, the 21-member committee member group tends to show a relatively low update rate.
The majority of seats were firmly controlled by a small group of committee members.
We do observe some switching of seats, but only at a low frequency.
Power of big coin holders.
Next, we investigate the influence of big coin holders in the committee member election.
As described in Tableย <ref>, a user has two ways to vote for committee memberes.
The first option is to perform committee member_vote operations to directly vote for at most 30 committee members.
The second option is to perform a committee member_proxy operation to set another user as an election proxy.
For example, Alice may set Bob to be her proxy. Then, if both Alice and Bob own $100 worth of shares, any vote cast by Bob will be associated with a weight of $200 worth of shares. Once Alice deletes the proxy, the weight of Bob's votes will reduce to $100 worth of shares immediately.
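The stake-weighted voting with proxies described above can be sketched in a few lines. The snippet below is purely illustrative (the account names, stake values, and function name are hypothetical, and only a single level of proxying is handled); it is not Steem's actual implementation.

```python
# Illustrative sketch of proxy-weighted election stake (not Steem's real code).
def effective_weights(own_stake, proxy_of):
    """own_stake: {account: stake in dollars}; proxy_of: {account: proxied account or None}."""
    weights = dict(own_stake)
    for account, proxy in proxy_of.items():
        if proxy is not None:
            # The proxied account votes with its own stake plus all stake delegated to it.
            weights[proxy] = weights.get(proxy, 0) + own_stake[account]
            weights[account] = 0  # the delegator's direct votes no longer carry weight
    return weights

stake = {"alice": 100, "bob": 100}
print(effective_weights(stake, {"alice": "bob", "bob": None}))  # {'alice': 0, 'bob': 200}
```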
In Figure <ref>, we plot the stacked bar chart representing a snapshot of weighted votes received by the top-60 committee members, who have produced the highest numbers of blocks, at block 25,563,499.
The figure shows the distribution of votes cast by the top-29 electors whose votes have the highest weight, either brought by their own shares or shares belonging to users setting these electors as proxy.
The sum of weighted votes cast by all other electors outside the top-29 is represented as `sum of rest'.
From Figureย <ref>, we see a few top electors have their votes weighted by striking amounts of shares.
The top-1 elector has his or her votes weighted by $19,800,000 worth of shares. A deeper investigation regarding the top-1 elector shows that all the shares affecting his or her weight are not directly owned by this user, but owned by another user, who set the top-1 elector as proxy.
The runner-up elector, which is the account belonging to the main exchange used by Steemit users, has its votes associated with a weight of $12,100,000 worth of shares.
From the figure, we see that all 27 committee members voted by the top-1 elector enter the top-50, all 18 committee members receiving votes from both of the top-2 electors enter the top-30, and the only committee member receiving votes from all of the top-3 electors becomes the top-1 committee member.
We can also observe that 19 out of the top-20 committee members receive at least two votes from the top-3 electors and 29 out of the top-30 committee members receive at least one vote from the top-3 electors.
As illustrated by the results, the distribution of vote weight in the committee member election is heavily skewed, which suggests that the election of the 21 committee members may be significantly impacted by a few big coin holders.
This phenomenon may not be a good indication for a decentralized social media platform. In the worst case, if the 21-member committee member group is controlled by a single shareholder, the platform will simply function as a centralized model.
Value-transfer operations among election stakeholders.
Finally, we investigate the value-transfer operations performed among the top-30 committee members, the top-29 electors and the accounts selecting the top-29 electors as proxy.
The data is plotted as a directed graph in Figureย <ref>, where edges are colored by their source node color.
The edge thickness represents the amount of transferred value from source to target, which is the sum of value transferred through transfer and transfer_to_vesting operations.
Since most Steemit users use runner-up elector to trade cryptocurrency, the graph does not show edges connected with that account.
Our first observation about this graph is that only two of the top-30 committee members and three of the top-29 electors never perform value-transfer operations, while all other investigated users form a value-transfer network with an average degree of 3.34, an average clustering coefficient of 0.21 and an average path length of 3.96.
In the network, we find a cluster of users who select top-29 electors as proxy. After manually checking the profiles of these users, we find that this cluster represents a community of Korean users, which is connected to the rest of the network mainly through several leaders of the Korean community.
Overall, what we observed from the value-transfer operations suggests that the majority of the investigated election stakeholders have economic interactions, which may not be a good indication of a perfectly decentralized committee member group where the members are expected to have different interests.
After confirming the existence of value-transfer operations among the majority of top committee members and top electors, we then want to understand the correlation between these operations and the committee member election.
We find 2445 transfer operations and 92 transfer_to_vesting operations performed among the investigated users.
The transfer operation allows senders to leave a message to recipients of the fund at its `memo' area, which may reveal the purpose of the fund transfer.
Among the 2445 transfer operations, we find 1349 operations with filled `memo' while only 29 `memo' messages are related to committee member election.
The 29 `memo' messages were sent by four committee members in the top-30 for the purpose of seeking votes. All the transfer operations involving these messages transferred funds worth less than $5, which may indicate that the message senders were more likely using the transfer operations as a `secret' channel to deliver messages, rather than as a way to offer a bribe.
Overall, what we observed from the value-transfer operations suggests that the majority of the investigated election stakeholders have economic interactions. However, these interactions may not necessarily reveal unexpected behaviors such as bribed committee member votes.
Although the economic interactions may not be a good indication of a perfectly decentralized committee member group where the members are expected to have different interests, it is good to see that there is little correlation between the economic interactions and committee member election.
ยง.ยง Discussion
Our study in this section demonstrates that the 21-member committee member group tends to show a relatively low update rate and these seats may actually be controlled by a few big coin holders.
Our study also indicates that the majority of top committee members and top electors form a value-transfer network and have economic interactions.
Together, these results suggest that the actual level of decentralization in Steemit is far lower than the ideal level.
One key reason for the low decentralization is the use of DPoS consensus protocol.
The DPoS consensus protocol has been widely adopted by mainstream blockchain-based platforms such as BTS and EOS and has been proved to be an effective approach of enhancing transaction rates of blockchains.
However, there has been a long debate surrounding decentralization of DPoS.
The opponents of DPoS censure that DPoS-powered platforms trade decentralization for scalability as the consensus in these platforms is only reached among a small committee (e.g., the 21-member committee member group in Steemit), instead of among all interested members (e.g., all miners in Bitcoin and Ethereum powered by Proof-of-Work (PoW) consensus).
The supporters of DPoS argue that those PoW-powered platforms has been controlled by a few mining pools, showing even lower decentralization than the DPoS-powered platforms.
The data-driven analysis in this section deeply investigates the underlying behaviors of participants interested in the core component of DPoS, namely the committee member election.
The results reveal that the current electoral system is making the decentralization quite fragile, as the committee intended to be decentralized is actually quite centralized in practice.
A quick solution to address the symptoms is to restrict the power of big stakeholders, such as cutting the number of times that big stakeholders can vote in the election.
A better way of solving the problem is to replace DPoS with more advanced consensus protocols that can form the committee without an election involving interactions among users.
For example, Algorandย <cit.>, a recent cryptocurrency, proposed a new Byzantine Agreement (BA) protocol that makes the election be fairly performed by the Verifiable Random Functions in cryptography, rather than by users.
|
http://arxiv.org/abs/2306.04904v2
|
20230608031621
|
An adaptive augmented Lagrangian method for training physics and equality constrained artificial neural networks
|
[
"Shamsulhaq Basir",
"Inanc Senocak"
] |
cs.LG
|
[
"cs.LG",
"physics.comp-ph",
"physics.flu-dyn",
"35Qxx, 35Exx, 76Dxx, 68Wxx, 65Mxx, 65Kxx, 49Mxx, 49Kxx,",
"G.1.6; G.1.8; G.1.10; J.2; I.2; I.6"
] |
Physics and equality constrained artificial neural networks (PECANN) are grounded in methods of constrained optimization to properly constrain the solution of partial differential equations (PDEs) with their boundary and initial conditions and any high-fidelity data that may be available. To this end, adoption of the augmented Lagrangian method within the PECANN framework is paramount for learning the solution of PDEs without manually balancing the individual loss terms in the objective function used for determining the parameters of the neural network. Generally speaking, ALM combines the merits of the penalty and Lagrange multiplier methods while avoiding the ill-conditioning and convergence issues associated with each of these methods when used alone. In the present work, we apply our PECANN framework to solve forward and inverse problems that have an expanded and diverse set of constraints. We show that ALM, with its conventional formulation for updating the penalty parameter and Lagrange multipliers, stalls for such challenging problems. To address this issue, we propose an adaptive ALM in which each constraint is assigned a unique penalty parameter that evolves adaptively according to a rule inspired by the adaptive subgradient method. Additionally, we revise our PECANN formulation for improved computational efficiency and memory savings, which allows for mini-batch training. We demonstrate the efficacy of our proposed approach by solving several forward and PDE-constrained inverse problems with noisy data, including simulation of incompressible fluid flows with a primitive-variables formulation of the Navier-Stokes equations up to a Reynolds number of 1000.
ยง INTRODUCTION
Partial differential equations (PDEs) are used to describe a wide range of physical phenomena, including sound propagation, heat and mass transfer, fluid flow, and elasticity. The most common numerical methods for solving PDE problems involve domain discretization, such as finite difference, finite volume, finite element, and spectral element methods. However, the accuracy of the solution heavily depends on the quality of the mesh used for domain discretization. Additionally, mesh generation can be tedious and time-consuming, particularly for complex geometries or problems with moving boundaries. While numerical methods are efficient for solving forward problems, they are not well-suited for solving inverse problems, particularly data-driven modeling. In this regard, neural networks can be viewed as a promising alternative meshless approach to solving PDEs.
The use of neural networks for solving PDEs can be traced back the early 1990s <cit.>. Recently, there has been a resurgence of interest in the use of neural networks to solve PDEs <cit.>, particularly after the introduction of the term physics-informed neural networks (PINNs) <cit.>. In PINNs, the governing equations of a physical phenomenon of interest are embedded in an objective function along with any data to learn the solution to those governing equations. Although the performance of PINNs can be influenced by several factors such as the choice of activation function <cit.>, sampling strategy <cit.>, and architecture <cit.>, the formulation of the objective function that is used to determine the neural network parameters is paramount for satisfactory predictions.
In the baseline PINN formulation <cit.>, several loss terms with different physical scales are aggregated into a single composite objective function with tunable weights, which is a rudimentary approach to a multi-objective optimization problem. Since neural networks are trained using gradient descent type optimizers, the model parameters can be influenced by the larger gradient of the loss function, irrespective of the physical scale. This can lead to unstable training, as the optimizer may prioritize one objective over another, sacrificing either the PDE loss or the boundary loss. Therefore, adjusting the interplay between the objective terms requires manual hyperparameter tuning, which can be a time-consuming and challenging task. Additionally, the absence of validation data or prior knowledge of the solution to the PDE for the purpose of tuning can render the baseline PINN approach impractical for the solution of PDEs <cit.>.
Dynamic determination of the weights in the composite objective function of the baseline PINN approach has attracted the attention of several researchers. <cit.> proposed an empirical method that has several limitations that we discussed in a prior work <cit.>. <cit.> proposed an empirical method by considering a bi-objective loss function for the solution of linear PDEs. <cit.> proposed a dual dimer method to adjust the interplay between the loss terms by searching for a saddle point.
<cit.> studied PINNs using the Neural Tangent Kernel (NTK). NTK provides insights into the dynamics of fully-connected neural networks with infinite width during training via gradient descent. The authors proposed using the eigenvalues of the NTK to adapt the interplay between the objective terms. <cit.> proposed self-adaptive PINNs (SA-PINN) with a minimax formulation to adjust the interplay between the loss terms. Lagrange multipliers are sent through an empirical mask function such as sigmoid, which makes the dual unconstrained optimization formulation not equivalent to the constrained optimization problem. Because of that, equality constraints are not strictly enforced. The SA-PINN method produced results that are comparable or better than the NTK method <cit.>. In the present work, we compare our proposed method with both of these approaches for the solution of the wave equation.
By and large, the aforementioned works adopt an unconstrained optimization approach in the first place to formulate the objective function used to determine neural network parameters. In physics and equality constrained artificial neural networks (PECANNs) <cit.>, we pursued a constrained optimization method to formulate the objective function. Specifically, we used the augmented Lagrangian method (ALM) <cit.> to formally cast the constrained optimization problem into an unconstrained one in which the PDE domain loss is constrained by boundary and initial conditions and by any high-fidelity data that may be available. It is worth noting that ALM combines the merits of the penalty method and the Lagrange multiplier method. It balances feasibility and optimality by updating a penalty parameter to control the influence of constraint violations <cit.>.
In what follows, we show that the conventional ALM with a single penalty parameter used in the PECANN model <cit.> struggles when applied to problems with multiple constraints of varying characteristics. To overcome this limitation, we propose an adaptive augmented Lagrangian method, which introduces multiple penalty parameters and independently updates them based on the characteristics of each constraint. Additionally, we propose a computationally efficient formulation of the objective function to handle a large number of constraints to enable mini-batch training while maintaining a small memory footprint during training. We solve several forward and inverse PDE problems, including the solution of incompressible fluid flow equations with a primitive-variables formulation up to a Reynolds number of 1000, to demonstrate the efficacy of our PECANN model with an adaptive augmented Lagrangian method. The codes used to produce the results in this paper are publicly available at <https://github.com/HiPerSimLab/PECANN>
ยง TECHNICAL FORMULATION
Let us consider a general constrained optimization problem with equality constraints
min_θ 𝒥(θ), such that 𝒞_i(θ) = 0, ∀ i ∈ ℰ,
where the objective function 𝒥 and the constraint functions 𝒞_i are all smooth, real-valued functions on a subset of ℝ^n, and ℰ is a finite set of equality constraints. We can write an equivalent minimax unconstrained dual problem using the augmented Lagrangian method as follows
min_θ max_λ ℒ(θ; λ, μ) = 𝒥(θ) + ∑_i∈ℰ λ_i 𝒞_i(θ) + (μ/2) ∑_i∈ℰ 𝒞_i^2(θ).
We can swap the order of the minimum and the maximum by using the following minimax inequality concept, or weak duality,
max_λ min_θ ℒ(θ; λ) ≤ min_θ max_λ ℒ(θ; λ).
The minimization can be performed for a sequence of multipliers generated by
λ_i ← λ_i + μ 𝒞_i(θ), ∀ i ∈ ℰ.
We should note that μ can be viewed as a global learning rate for the Lagrange multipliers. ALM combines the merits of the penalty method and the Lagrange multiplier method by updating the penalty parameter in such a way that it balances the trade-off between feasibility and optimality and ensures convergence. Traditionally, textbook descriptions of ALM rely on a single penalty parameter to address all constraints as in (<ref>), and updating the penalty parameter is often done through empirical strategies. However, as we show in a later section, these update strategies become ineffective when there are multiple constraints with different characteristics. Next, we consider two existing strategies to update the penalty parameter in ALM.
First, in Algorithm <ref>, we monotonically increase the penalty parameter μ at each training iteration until a maximum safeguarding value (i.e., μ_max) is reached. Safeguarding the penalty parameter is a common strategy and prevents it from reaching excessively large values that may lead to numerical instability or overflow. To establish a clear distinction from other strategies and facilitate comparison, we will refer to Algorithm <ref> as ALM with monotonically increasing penalty update (MPU).
In the context of training neural networks, the inputs to Algorithm <ref> are the parameters of the neural network model θ^0, a maximum safeguarding penalty parameter μ_max, and a multiplicative factor β for increasing the penalty parameter μ at each iteration. We should note that in Algorithm <ref>, both the Lagrange multipliers and the penalty parameter are updated at each iteration <cit.>. A similar approach without the maximum safeguarding penalty parameter has been used in <cit.>. To prevent divergence, the maximum penalty parameter was set to 10^4. However, updating the penalty parameter at each epoch may increase it too aggressively, which could lead to divergence, as we will demonstrate in a later section. Additionally, finding a suitable maximum penalty parameter can be challenging.
Another strategy is to update the penalty parameter μ only when the constraints have not decreased sufficiently at the current iteration <cit.>. In Algorithm <ref>, we present an augmented Lagrangian method in which the penalty parameter is updated conditionally <cit.>. A similar strategy has been adopted in the work of <cit.> to train an encoder-decoder neural network for approximating the Fokker-Planck-Landau collision operator. We will refer to Algorithm <ref> as ALM with conditional penalty update (CPU).
Our inputs to Algorithm <ref> are η, a placeholder for the previous value of the constraints, μ_max, a safeguarding penalty parameter, and β, a multiplicative weight for increasing the penalty parameter. Similar to the previous algorithm, the maximum penalty parameter is set to 10^4 to prevent numerical overflow and divergence. It should be noted that finding a suitable maximum penalty parameter can be challenging.
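For concreteness, the two classical update rules described above can be sketched as follows. This is a minimal illustration rather than the authors' released code; the values of beta, mu_max and the tolerance used in the conditional test are assumptions.

```python
import numpy as np

def mpu_update(mu, beta=2.0, mu_max=1e4):
    """Monotonic penalty update (MPU): grow mu every epoch up to a safeguard."""
    return min(beta * mu, mu_max)

def cpu_update(mu, constraint_norm, prev_norm, beta=2.0, mu_max=1e4, tol=0.25):
    """Conditional penalty update (CPU): grow mu only if the constraint violation
    has not decreased sufficiently relative to the previous iterate."""
    if constraint_norm > tol * prev_norm:
        mu = min(beta * mu, mu_max)
    return mu

# Toy usage: Lagrange multiplier ascent with the conditional schedule.
mu, lam, prev = 1.0, 0.0, np.inf
for epoch in range(5):
    c = 1.0 / (epoch + 1)          # stand-in for the constraint residual C(theta)
    lam = lam + mu * c             # dual (multiplier) update of ALM
    mu = cpu_update(mu, abs(c), prev)
    prev = abs(c)
```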
In our recent investigations, we have discovered that the rate of update for Lagrange multipliers can be a critical factor when dealing with problems that have different types of constraints (such as flux, data, or PDE constraints). Because the penalty parameter behaves like a learning rate for the Lagrange multiplier, employing the same update rate or penalty parameter for all constraints can lead to issues of instability during training. In some cases, the parameter may be too large or too small for certain constraints, which can adversely affect the optimization process. As such, we propose a new approach to address this issue. Our method involves assigning a unique penalty parameter for each constraint type with its own update rate. By tailoring the penalty parameter to each specific constraint type, we can effectively manage the optimization process and ensure greater stability during training. We will show in the results section that we are able to learn the solution of challenging PDEs when other strategies fail or perform poorly.
ยง.ยง Adaptive augmented Lagrangian method
In this section, we propose an augmented Lagrangian method with a novel adaptive update strategy for multiple penalty parameters to handle diverse constraints in the optimization problem. We note that it is common to use a single penalty parameter that is monotonically increased in most implementations of the ALM. However, that strategy becomes insufficient to tackle challenging problems. As we discuss above, we need a unique, adaptive penalty parameter or learning rate for each Lagrange multiplier associated with a constraint. This tailored approach ensures that the penalty parameter conforms to the characteristics of individual Lagrange multipliers, enabling effective handling of diverse constraints. We formulate our unconstrained optimization problem for problem (<ref>) as follows:
max_λ min_θ ℒ(θ, λ; μ) = 𝒥(θ) + ∑_i∈ℰ λ_i 𝒞_i(θ) + (1/2) ∑_i∈ℰ μ_i 𝒞_i^2(θ),
where λ_i are the Lagrange multipliers and μ_i are the corresponding penalty parameters. The minimization can be performed using a variant of a gradient descent type optimizer for a sequence of Lagrange multipliers generated by
λ_i ← λ_i + μ_i 𝒞_i(θ), ∀ i ∈ ℰ,
during training. Upon examining the dual update of the augmented Lagrangian method as shown in Eq. (<ref>), we observe that it involves a gradient ascent step with learning rates μ_i for each Lagrange multiplier λ_i. Hence, a suitable approach is to adopt the strategy of the RMSprop algorithm by G. Hinton <cit.> to find an independent effective learning rate, or penalty parameter, for each Lagrange multiplier. The main reason behind our choice is that the update strategy in RMSprop is consistent with the dual update of ALM as in Eq. (<ref>). Hence, we divide our global learning rate by the weighted moving average of the squared gradients of our Lagrange multipliers as follows
v̄_i ← α v̄_i + (1 − α) 𝒞_i(θ)^2, ∀ i ∈ ℰ,
μ_i ← γ / (√(v̄_i) + ϵ), ∀ i ∈ ℰ,
λ_i ← λ_i + μ_i 𝒞_i(θ), ∀ i ∈ ℰ,
where v̄_i is the weighted moving average of the squared gradient of the Lagrange multiplier λ_i, γ is a scheduled global learning rate, ϵ is a term added to the denominator to avoid division by zero for numerical stability, and α is a smoothing constant. Algorithm <ref> presents our training procedure.
The inputs to the algorithm are an initialized set of parameters for the network θ^0, a global learning rate γ, a smoothing constant α, collocation points, noisy measurement data, and high-fidelity data used to calculate the augmented Lagrangian. In Algorithm <ref>, the Lagrange multipliers are initialized to 1.0 and their respective averaged squared gradients to zero. In summary, our main approach reduces the penalty parameter for Lagrange multiplier updates with large and rapidly changing gradients, while increasing the penalty parameter for Lagrange multiplier updates with small and slowly changing gradients. Next, we delve into the subtle differences in learning the solution of forward and inverse problems and illustrate how they can be formulated as constrained optimization problems, which can then be cast as unconstrained optimization problems using the augmented Lagrangian method.
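A minimal sketch of the adaptive update is given below. It mirrors the equations above (running average of squared constraint residuals, per-constraint penalty, dual ascent), but the class name, the default values of gamma, alpha and epsilon, and the way the loss is assembled are illustrative choices, not the authors' implementation.

```python
import torch

class AdaptiveALM:
    """Sketch of the adaptive penalty update: an RMSprop-style running average of
    squared constraint residuals sets one penalty parameter per constraint, which
    also serves as the learning rate for the corresponding Lagrange multiplier."""
    def __init__(self, n_constraints, gamma=1e-2, alpha=0.99, eps=1e-8):
        self.lam = torch.ones(n_constraints)   # Lagrange multipliers, initialized to 1
        self.mu = torch.ones(n_constraints)    # per-constraint penalty parameters
        self.v = torch.zeros(n_constraints)    # running average of C_i(theta)^2
        self.gamma, self.alpha, self.eps = gamma, alpha, eps

    def loss(self, objective, c):
        # Augmented Lagrangian with the current multipliers and penalties;
        # c is a tensor of constraint values, one entry per constraint type.
        return objective + (self.lam * c).sum() + 0.5 * (self.mu * c.pow(2)).sum()

    @torch.no_grad()
    def dual_update(self, c):
        # Called once per epoch, after the primal (network parameter) update.
        self.v = self.alpha * self.v + (1 - self.alpha) * c.pow(2)
        self.mu = self.gamma / (self.v.sqrt() + self.eps)
        self.lam = self.lam + self.mu * c
```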
ยง.ยง Constrained optimization formulation for solving forward and inverse problems
In this section, we explain our proposed formulation for solving a generic PDE problem with boundary and initial conditions for demonstration purposes. We will also compare our new formulation to our previous approach and highlight the advantages and improvements of the new approach. We first formulate a constrained optimization problem by minimizing the loss on the governing PDE while constraining the individual points on the boundary and initial conditions, assuming these conditions are noise-free and can be defined as equality constraints. Consider a scalar function u(x,t): ℝ^(d+1) → ℝ on the domain Ω ⊂ ℝ^d with its boundary ∂Ω satisfying the following partial differential equation
ℱ(x, t; ∂u/∂t, ∂²u/∂t², ⋯, ∂u/∂x, ∂²u/∂x², ⋯, ν) = 0, ∀ (x,t) ∈ 𝒰,
ℬ(x, t, g; u, ∂u/∂x, ⋯) = 0, ∀ (x,t) ∈ ∂𝒰,
ℐ(x, t, h; u, ∂u/∂t, ⋯) = 0, ∀ (x,t) ∈ Γ,
where ℱ is the residual form of the PDE containing differential operators, ν is a vector of PDE parameters, ℬ is the residual form of the boundary condition containing a source function g(x,t), and ℐ is the residual form of the initial condition containing a source function h(x,t). Here 𝒰 = {(x,t) | x ∈ Ω, t ∈ [0,T]}, ∂𝒰 = {(x,t) | x ∈ ∂Ω, t ∈ [0,T]} and Γ = {(x,t) | x ∈ Ω, t = 0}.
ยง.ยง.ยง Formulation for problems with point-wise constraints
In this section, we present a constrained optimization formulation, as represented in equation (<ref>), within the context of solving partial differential equations (PDEs). Considering Eq.(<ref>) with its boundary condition (<ref>) and initial condition (<ref>), we previously formulated the following constrained optimization problem <cit.>:
min_θ ∑_{i=1}^{N_ℱ} ‖ℱ(x^(i), t^(i); ν, θ)‖_2^2,
subject to
ϕ(ℬ(x^(i), t^(i), g^(i); θ)) = 0, ∀ (x^(i), t^(i), g^(i)) ∈ ∂𝒰, i = 1, ⋯, N_ℬ,
ϕ(ℐ(x^(i), t^(i), h^(i); θ)) = 0, ∀ (x^(i), t^(i), h^(i)) ∈ Γ, i = 1, ⋯, N_ℐ,
where N_ℱ, N_ℬ and N_ℐ are the number of data points in 𝒰, ∂𝒰 and Γ, respectively, and ϕ is a distance function. We should note that the number of Lagrange multipliers scales with the number of constraints N_ℬ and N_ℐ. When dealing with a substantial number of constraints (i.e., large N_ℬ and N_ℐ), the efficiency of the optimization can be compromised due to the size of the computational graph and the memory requirements. To mitigate this, we propose to minimize the expected loss on the PDE while incorporating an expected equality constraint on the boundary and initial conditions. Furthermore, we justify our proposed approach by examining the distribution of the Lagrange multipliers for a large number of constraints.
ยง.ยง.ยง Computationally efficient formulation based on expectation of constraints
In this section, we introduce an efficient constrained optimization formulation, represented in the form of equation (<ref>), for solving partial differential equations (PDEs). In our original PECANN approach <cit.>, we learned the solution of PDEs by constraining the optimization problem point-wise using the formulation described in the previous section. However, this strategy becomes computationally expensive for challenging problems, which benefit from increased number of collocation points. Constraining the learning problem pointwise is also not suitable for taking advantage of deep learning techniques designed to accelerate the training process. As we will demonstrate later in the present work, for large number of point-wise constraints, Lagrange multipliers assume a probabilistic distribution with a clear expected value. Based on this observation, we revise our formulation for point-wise constraints such that instead of constraining the loss on individual high fidelity data, we constrain the expected loss on a batch of our high fidelity data. Consequently, Lagrange multipliers for this particular approach will represent expected values, which has a direct impact on computational efficiency because Lagrange multipliers are not attached to any individual data point, we can do mini-batch training if our data does not fit in the memory for full batch training. Considering Eq.(<ref>) with its boundary condition (<ref>) and initial condition (<ref>), we write the following constrained optimization problem:
min_θ 𝒥(θ) = (1/N_ℱ) ∑_{i=1}^{N_ℱ} ‖ℱ(x^(i), t^(i); ν, θ)‖_2^2,
subject to
(1/N_ℬ) ∑_{i=1}^{N_ℬ} ϕ(ℬ(x^(i), t^(i), g^(i); θ)) = 0, ∀ (x^(i), t^(i), g^(i)) ∈ ∂𝒰, i = 1, ⋯, N_ℬ,
(1/N_ℐ) ∑_{i=1}^{N_ℐ} ϕ(ℐ(x^(i), t^(i), h^(i); θ)) = 0, ∀ (x^(i), t^(i), h^(i)) ∈ Γ, i = 1, ⋯, N_ℐ,
where N_ℱ, N_ℬ and N_ℐ are the number of data points in 𝒰, ∂𝒰 and Γ, respectively, and ϕ is a convex distance function. In all of the experiments, ϕ is a quadratic function unless specified otherwise. Next, we present a typical PDE-constrained inverse problem.
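The expectation-of-constraints form above reduces to computing one mean PDE loss and one mean constraint value per constraint type. A hedged sketch is shown below; the callables pde_residual, bc_residual and ic_residual are assumed user-supplied functions returning point-wise residuals, and phi is taken to be quadratic as in the experiments. The returned objective and constraint vector can then be fed to an augmented Lagrangian loop such as the AdaptiveALM sketch above.

```python
import torch

def expected_constraint_losses(model, pde_residual, bc_residual, ic_residual,
                               x_pde, x_bc, x_ic):
    """Sketch of the expectation-of-constraints formulation (quadratic phi)."""
    J = pde_residual(model, x_pde).pow(2).mean()      # objective: mean PDE loss
    C_bc = bc_residual(model, x_bc).pow(2).mean()     # expected boundary constraint
    C_ic = ic_residual(model, x_ic).pow(2).mean()     # expected initial constraint
    return J, torch.stack([C_bc, C_ic])
```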
ยง.ยง.ยง Formulation for PDE-constrained inverse problems
In our original formulation of the PECANN framework <cit.>, we minimized the loss on the governing PDE and noisy data while constraining the loss with high-fidelity data <cit.>. However, a disparity in physical scales between the partial differential equations (PDEs) and noisy measurement data can exist, which could complicate the inference problem. To address this challenge of solving inverse PDE problems using noisy and high-fidelity data, we propose a PDE-constrained formulation for inverse problems that minimizes the loss on noisy data while treating the governing PDE and any available high-fidelity data as constraints.
Let us consider a generic inverse PDE problem to demonstrate the new formulation. We assume that the initial condition is known exactly and can be treated as high-fidelity data. Additionally, we assume that the underlying physical problem is governed by a typical PDE operator as shown in Eq. (<ref>). Given a set of noisy measurement data {(x^(i), t^(i)), û^(i)}_{i=1}^{N_ℳ}, we can minimize the following objective
min_{θ,ν} (1/N_ℳ) ∑_{i=1}^{N_ℳ} ‖u(x^(i), t^(i); θ) − û^(i)‖_2^2,
subject to
(1/N_ℱ) ∑_{i=1}^{N_ℱ} ϕ(ℱ(x^(i), t^(i); ν, θ)) = 0, ∀ (x^(i), t^(i)) ∈ 𝒰, i = 1, ⋯, N_ℱ,
(1/N_ℐ) ∑_{i=1}^{N_ℐ} ϕ(ℐ(x^(i), t^(i), h^(i); θ)) = 0, ∀ (x^(i), t^(i), h^(i)) ∈ Γ, i = 1, ⋯, N_ℐ,
where N_ℱ and N_ℐ are the number of data points in 𝒰 and Γ, respectively, and ϕ is a convex distance function. In all of the experiments, ϕ is a quadratic function unless specified otherwise. It is worth noting that if additional high-fidelity data is available, we can incorporate it as an additional constraint in the above formulation along with the other constraints.
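Analogously, the inverse formulation swaps the roles of data and PDE: the noisy-data misfit becomes the objective, while the PDE (with learnable parameters) and the high-fidelity initial condition become expected constraints. The sketch below is illustrative; nu would typically be a trainable tensor optimized jointly with the network parameters, and the residual callables are assumed helpers.

```python
import torch

def inverse_objective_and_constraints(model, nu, pde_residual, ic_residual,
                                      x_noisy, u_noisy, x_pde, x_ic):
    """Sketch of the PDE-constrained inverse formulation (quadratic phi)."""
    J = (model(x_noisy) - u_noisy).pow(2).mean()          # noisy-data misfit (objective)
    C_pde = pde_residual(model, nu, x_pde).pow(2).mean()  # expected PDE constraint
    C_ic = ic_residual(model, x_ic).pow(2).mean()         # high-fidelity IC constraint
    return J, torch.stack([C_pde, C_ic])
```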
ยง APPLICATIONS
In this section, we apply our proposed augmented Lagrangian method with an adaptive update strategy to learn the solution of forward and inverse problems. We provide additional examples in the Appendix as well. Given an n-dimensional vector of predictions û ∈ ℝ^n and an n-dimensional vector of exact values u ∈ ℝ^n, we define the relative Euclidean (ℒ^2-norm) error, the infinity (ℒ^∞-norm) error, and the root-mean-square (RMS) error, respectively, to assess the accuracy of our predictions:
ℰ_r(û, u) = ‖û − u‖_2 / ‖u‖_2,
ℰ_∞(û, u) = ‖û − u‖_∞,
RMS = √((1/n) ∑_{i=1}^{n} (û^(i) − u^(i))^2),
where ‖·‖_2 denotes the Euclidean norm and ‖·‖_∞ denotes the maximum norm. All the codes used in the following examples are available as open source at <https://github.com/HiPerSimLab/PECANN>.
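These three metrics translate directly into a few lines of NumPy; the helper names below are our own.

```python
import numpy as np

def relative_l2(u_hat, u):
    # Relative Euclidean (L2) error.
    return np.linalg.norm(u_hat - u) / np.linalg.norm(u)

def linf_error(u_hat, u):
    # Maximum (infinity-norm) error.
    return np.max(np.abs(u_hat - u))

def rms_error(u_hat, u):
    # Root-mean-square error.
    return np.sqrt(np.mean((u_hat - u) ** 2))
```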
ยง.ยง Forward problem: unsteady heat transfer in a composite medium
In this section, we study a heat transfer problem in a composite material where the temperature and heat fluxes are matched across the interface <cit.>. This canonical problem has an initial condition, a boundary condition, and a flux constraint, which makes it ideal for highlighting the implementation intricacies of our method as well as demonstrating the significant improvement we achieve using our proposed formulation. Consider a time-dependent heat equation in a composite medium,
∂u(x,t)/∂t = ∂/∂x [κ(x,t) ∂u(x,t)/∂x] + s(x,t), (x,t) ∈ Ω × [0,τ],
along with the Dirichlet boundary condition
u(x,t) = g(x,t), (x,t) ∈ ∂Ω × (0,τ],
and the initial condition
u(x,0) = h(x), x ∈ Ω,
where u is the temperature, s is a heat source function, κ is the thermal conductivity, and g and h are source functions. The composite medium consists of two non-overlapping sub-domains, where Ω = Ω_1 ∪ Ω_2. We consider the thermal conductivity of the medium to vary as follows:
κ(x,t) = 1 for (x,t) ∈ Ω_1 × [0,2], and κ(x,t) = 3π for (x,t) ∈ Ω_2 × [0,2],
where Ω_1 = {x | −1 ≤ x < 0} and Ω_2 = {x | 0 < x ≤ 1}. To accurately evaluate our model, we consider an exact solution of the form
u(x,t) = sin(3πx) t for (x,t) ∈ Ω_1 × [0,2], and u(x,t) = t x for (x,t) ∈ Ω_2 × [0,2].
The corresponding source functions s(x,t), g(x,t) and h(x,t) can be calculated exactly using (<ref>).
First, let us introduce an auxiliary flux parameter ϕ(x,t) = κ(x,t) ∂u/∂x to obtain a system of first-order partial differential equations that reads
ℱ(x,t) := ∂u(x,t)/∂t − ∂ϕ(x,t)/∂x − s(x,t), ∀ Ω × [0,τ],
𝒬(x,t) := ϕ(x,t) − κ(x,t) ∂u(x,t)/∂x, ∀ Ω × [0,τ],
ℬ(x,t) := u(x,t) − g(x,t), ∀ ∂Ω × (0,τ],
ℐ(x,t) := u(x,0) − h(x), ∀ Ω, t = 0,
where ℱ is the residual form of our differential equation (used as our objective function), 𝒬 is the residual form of our flux operator (equality constraint), ℬ is the residual form of our boundary condition operator (equality constraint), and ℐ is the residual form of our initial condition operator (equality constraint).
We use a fully connected neural network architecture parameterized by θ, with two inputs for (x,t) and two outputs for u and ϕ, consisting of three hidden layers with 30 neurons per layer and tangent hyperbolic activation functions. We use the L-BFGS optimizer <cit.> available in the PyTorch package <cit.> with a strong Wolfe line search function, with its maximum iteration number set to five. For the purpose of this problem, we generate N_ℱ = N_𝒬 = 10^4 collocation points, N_ℬ = 2 × 5000 boundary points and N_ℐ = 5000 initial points for approximating the boundary and initial conditions only once before training.
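The architecture and optimizer settings described above correspond, roughly, to the following PyTorch setup. The layer sizes, activation and L-BFGS options follow the text; compute_augmented_lagrangian is an assumed helper standing in for the objective-plus-constraints loss (for example, assembled as in the sketches of the previous section).

```python
import torch
import torch.nn as nn

# Sketch of the network for the composite-medium problem: inputs (x, t),
# outputs (u, phi), three hidden layers of 30 tanh units.
model = nn.Sequential(
    nn.Linear(2, 30), nn.Tanh(),
    nn.Linear(30, 30), nn.Tanh(),
    nn.Linear(30, 30), nn.Tanh(),
    nn.Linear(30, 2),
)
optimizer = torch.optim.LBFGS(model.parameters(), max_iter=5,
                              line_search_fn="strong_wolfe")

def closure():
    optimizer.zero_grad()
    loss = compute_augmented_lagrangian(model)   # assumed helper (objective + constraints)
    loss.backward()
    return loss

# one primal step per epoch; the dual (multiplier/penalty) update would follow:
# optimizer.step(closure)
```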
First, we solve this problem using the formulation discussed in section <ref> using point-wise constraints. We present the results at t=1 obtained using the point-wise constraint formulation in Fig.<ref> and compare the impact of three different strategies to update the penalty parameters in ALM. Fig.<ref>(a) and (b) presents the temperature and heat flux distribution in the composite medium, respectively. We observe that adaptive penalty update strategy produces results that are in excellent agreement with the exact solution. However, results using monotonic and conditional penalty updates produce a solution that is markedly different than the exact solution.
With Fig.<ref> we show the importance of adaptively updating the penalty parameter when dealing with diverse constraint types (e.g. boundary condition, initial confition, and flux conservation) that exhibit different physical characteristics. However, we apply the constraints point-wise. As we mentioned previously, the computational cost of this particular implementation scales linearly with the number of points assigned to each constraint, especially when constraining the flux because it is applied to every collocation point in the domain. Furthermore, with the point-wise constraint formulation, we cannot train our model in mini-batch training fashion since each Lagrange multiplier is attached to every constraint data point.
In Fig.ย <ref>, we present the distribution of Lagrange multipliers resulting from the point-wise constraint formulation presented in section <ref>. Our goal is to find out if it is necessary to constrain the solution point-wise. We observe that the distribution of Lagrange multipliers assumes probabilistic distributions regardless of the penalty update strategy. This observation indicates that when there are a large number of point-wise constraints, we can constrain an expected value of the loss on each type of constraints using a single Lagrange multiplier as seen in Figs.ย <ref>(a)-(c). We should note that expected loss is also commonly used as an objective function in machine learning applications. Hence, we employ our proposed formulation to train our models using ALM with different penalty updating approaches to showcase the efficiency of our adaptive ALM and our proposed formulation.
Figure <ref> presents the results of our PECANN model with the adaptive ALM and with the expected loss formulation given in section <ref> applied to the heat transfer in a composite medium problem. Specifically, Fig.<ref>(a)-(b) presents the space and time solution of the temperature and the absolute value of the point-wise error in temperature produced by our model, respectively. Space and time solution of the heat flux and its absolute value error are shown in Fig.<ref>(c)-(d), respectively. Collectively, Fig. <ref> demonstrates our predictions closely match the exact solutions. Our PECANN model using the monotonic (MPU) and conditional (CPU) penalty update strategies do not converge, and therefore we do not report temperature and heat flux predictions using those two strategies. However, we provide the error norms resulting from all three penalty update strategies for comparison purposes in Fig.ย <ref> from which we observe that ALM with adaptive penalty update strategy outperforms the other two strategies. From Fig.ย <ref>, we also observe that adaptive penalty update (APU) strategy can provide acceptable error levels with even smaller number of epochs.
To further gain insight into how ALM with adaptive penalty updates (APU) works, we analyze the evolution of the penalty parameters during training. Fig.<ref>(a) shows the evolution of penalty parameters with each epoch for enforcing boundary conditions ฮผ_BC, initial conditions ฮผ_IC, and flux constraints ฮผ_Flux using the adaptive penalty update (APU). The penalty parameters for each constraint type grow at different rates over the course of training, justifying the need for adaptive and independent penalty parameters for each constraint type. Fig.<ref>(b) illustrates the evolution of the penalty parameter for enforcing boundary conditions, initial conditions, and flux constraints using conditional penalty update (CPU) and monotonically increasing penalty update (MPU) strategies. In both cases, the penalty parameter quickly reaches the maximum penalty value of ฮผ_max = 10^4, causing the problem to become ill-conditioned and leading to divergence during optimization. This observation again highlights the importance of using an adaptive penalty update strategy to avoid ill-conditioning and ensure convergence. Additionally, in Fig.<ref>(c), we present the evolution of the loss on our equality constraints during training, when using the adaptive penalty update strategy. We observe that all our constraints decay at different rates, which provides further justification for the varying rates at which the penalty parameters increase.
Our investigation and findings from the heat transfer in a composite medium problem provide a strong support for our proposed augmented Lagrangian method with an adaptive penalty update strategy (i.e. Algorithm <ref>) and for the computationally efficient formulation of the objective function using expectation of constraints (i.e. formulation given in section <ref>). In the rest of the examples, we will adopt this combination to further showcase its efficacy and versatility.
ยง.ยง Forward problem: wave equation
Baseline PINN model <cit.> with a fully connected neural network architecture is known to struggle for certain type of PDEs <cit.>. <cit.> and <cit.> single out the one-dimensional (1D) wave equation as a challenge case for future PINN models because the baseline PINN model faces severe challenges when applied to this PDE problem. Here, we apply our PECANN model with the proposed adaptive ALM to learn the solution of the same 1D wave equation as studied in <cit.> and compare the accuracy of our predictions and the size of our neural network model against the results and networks presented in those two studies. While comparing the accuracy levels is significant, we also believe it is imperative to take into perspective the size of the neural network architecture, and the number of the collocations points deployed to obtain the level of accuracy in predictions.
Consider the following 1D wave equation
∂^2 u/∂ t^2 - 4 ∂^2 u/∂ x^2 = 0, (x,t) ∈ 𝒰,
with the boundary condition
u(x,t) = 0, (x,t) ∈ ∂𝒰,
and the initial conditions
u(x,0) = sin(π x) + 1/2 sin(4 π x), ∂ u/∂ t(x,0) = 0, (x,0) ∈ Γ.
The exact solution to the problem is
u(x,t) = sin(π x) cos(2 π t) + 1/2 sin(4 π x) cos(8 π t), (x,t) ∈ 𝒰,
where 𝒰 = {(x,t) | x ∈ Ω, t ∈ [0,1] },
∂𝒰 = {(x,t) | x ∈ ∂Ω, t ∈ [0,1]} and Γ = {(x,t) | x ∈ Ω, t = 0}.
โฮฉ represents the boundary of the spatial domain ฮฉ = { x | x = (0,1) }.
<cit.> used a fully connected feed-forward neural network with five hidden layers (H_L = 5) and 500 neurons per hidden layer (N_H = 500) with N_𝒰 = 300 collocation points, N_∂𝒰 = 300 boundary points and N_Γ = 300 initial points that are generated at every epoch to solve the same 1D wave equation. In the present work, we use a single hidden layer (H_L = 1) with 50 neurons (N_H = 50) and a tangent hyperbolic activation function. We sample N_𝒰 = 300 collocation points, N_∂𝒰 = 300 boundary points and N_Γ = 300 initial points only once before training. Our objective function is the residual of the governing PDE given in (<ref>), and our constraints are the boundary condition (<ref>) and initial conditions (<ref>). We assign three Lagrange multipliers and three penalty parameters for this problem. We train our model for 10,000 epochs even though we obtain an acceptable level of accuracy within 2,000 epochs. We use the L-BFGS optimizer with its default parameters in the PyTorch package. Table <ref> presents a summary of the prediction errors, architecture size and collocation points for this problem in comparison to other works.
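A minimal PyTorch sketch of this setup is given below: the network width, the one-time sampling, and the autograd residuals follow the description above, while the ALM bookkeeping (multipliers, penalties, and the L-BFGS closure) is omitted for brevity.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1))

def grad(y, x):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

# points sampled only once before training, as described above
xt_f = torch.rand(300, 2, requires_grad=True)                        # interior (x, t)
xt_b = torch.cat([torch.randint(0, 2, (300, 1)).float(),
                  torch.rand(300, 1)], dim=1)                        # x in {0, 1}
xt_i = torch.cat([torch.rand(300, 1), torch.zeros(300, 1)], dim=1)   # t = 0
xt_i.requires_grad_(True)

def objective():                                   # PDE residual u_tt - 4 u_xx
    u = net(xt_f)
    du = grad(u, xt_f)
    u_xx = grad(du[:, :1], xt_f)[:, :1]
    u_tt = grad(du[:, 1:], xt_f)[:, 1:]
    return ((u_tt - 4.0 * u_xx) ** 2).mean()

def constraint_residuals():                        # boundary and initial conditions
    bc = net(xt_b)                                 # u = 0 on x = 0, 1
    u0 = net(xt_i)
    x0 = xt_i[:, :1]
    ic_u = u0 - (torch.sin(torch.pi * x0) + 0.5 * torch.sin(4 * torch.pi * x0))
    ic_ut = grad(u0, xt_i)[:, 1:]                  # u_t(x, 0) = 0
    return bc, ic_u, ic_ut
```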
We present the prediction of our model in Fig.ย <ref>.
We can observe that our model has successfully learned the underlying solution to a good level of accuracy. In Table <ref> we compare our predictions against other works that tackled the same problem. We report the mean and standard deviation of the relative error calculated over 10 different trials. The results show that our method produces error levels that are one order of magnitude lower than the results obtained with self-adaptive PINNs <cit.>, while using a much smaller neural network model.
ยง.ยง Forward problem: lid-driven cavity flow simulation up to a Reynolds number of 1000
In this section, we apply our adaptive augmented Lagrangian method to solve the steady-state flow in a two-dimensional lid-driven cavity benchmark problem <cit.> at Reynolds numbers of 100, 400, and 1000. Unlike other works where a stream function-vorticity formulation was adopted, we use the primitive-variables formulation of the Navier-Stokes equations, which potentially makes our approach extensible to three-dimensional fluid flow problems. We should also mention that the lid-driven cavity problem at a Reynolds number of 1000 is considered much more challenging than the same problem at lower Reynolds numbers, such as Re=100 or Re=400.
Steady-state Navier-Stokes equations in two dimensions and the continuity equation are written as follows:
u ∂ u/∂ x + v ∂ u/∂ y + ∂ p/∂ x - 1/Re(∂^2 u/∂ x^2 + ∂^2 u/∂ y^2) = 0, (x,y) ∈ Ω,
u ∂ v/∂ x + v ∂ v/∂ y + ∂ p/∂ y - 1/Re(∂^2 v/∂ x^2 + ∂^2 v/∂ y^2) = 0, (x,y) ∈ Ω,
∂ u/∂ x + ∂ v/∂ y = 0, (x,y) ∈ Ω,
where u = (u,v) is the velocity vector, Re is the Reynolds number and p is the pressure. We aim to learn the solution of the momentum equations constrained by the continuity equation on Ω = {(x,y) | 0 ≤ x ≤ 1, 0 ≤ y ≤ 1} with its boundary ∂Ω. No-slip boundary conditions are imposed on all walls of the square cavity. The top wall of the square cavity moves at a velocity of u(x,1) = 1 in the x direction. Because this is a closed system with no inlets or outlets at which a pressure level would be defined, we fix the pressure field at an arbitrary reference point as p(0.5,0) = 0. Fig.<ref> illustrates our simulation setup for the lid-driven cavity flow problem. We should note that our objective function in this problem is the loss on the momentum residual ℱ, and our constraints are the continuity equation (i.e., a divergence-free velocity field) 𝒢, the boundary conditions and the pressure fixed at a reference point.
To formulate our constrained optimization problem,
let us define ℱ = (ℱ_x, ℱ_y), which represents the steady-state, two-dimensional momentum equations, and 𝒢 as the divergence-free condition as follows:
ℱ_x(x,y) := u ∂ u/∂ x + v ∂ u/∂ y + ∂ p/∂ x - 1/Re(∂^2 u/∂ x^2 + ∂^2 u/∂ y^2), (x,y) ∈ Ω,
ℱ_y(x,y) := u ∂ v/∂ x + v ∂ v/∂ y + ∂ p/∂ y - 1/Re(∂^2 v/∂ x^2 + ∂^2 v/∂ y^2), (x,y) ∈ Ω,
𝒢(x,y) := ∂ u/∂ x + ∂ v/∂ y, (x,y) ∈ Ω.
We sample N_ℬ = 4 × 64 points for approximating the boundary conditions and N_𝒢 = N_ℱ = 10^4 points for approximating the loss on conservation of mass and momentum, respectively, only once before training. Additionally, we constrain a single point for the pressure as shown in Fig.<ref>. We use a fully connected neural network architecture consisting of four hidden layers with 30 neurons in each layer and the tangent hyperbolic activation function. We choose the L-BFGS optimizer with its default parameters and the strong Wolfe line search function that are available in the PyTorch package. We train our network for 2000 epochs for each Reynolds number separately.
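The sketch below illustrates, under assumed variable names, how the momentum residuals ℱ_x, ℱ_y and the divergence constraint 𝒢 can be evaluated with automatic differentiation from a network that maps (x,y) to (u,v,p); the ALM assembly of these terms into objective and constraints is omitted.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 30), torch.nn.Tanh(),
                          torch.nn.Linear(30, 30), torch.nn.Tanh(),
                          torch.nn.Linear(30, 30), torch.nn.Tanh(),
                          torch.nn.Linear(30, 30), torch.nn.Tanh(),
                          torch.nn.Linear(30, 3))        # outputs (u, v, p)

def grad(y, x):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

def residuals(xy, Re):
    out = net(xy)
    u, v, p = out[:, :1], out[:, 1:2], out[:, 2:]
    du, dv, dp = grad(u, xy), grad(v, xy), grad(p, xy)
    u_x, u_y = du[:, :1], du[:, 1:]
    v_x, v_y = dv[:, :1], dv[:, 1:]
    u_xx, u_yy = grad(u_x, xy)[:, :1], grad(u_y, xy)[:, 1:]
    v_xx, v_yy = grad(v_x, xy)[:, :1], grad(v_y, xy)[:, 1:]
    F_x = u * u_x + v * u_y + dp[:, :1] - (u_xx + u_yy) / Re   # x-momentum residual
    F_y = u * v_x + v * v_y + dp[:, 1:] - (v_xx + v_yy) / Re   # y-momentum residual
    G = u_x + v_y                                              # divergence constraint
    return F_x, F_y, G

xy = torch.rand(10000, 2, requires_grad=True)   # interior collocation points
F_x, F_y, G = residuals(xy, Re=100.0)
```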
In Fig.ย <ref>, we investigate the effect of different penalty update strategies on the predictions. We observe that our model trained with the adaptive penalty update strategy in ALM learns the underlying solution for Re=100 well and matches the benchmark data closely. Figs.ย <ref>(a)-(b) also show that ALM with monotonic (MPU) or conditional (CPU) penalty update strategies are incapable of producing the expected solution. Next, we consider the case with Reynolds number 400 and train our model for 2000 epochs using the adaptive penalty update strategy only since the other two strategies did not produce acceptable results for Re=100.
We observe from Fig. <ref> and Fig. <ref> that our model with the adaptive update strategy learns the underlying pattern for Re=400 and Re=1000, respectively. However, the predicted velocity fields do not closely match the benchmark numerical solution, despite our model's superior performance for the Re=100 case.
It is worth noting that our objective in these two simulation cases is to highlight the fact that despite implementing and constraining appropriate boundary conditions, divergence-free velocity field, and anchoring the pressure at a single point, the task of learning becomes increasingly more challenging as the Reynolds number increases.
Learning the solution of the lid-driven cavity problem at high Reynolds numbers may be achievable with a larger neural network or a higher number of collocation points. However, our aim here is not to demonstrate whether such learning is possible or not. Instead, we aim to illustrate the challenges involved for high Reynolds number flow problems, even when formulated properly as a constrained optimization problem. We hypothesize that for high Reynolds number flow problems, the underlying optimization problems become ill-conditioned and require additional regularization. Therefore, in order to further regularize the optimization process, we introduce a small set of exact, randomly sampled velocity data within the flow domain as shown in Fig. <ref>(a)-(b). We use these clean data points as equality constraints on the prediction of our model, similar to the way we handle boundary conditions.
We observe from Figs. <ref>(c)-(e) that our PECANN model with an adaptive ALM successfully predicts the fluid flow in the cavity at a Reynolds number of 400 with a high degree of accuracy by leveraging a small sample of high fidelity data. Specifically, Figs. <ref>(c)-(e) show the predicted solution, the x-component of the velocity along a line at x=0.5, and the y-component of the velocity along a line at y=0.5 for Re=400, while Figs. <ref>(f)-(h) show the corresponding results for Re=1000. Compared to the results shown in Fig. <ref> and Fig. <ref>, our new results suggest that a small set of randomly sampled labeled data is beneficial for regularizing the optimization process at high Reynolds numbers. We aim to explore the feasibility of learning high Reynolds number flows in two and three dimensions without any labeled data using larger and different neural network architectures in the future.
ยง.ยง Inverse problem: estimating transient thermal boundary condition from noisy data
Here we aim to learn the transient thermal boundary condition and the temperature field based on measured data. Let us consider a parabolic partial differential equation of the form
∂ T/∂ t = κ ∂^2 T/∂ x^2, (x,t) ∈ (0,1) × (0,1],
where T is the temperature and κ is the thermal conductivity. The forward solution of Eq. <ref> requires the following boundary information
T(0,t)  ∀ t ∈ (0,1],
T(1,t)  ∀ t ∈ (0,1],
T(x,0)  ∀ x ∈ (0,1).
However, in this problem, we do not know the boundary condition at T(0,t). Instead we have noisy data at the interior part of our domain. We constrain the prediction of our model on the governing physics and the noiseless boundary condition that we treat as high-fidelity data. Fig.ย <ref>(a)-(c) present the collocation points distributed over the space-time domain, and synthetically generated noisy data from the exact solution at two locations.
We use a fully connected artificial neural network with three hidden layers and 30 neurons per layer. We adopt the tangent hyperbolic activation function. We randomly generate N_ℱ = 512 collocation points, N_ℐ = 64 training points for approximating the initial condition and N_ℬ = 64 training boundary data points only once before training. For the noisy measurement data we generate N_ℳ = 2 × 128 points with 10 percent noise (Gaussian noise with a standard deviation that is 10% of the standard deviation of the synthetically generated clean data).
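As an illustration of the noise model, the snippet below generates synthetic sensor data with Gaussian noise whose standard deviation is 10% of that of the clean signal; the placeholder exact solution and the two sensor locations are assumptions made only so that the example runs on its own.

```python
import numpy as np

rng = np.random.default_rng(0)
# placeholder exact solution of T_t = T_xx (not the actual solution of this example)
T_exact = lambda x, t: np.exp(-np.pi ** 2 * t) * np.sin(np.pi * x)

t = np.linspace(0.0, 1.0, 128)
data = []
for x_s in (0.25, 0.75):                         # assumed sensor locations
    clean = T_exact(x_s, t)
    noisy = clean + rng.normal(0.0, 0.1 * clean.std(), size=clean.shape)
    data.append(np.column_stack([np.full_like(t, x_s), t, noisy]))
measurements = np.vstack(data)                   # N_M = 2 x 128 rows of (x, t, T)
```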
We use the L-BFGS optimizer with its default parameters and the strong Wolfe line search function. We train our model for 5000 epochs. The result of our numerical experiment is presented in Fig. <ref>.
Figure ย <ref>(a) presents the exact solution. From Fig.ย <ref>(b) we observe that our predicted solution is in great agreement with the exact solution. We provide the absolute point-wise error distribution in Fig.<ref>(c). We have shown that by having two sensors at two different locations inside the domain that are measuring noisy data, we can sufficiently learn not only the boundary condition, but also the entire temperature distribution across the space-time domain.
ยง.ยง Inverse problem: estimation of transient source term in diffusion problems
In this example, we aim to learn the temperature distribution as well as the heat source using noisy measurements and noiseless boundary or initial conditions, which is a more challenging inverse problem than the previous example. <cit.> proposed a collocation algorithm, based on the piece-wise linear approximation of the unknown heat source. Several works have also been proposed to learn the unknown heat source with a single dependent variable <cit.>. In our approach, we use a neural network model to represent the temperature and the heat source with time and space dependency. We then learn the heat distribution and the heat source simultaneously.
Let us consider the following initial-boundary value problem of the heat conduction equation as follows:
∂ u/∂ t = κ ∂^2 u/∂ x^2 + s, (x,t) ∈ (0,1) × (0,1],
where u is the temperature, κ is the thermal conductivity and s is the heat source. The objective is to learn the solution u and the heat source s from a noiseless boundary condition and noisy measurements inside our domain. A schematic representation of the data is given in Fig. <ref>.
Unlike typical inverse source problems that rely on a single input variable, this paper addresses a more challenging scenario where the problem depends not only on the spatial variable x but also on the temporal variable t. We demonstrate that we obtain an accurate prediction without resorting to any stabilizing scheme or assuming any fundamental solution for the temperature or the heat source distribution. It is worth mentioning that our constrained optimization problem may also be called a PDE-constrained optimization problem. For this problem, we use a fully connected artificial neural network with three hidden layers and 30 neurons per layer with the tangent hyperbolic activation function. We randomly generate N_ℱ = 4096 collocation points, N_ℐ = 128 training points for approximating the initial condition and N_ℬ = 2 × 128 training boundary data points only once before training. For the noisy data we generate N_ℳ = 512 points with 10 percent noise (Gaussian noise with a standard deviation that is 10% of the standard deviation of the synthetically generated clean data). We use the L-BFGS optimizer with its default parameters and the strong Wolfe line search function. We train our model for 5000 epochs.
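One possible realization of this idea is sketched below, in which a single network with two outputs represents the temperature u and the source s, so that the PDE residual couples both outputs; the architecture mirrors the description above, but the variable names and the value of κ are illustrative assumptions.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 30), torch.nn.Tanh(),
                          torch.nn.Linear(30, 30), torch.nn.Tanh(),
                          torch.nn.Linear(30, 30), torch.nn.Tanh(),
                          torch.nn.Linear(30, 2))          # outputs (u, s)

def grad(y, x):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

def pde_residual(xt, kappa=1.0):                # kappa assumed here for illustration
    out = net(xt)
    u, s = out[:, :1], out[:, 1:]
    du = grad(u, xt)
    u_x, u_t = du[:, :1], du[:, 1:]
    u_xx = grad(u_x, xt)[:, :1]
    return u_t - kappa * u_xx - s               # couples temperature and source outputs

xt = torch.rand(4096, 2, requires_grad=True)    # (x, t) collocation points
residual = pde_residual(xt)
```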
The result from this numerical experiment is presented in Fig.<ref>. The exact temperature distribution is plotted in Fig. <ref>(a). From Figs. <ref>(b)-(c) we observe that our predicted solution agrees with the exact solution to a great degree. The exact distribution of the heat source is plotted in Fig.<ref>(d). Similar to the temperature distribution, the predicted heat source distribution agrees closely with the exact solution as shown in Figs.<ref>(e)-(f).
ยง CONCLUSION
Constrained optimization is central to the formulation of the PECANN framework <cit.> for learning the solution of forward and inverse problems and sets it apart from the popular PINN framework, which adopts an unconstrained optimization in which different loss terms are combined into a composite objective function through a linear scalarization. In the PECANN framework, the augmented Lagrangian method (ALM) enables us to constrain the solution of a PDE with its boundary and initial conditions and with any high-fidelity data that may be available in a principled fashion. However, for challenging forward and inverse PDE problems, the set of constraints can be large and diverse in their characteristics. In the present work, we have demonstrated that conventional approaches to update penalty and Lagrange multipliers within the ALM can fail for PDE problems with complex constraints. Consequently, we introduced an adaptive ALM in which a unique penalty parameter is assigned to each constraint and updated according to a proposed rule. This new method within the PECANN framework enables us to
constrain the solution of PDEs flexibly toward convergence without the need to manually tune the hyperparameters in the objective function or resort to alternative strategies to balance the loss terms, which may not necessarily satisfy the optimality conditions.
When the problem size gets larger, constraining each collocation point associated with a constraint becomes computationally taxing. We have empirically demonstrated that the Lagrange multipliers associated with a particular type of constraint exhibit a distribution with a clear mean value. Consequently, we have revised our constrained optimization formulation based on the expectation of each loss and constraint term. Based on our experiments, the resulting formulation does not affect the efficacy of our PECANN framework, but makes it computationally more efficient and enables mini-batch training.
We applied our proposed model to several forward and PDE-constrained inverse problems to demonstrate its efficacy and versatility. A noteworthy example is our simulation of the lid-driven cavity benchmark problem at a Reynolds number of 1000 using the primitive-variables formulation of the Navier-Stokes equations. The Re=1000 case is known to be challenging to simulate because of the dominant effect of advection in the flow physics and the development of strong circulations in the bottom corners of the cavity. Unlike the streamfunction-vorticity formulation of the two-dimensional Navier-Stokes equations, the primitive-variables formulation is more general, but at the same time more challenging to solve computationally because it retains the pressure as one of the variables in addition to the velocity field. In our flow simulations, we found that the key to obtaining results that are in close agreement with benchmark data <cit.> was to constrain the momentum equations with the divergence-free condition, boundary conditions, and a small set of randomly sampled high-fidelity data, while handling each type of constraint with its own unique, adaptive penalty parameter. Another PDE problem that has been viewed as a challenge case for PINNs is the one-dimensional wave equation. Again, our proposed model performed better than state-of-the-art PINN models for the wave equation problem as well. The codes used to produce the results in this paper are publicly available at <https://github.com/HiPerSimLab/PECANN>. Two additional examples are provided in the Appendix.
ยง ACKNOWLEDGMENTS
This material is based upon work supported by the National Science Foundation under Grant No. 1953204 and in part by the University of Pittsburgh Center for Research Computing through the resources provided.
ยง ADDITIONAL EXAMPLES
ยง.ยง Inviscid scalar transport in one dimension
In this section, we study the transport of a physical quantity dissolved or suspended in an inviscid fluid moving with a constant velocity. Numerical solutions of inviscid flows often exhibit a viscous or diffusive behavior owing to numerical dissipation. Let us consider a convection equation of the form
∂ ξ/∂ t + u ∂ ξ/∂ x = 0,  ∀ (x,t) ∈ Ω × [0,1],
satisfying the following boundary condition
ξ(0,t) = ξ(2π, t)  ∀ t ∈ [0,1],
and initial condition
ξ(x,0) = h(x),  ∀ x ∈ Ω,
where ξ is any physical quantity convected with the velocity u, Ω = {x | 0 < x < 2π} and ∂Ω is its boundary. Eq.(<ref>) is inviscid, i.e., it lacks any viscous or diffusive term.
For this problem, we consider u=40 and h(x) = sin(x). The analytical solution for the above problem is given as follows <cit.>
ξ(x,t) = F^-1(F(h(x)) e^(-i u k t)),
where F is the Fourier transform, k is the frequency in the Fourier domain and i = √(-1). Since the PDE is already first-order, there is no need to introduce any auxiliary parameters. Following is the residual form of the PDE used to formulate our objective function,
ℱ(x,t) = ∂ ξ(x,t)/∂ t + u ∂ ξ(x,t)/∂ x,
ℬ(t) = ξ(0,t) - ξ(2π,t),
ℐ(x) = ξ(x,0) - sin(x),
where ℱ is the residual form of our differential equation, and ℬ and ℐ are our boundary condition and initial condition constraints.
For this problem, we use a fully connected neural network architecture consisting of four hidden layers with 50 neurons per layer and tangent hyperbolic activation functions. We generate N_ℱ = 512 collocation points from the interior part of the domain, N_ℬ = 512 from each boundary, and N_ℐ = 512 for approximating the initial conditions at each epoch. We use the L-BFGS optimizer with its default parameters and the strong Wolfe line search function that is available in the PyTorch package. We train our network for 5000 epochs. We present the prediction of our neural network in Fig.<ref>. We observe that our neural network model has successfully learned the underlying solution as shown in Figs. <ref>(b)-(c).
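For completeness, the analytical solution (<ref>) can be evaluated with a discrete Fourier transform on a periodic grid; a short sketch is given below for u = 40 and h(x) = sin(x), with an arbitrary grid size.

```python
import numpy as np

N, u = 512, 40.0
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)        # periodic grid on [0, 2*pi)
k = np.fft.fftfreq(N, d=x[1] - x[0]) * 2.0 * np.pi           # angular wavenumbers

def xi_exact(t):
    # xi_hat(k, t) = xi_hat(k, 0) * exp(-i*u*k*t), then transform back
    return np.real(np.fft.ifft(np.fft.fft(np.sin(x)) * np.exp(-1j * u * k * t)))

# e.g. xi_exact(1.0) is the convected profile at t = 1, i.e. sin(x - u*t)
```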
We also present a summary of the error norms from our approach and the state-of-the-art results given in <cit.> in Table <ref>. We observe that our method achieves a relative error of ℰ_r = 8.161 × 10^-4, which is two orders of magnitude better than the curriculum learning method presented in <cit.>, and three orders of magnitude better than the baseline PINN model performance reported in the same study.
ยง.ยง Mixing of a hot front with a cold front
In this section, we explore the formation of cold and warm fronts in a two-dimensional environment, which can be described by the following convection equation:
ℱ := ∂ ξ/∂ t + u ∂ ξ/∂ x + v ∂ ξ/∂ y = 0  in Ω × (0,T],
where Ω = {(x,y) | -4 ≤ x, y ≤ 4} and T = 4. The boundary conditions considered throughout this work are zero flux of the variable ξ along each boundary, as shown in Fig. <ref>.
The problem has the following analytical solution:
ξ(x,y,t) = -tanh[ y/2 cos(ω t) - x/2 sin(ω t) ],
where ω is the angular velocity of the rotating flow, defined as
ω = ν_t/(r ν_t,max).
Here,
ν_t = sech^2(r) tanh(r)
is the tangential velocity around the center, with a maximum value ν_t,max = 0.385. The velocity components in the x and y directions can be obtained, respectively, as follows:
u(x,y) = -ω y, v(x,y) = ω x.
We also present the velocity field in Fig.<ref>(a). The initial scalar field varies gradually from positive values at the bottom to negative values at the top, as can be seen in Fig.<ref>(b), where the maximum and minimum values of the field are ξ_max = 0.964 and ξ_min = -0.964, respectively.
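A small sketch of how the velocity field and the exact scalar field can be evaluated on a grid is given below; it assumes the tangential-velocity relation quoted above, and the removable singularity of ω at r = 0 is simply set to zero for convenience.

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 161)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X ** 2 + Y ** 2)
nu_t = np.tanh(r) / np.cosh(r) ** 2                        # tangential velocity profile
omega = np.divide(nu_t, r * nu_t.max(),
                  out=np.zeros_like(r), where=r > 0)       # angular velocity (0 at r = 0)
u, v = -omega * Y, omega * X                               # velocity components

def xi_exact(t):
    return -np.tanh(0.5 * Y * np.cos(omega * t) - 0.5 * X * np.sin(omega * t))

# xi_exact(0.0) ranges between -0.964 and 0.964, matching the quoted extrema
```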
We use a deep fully connected neural network with 4 hidden layers, each with 30 neurons, that we train for 5000 epochs in total. We use a Sobol sequence to generate N_ℱ = 10,000 residual points from the interior part of the domain, N_ℬ = 512 points from each of the boundaries (faces) and N_ℐ = 512 points for enforcing the initial condition only once before training. Our optimizer is L-BFGS with its default parameters and the strong Wolfe line search function that is available in the PyTorch package.
Figure <ref> presents the temperature contours obtained from the exact solution along with our predictions. We observe that our PECANN model is in good agreement with the exact solution. We also provide a summary of RMS errors obtained from our neural network model along with conventional numerical methods used in other works in Table <ref>.
Table <ref> compares the RMS error level of our predictions against the RMS error levels produced by five different finite-volume based advection schemes as presented in the work of <cit.>. Among the finite-volume based methods, the most accurate numerical results were obtained with the QUICK scheme on all meshes consistently. We should note that our results for the Cartesian mesh sizes shown in Table <ref> are obtained by evaluating the trained neural network on those meshes after training. For this reason, our error levels are nearly the same across all mesh sizes.
|
http://arxiv.org/abs/2306.03304v2
|
20230605231353
|
Plasma flows during the ablation stage of an over-massed pulsed-power-driven exploding planar wire array
|
[
"R. Datta",
"J. Angel",
"J. B. Greenly",
"S. N. Bland",
"J. P. Chittenden",
"E. S. Lavine",
"W. M. Potter",
"D. Robinson",
"T. W. O. Varnish",
"E. Wong",
"D. A. Hammer",
"B. R. Kusse",
"J. D. Hare"
] |
physics.plasm-ph
|
[
"physics.plasm-ph"
] |
Plasma Science & Fusion Center, Massachusetts Institute of Technology, MA 02139, Cambridge, USA
Laboratory of Plasma Studies, Cornell University, Ithaca, NY, USA
Laboratory of Plasma Studies, Cornell University, Ithaca, NY, USA
Blackett Laboratory, Imperial College London, London SW7 2BW, United Kingdom
Blackett Laboratory, Imperial College London, London SW7 2BW, United Kingdom
Laboratory of Plasma Studies, Cornell University, Ithaca, NY, USA
Laboratory of Plasma Studies, Cornell University, Ithaca, NY, USA
Plasma Science & Fusion Center, Massachusetts Institute of Technology, MA 02139, Cambridge, USA
Plasma Science & Fusion Center, Massachusetts Institute of Technology, MA 02139, Cambridge, USA
Plasma Science & Fusion Center, Massachusetts Institute of Technology, MA 02139, Cambridge, USA
Laboratory of Plasma Studies, Cornell University, Ithaca, NY, USA
Laboratory of Plasma Studies, Cornell University, Ithaca, NY, USA
[email protected].
Plasma Science & Fusion Center, Massachusetts Institute of Technology, MA 02139, Cambridge, USA
We characterize the plasma flows generated during the ablation stage of an over-massed exploding planar wire array, fielded on the COBRA pulsed-power facility (1 MA peak current, 250 ns rise time). The planar wire array is designed to provide a driving magnetic field (80-100 T) and current per wire distribution (about 60 kA), similar to that in a 10 MA cylindrical exploding wire array fielded on the Z machine. Over-massing the arrays enables continuous plasma ablation over the duration of the experiment. The requirement to over-mass on the Z machine necessitates wires with diameters of 75-100, which are thicker than wires usually fielded on wire array experiments. To test ablation with thicker wires, we perform a parametric study by varying the initial wire diameter between 33-100. The largest wire diameter (100) array exhibits early closure of the AK gap, while the gap remains open during the duration of the experiment for wire diameters between 33-75. Laser plasma interferometry and time-gated XUV imaging are used to probe the plasma flows ablating from the wires. The plasma flows from the wires converge to generate a pinch, which appears as a fast-moving (V ≈ 100) column of increased plasma density (n̄_e ≈ 2e18) and strong XUV emission. Finally, we compare the results with three-dimensional resistive-magnetohydrodynamic (MHD) simulations performed using the code GORGON, the results of which reproduce the dynamics of the experiment reasonably well.
Plasma flows during the ablation stage of an over-massed pulsed-power-driven exploding planar wire array
J.D. Hare*
July 31, 2023
========================================================================================================
ยง INTRODUCTION
Inverse (or โexplodingโ) cylindrical wire arrays are a commonly-used pulsed-power-driven source of magnetized plasma for laboratory astrophysics applications. These arrays consist of a cylindrical cage of thin conducting wires surrounding a central cathode. This magnetic field configuration drives radially-diverging outflows into a vacuum region, providing good diagnostic access.<cit.> These arrays have previously been fielded on 1-MA university-scale facilities to study a variety of astrophysical phenomena, including magnetized plasma shocks,<cit.> laboratory magnetospheres,<cit.> and magnetic reconnection.<cit.> For such applications, the wire arrays are typically over-massed, so that they provide continuous sustained plasma flows over the duration of the experiment.<cit.>
On larger pulsed-power machines, such as the Z machine (30 MA peak current, Sandia National Labs), <cit.> over-massed exploding wire arrays require a larger initial mass due to the higher driving current.<cit.> For the same wire material, this necessitates more wires and/or the use of larger diameter wires. Although arrays with thin (5-40 diameter) wires have been characterized extensively in pulsed-power-driven experiments,<cit.> there has been little systematic effort to study ablation from thick (>50 diameter) wires, especially with Z-relevant >100 T driving magnetic fields.<cit.>
In cylindrical wire arrays, the maximum driving magnetic pressure is limited by the size of the central cathode and the AK gap (the gap between the anode/wires and cathode). The driving magnetic field in a cylindrical array can be determined from Ampere's law: B(t) = μ_0 I(t) / (2 π R), which shows that it varies inversely with the radius R of the array. <cit.>
For a R = 10 array, the peak driving field on a typical 1-MA university-scale machine is B = 20 T. The finite size of the central cathode makes it difficult to achieve โผ 100 T driving magnetic fields using cylindrical arrays on 1-MA university-scale facilities. In order to overcome this limitation, we explore the use of planar wire arrays to test ablation from thick wires in Z-relevant driving magnetic fields.
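The quoted estimate follows directly from Ampere's law; as a quick check, assuming the array radius is in millimeters (R = 10 mm) and I = 1 MA:

```python
from math import pi

mu_0 = 4.0e-7 * pi          # vacuum permeability [H/m]
I, R = 1.0e6, 10.0e-3       # peak current [A], array radius [m] (assumed units)
B = mu_0 * I / (2.0 * pi * R)
print(B)                    # -> 20.0 T, as stated in the text
```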
These planar wire arrays consist of a linear arrangement of wires separated by a small AK gap from a planar return electrode. This โexplodingโ planar geometry, which has previously been fielded on 1-MA pulsed-power devices,<cit.> allows us to achieve higher driving magnetic fields than in cylindrical arrays.
We also investigate the use of planar wire arrays as a platform for laboratory astrophysics experiments.
Since exploding cylindrical arrays generate azimuthally symmetric flows, the majority of the plasma (and therefore the stored energy) is lost in directions that are not of interest.<cit.> Moreover, due to radially-diverging flows, the density and advected magnetic field decrease rapidly with distance from the wires.<cit.>
In contrast, planar wire arrays could provide directed flows of denser plasma with higher advected magnetic fields, which can be desirable for many laboratory astrophysics applications. In magnetic reconnection experiments, for instance, this would increase dissipation in the current sheet, which is necessary for studying radiative-cooling effects.<cit.>
Planar wire arrays were primarily developed as an efficient X-ray radiation source for indirect-drive inertial confinement fusion (ICF) experiments. <cit.> In contrast to the โexplodingโ geometry used in this paper, the wire arrays used for X-ray generation typically consist of a linear row of wires between the cathode and anode of the pulsed-power device, without the planar return electrode placed adjacent to the wires. Furthermore, these arrays use thin 5-20 diameter wires, which implode during the course of the experiment.<cit.> The arrays are designed such that the implosion time matches the time of peak current, in order to maximize X-ray emission (hence, they are also called matched arrays).<cit.> The implosion stage typically proceeds in a cascade-like fashion, where the imploding wires, starting from the outermost wires, accelerate toward the geometric center of the array, to form a strongly-radiating inhomogeneous plasma column.<cit.> These array configurations have been reported to exhibit peak X-ray power and yield higher than imploding cylindrical arrays with similar number of wires.<cit.> were the first to field the planar wire array in an exploding geometry. This geometry, which consisted of a matched planar array with thin 7.5 tungsten wires, exhibited a 5-6ร higher ablation rate compared to cylindrical wire arrays, consistent with the increased driving magnetic pressure inside the AK gap.<cit.> Furthermore, the ablating plasma converged to form a magnetic precursor column offset from the plane of the wires, before exhibiting the cascade-like implosion described above.<cit.>
In contrast to the previous planar wire array experiments described above, which use matched arrays with thin wires, we use over-massed arrays with thick 33-100 aluminum wires. This suppresses the implosion stage, and generates continuous plasma ablation over the course of the experiment. In wire arrays, the initial flow of current through the wires forms dense cold wire cores surrounded by low-density coronal plasma. <cit.> Current density is concentrated in a thin skin region that surrounds the wire cores, and includes the coronal plasma. The coronal plasma is redirected by the j ร B force of the driving magnetic field in the AK gap between the wires and the cathode. In a planar wire array, this generates plasma flows directed away from the AK gap.<cit.> The ablating plasma advects some magnetic field from the AK gap as it flows outwards, creating outflows of magnetized plasma. In matched arrays, when the stationary wire cores begin to run out of mass (typically โผ 50-80% of the initial mass), periodic breaks appear in the wires, driven by the growth of a modified m=0-like axial instability. <cit.> This marks the end of the ablation phase, and the beginning of the implosion phase.<cit.>
Large wire cores can be undesirable for the ablation process. A large wire core diameter relative to the inter-wire separation inhibits the ablation of mass and the advection of magnetic field with the ablating plasma. <cit.> A large core size may also increase the likelihood of AK gap closure in pulsed-power-driven systems. In wire arrays, closure of the AK gap is undesirable, as it short-circuits the current path, leading to decreased current flow through the wires and reduced/terminated ablation. Previous experiments aimed at characterizing wire core size in imploding wire arrays show that core diameter varies with wire material and initial wire diameter, but is largely independent of the current per wire, and the inter-wire separation. <cit.>
In this paper, we explore the use of an over-massed exploding planar wire array as a platform for laboratory astrophysics experiments, and as a scaled experiment to investigate the ablation of thick wires in cylindrical wire arrays driven by Z-relevant driving magnetic fields. The array is driven by the COBRA pulsed-power machine (1 MA peak current, 250 ns rise time),<cit.> and is designed to exhibit a magnetic driving pressure, current per wire, and inter-wire separation, comparable to that of a 40 diameter exploding wire array driven by a 10 MA current pulse from the Z machine.<cit.> These experiments, therefore, allow us to investigate wire ablation on smaller 1 MA facilities, in loads designed for use on โผ 10 MA machines. We note that also aimed to match the driving magnetic field of a 20 MA, 100 ns rise time current pulse on the Z Machine, to understand imploding cylindrical wire array ablation at higher current per wire and driving magnetic fields. In this paper, we target the driving conditions generated when Z operates in a synchronous long-pulse configuration, with 20 MA peak current (split between two arrays) and a 300 ns rise time.<cit.> As such, we use the long-pulse mode on COBRA, as described in Sec. <ref>.
The requirement to over-mass on the Z machine necessitates wires with diameters of 75-100, which are thicker than wires usually fielded on wire-array z-pinch experiments. To investigate ablation with thicker wires, we vary the initial wire diameter between 33-100 over multiple shots. The load hardware, as well as the magnetic field and current distributions in the load, are described in Sec. <ref>. We characterize the plasma ablation and the reduction in the AK gap for the different wire sizes using laser shadowgraphy, Mach-Zehnder imaging interferometry, and XUV pinhole imaging (Sec. <ref>). These experimental results are provided and discussed in Sec. <ref> and Sec. <ref>. Finally, in Sec. <ref>, we compare the experimental results with three-dimensional resistive magnetohydrodynamic (MHD) simulations performed using GORGON.
ยง EXPERIMENTAL AND DIAGNOSTIC SETUP
ยง.ยง Load Hardware
<ref> shows the load hardware configuration for this experiment. The load consists of a linear array of 15 equally-spaced aluminum wires. The wire-to-wire separation is 0.83, and the array height is 12. The wires are separated from a 10 wide stainless-steel cathode by a 2 wide AK gap, and are held in position by clamps on the anode plate and on the top of the cathode post. We perform a parametric study by varying the wire diameter between 33โค d_wireโค100 for different experimental shots. The COBRA pulsed-power machine (Cornell University),<cit.> when operated in long pulse mode, drives a 1 MA peak current pulse through the load.<cit.> A calibrated Rogowski coil placed around the central cathode monitors the current delivered to the load. <ref> shows the variation of the current delivered with time, as measured by the Rogowski coil. We show the current pulse averaged over 6 successive shots. The current pulse has a double-peaked structure, as it is formed by triggering two Marx generators with a delay between them. The first peak has a magnitude of about 0.75 MA, and appears roughly 125 ns after current start, while the second peak has a magnitude of approximately 1 MA, and appears 250 ns after current start. The shot-to-shot deviation in the current pulse for this experimental series is < 10%.
To gain insight into the current distribution and driving magnetic field for the planar wire array, we perform magnetostatic inductance and Biot-Savart calculations of the load hardware.<cit.> The magnetostatic magnetic field distribution in the planar wire array is shown in <ref>a. The magnetic field inside the AK gap is nearly uniform with y-directed field lines, which curve around the outermost wires to form closed loops outside the array. The mean driving magnetic field (at peak current) inside the AK gap is about 80-100 T. <ref>c shows lineouts of the magnetic field strength along the y-direction inside the AK gap. At the center of the gap (x = -1), the driving magnetic field is uniform in the middle of the array (|y|โค 4 mm) with a strength of about 81 T, but drops sharply to about 50 T near the position of the outermost wires (y = ยฑ6). This is because in contrast to previous experiments, <cit.> we use a cathode whose width is smaller than the linear extent of the wires, which decreases the magnetic field strength around the outermost more inductively-favorable wires. Closer to the position of the wires, the magnetic field is dominated by the local magnetic field around each wire, resulting in a periodic variation in the field strength, as seen in <ref>c. Finally, unlike cylindrical exploding wire arrays, where field lines form closed loops inside the AK gap, and the field decays to zero outside the wires,<cit.> here the magnetic field lines must form closed loops outside the AK gap in the planar wire array. This means that there is a non-zero vacuum magnetic field in the flow region to the right of the wires, which is expected to be about 10% of the driving magnetic field from the magnetostatic calculations.
The simulated current distribution in the wires at peak current (1 MA) is shown in <ref>b. The current in the wires is symmetric about the y = 0 mm plane, and increases slightly with distance from the centerline for the inner wires. The current per wire is about 60-65 kA for the inner wires, and increases sharply to approximately 90 kA for the outermost wires. The higher current in the inductively-favorable outermost wires has also been reported previously in inductance and wire dynamics model computations of the planar wire array.<cit.> considered both the resistive and inductive division of current between the wires, and found the experimental observations to be more consistent with the inductive current division, driving much higher current to the outermost wires.<cit.> Due to the higher current, the rate of mass ablation from the outermost wires is expected to be higher. From a rocket model calculation,<cit.> assuming 50 diameter wires, we expect the outermost wires to ablate 50% of their initial mass around 200 after current start.
When current flows through the wires, the wires heat up resistively to generate a low-density coronal plasma surrounding the dense wire cores. The ablated plasma is accelerated by the j ร B force; this results in an outward flow of plasma into the region to the right of the wires. <ref>a also shows the direction and relative magnitude of the j ร B force acting on the wire locations. The j ร B force at the wires points in the +x-direction for the inner wires, and its magnitude remains roughly consistent for the inner wires. The outer wires experience a j ร B force directed towards the center of the array. This is due to the bending of the field lines around the outer wires, as observed in <ref>a. In designing the wire array, we explored different cathode sizes and AK gap widths in the magnetostatic simulations, however, the magnetic field always curves around the outer wires, similar to that in previous experiments,<cit.> leading to an inward-directed a j ร B force. A shorter cathode relative to the linear extent of the wires, however, allows us to reduce the magnetic field strength around the higher current-carrying outermost wires, and thus, make the magnitude of the j ร B force relatively more uniform.
ยง.ยง Diagnostic Setup
We use laser shadowgraphy to visualize the plasma flow from the planar wire array. The shadowgraphy system is set up to provide a side-on view (xz plane) of the experimental setup. This view is shown in <ref>a, which is a pre-shot image of the load. As the laser beam propagates through the plasma, electron density gradients deflect the light away from regions of higher density (lower refractive index) towards regions of lower density (higher refractive index). The intensity measured by the detector is thus related to gradients of electron density.<cit.>
In addition to shadowgraphy, we use a Mach-Zehnder imaging interferometry system to measure the spatially-resolved line-integrated electron density of the plasma. Our interferometry system is set up to provide both an end-on (xy plane) and a side-on view (xz plane) of the experimental setup (see <ref>). When the probe beam propagates through the plasma, the resulting phase accumulated by the beam distorts the fringe pattern, and introduces a spatially-varying fringe shift, <cit.> which we use to reconstruct the phase difference between the probe and reference beams, and to determine the spatially-resolved line-integrated electron density. <cit.> The field-of-view of our interferometer includes a volume devoid of plasma, where the fringes remain undistorted. This region of zero fringe shift is chosen as the region of zero density. Both the shadowgraphy and interferometry systems use a 532 Nd:YAG laser (150 pulse width, 100) with a 1" diameter field-of-view. In the end-on system, the laser beam enters through a 26.4 diameter hole in the anode plate, as shown in <ref>c. The interferograms and the shadowgraphs are captured simultaneously using Canon EOS DIGITAL REBEL XS cameras. The interferometry and shadowgraphy systems record 1 frame per shot. The shots are reproducible, and we build up dynamics over multiple shots with identical initial conditions.
We also use a time-gated micro-channel plate (MCP) camera to capture extreme-ultraviolet (XUV) self-emission from the plasma. The camera captures 4 frames (10 inter-frame time, 5 exposure time) recorded on isolated quadrants of the MCP via 200 diameter pinholes. The XUV camera looks onto the wires in the yz-plane, with an azimuthal viewing angle of 7.5 with respect to the x-axis, and a 5 polar angle to the horizontal (xy) plane. The diffraction-limited spatial resolution of the system, for photon energies between 10-100 eV, is about 180-18, while the geometric resolution is about 300.
ยง RESULTS
ยง.ยง Shadowgraphs for different wire diameter
<ref> shows the side-on (xz-plane) shadowgraphs for different wire diameters d_wire = 33-100. In each image, plasma flows from the left to the right, and we mark the initial position of the wires, determined from the pre-shot images, using a white line. We record the shadowgraphs at 150 ns after current start for the 100 diameter wire array, and at 200 ns for the 33-75 diameter arrays. In each shadowgraph, the wires expand to form an opaque region around the initial wire position. In this region, the propagating laser beam is lost, either because the density exceeds the critical density of the propagating light (n_e,critโ4e21), or due to strong density gradients which refract the light out of the optical system's field of view. This dense region of plasma expands in the +x-direction, driven by the outwardly-directed j ร B force in the AK gap. Adjacent to the high-density region, we observe a region of relatively uniform density, followed by a narrow column of intensity fluctuations (indicated by a red rectangle in <ref>). This plasma column, consistent with observations of a magnetic precursor column in the literature,<cit.> can be observed further away from the wires for the 33-75 diameter wires, compared to the 100 wire array, which was recorded at an earlier time.
In addition to expansion in the +x-direction, the wire cores and the coronal plasma also expand into the AK gap. <ref> shows a magnified view of the AK gap for the 75 wire array, both before the experiment, and at 200 ns after current start. Reduction in size of the AK gap occurs due to wire core and coronal plasma expansion, as well as expansion of plasma from the cathode surface. The cathode surface plasma arises from current-driven ablation at the cathode surface, and photoionization via soft X-ray radiation generated by the wire cores. As observed in <ref>, the array with 100 wires exhibits the largest reduction in the AK gap size, while the AK gap remains open for wire diameters d_wireโค75.
We use the intensity of the shadowgraphs to estimate the diameter of the wire cores, and the size of the cathode surface plasma. We crop the shadowgraph to a smaller window which includes the cathode surface, the AK gap, and the expanding coronal plasma (-3โค x โค0, |z| โค5) (see <ref>b); then integrate the pixel intensity along the z-direction. <ref>c shows the integrated pixel intensity as a function of position x for the 75 diameter array, both for the preshot and the experimental shadowgraphs. The dark-to-light transitions in the preshot intensity profile represent the positions of the cathode surface and the wires, while those in the shot intensity profile represent the `edges' of the cathode plasma and the coronal plasma respectively. We fit a sigmoid function โ the cumulative density of a normal distribution โ to the light-to-dark intensity transitions, and determine the wire coronal and cathode plasma `edges' from the means ฮผ of the fitted functions (or equivalently, the full-width-at-half-maximum of the transition). We estimate the uncertainty from the fitted function's standard deviation ฯ. We can then estimate the width of the cathode plasma from the distance between the cathode plasma edge and the position of the cathode surface (quantity A in <ref>c). Similarly, we determine the coronal plasma radius from the distance between the initial wire position and the position of the coronal plasma edge (quantity B in <ref>c).
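A sketch of this edge-detection step is shown below on a synthetic lineout: a cumulative-normal (sigmoid) profile is fitted to an intensity transition, with the fitted mean taken as the edge position and the fitted standard deviation as its uncertainty. The model form, initial guesses, and noise level here are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, a, b, mu, sigma):               # cumulative normal profile
    return a + b * 0.5 * (1.0 + erf((x - mu) / (sigma * np.sqrt(2.0))))

x = np.linspace(-3.0, 0.0, 300)                   # position across the AK gap [mm]
rng = np.random.default_rng(1)
intensity = edge_model(x, 0.1, 0.8, -1.2, 0.15) + 0.02 * rng.normal(size=x.size)

p0 = [intensity.min(), np.ptp(intensity), x[np.argmax(np.gradient(intensity))], 0.1]
popt, _ = curve_fit(edge_model, x, intensity, p0=p0)
edge_position, edge_uncertainty = popt[2], abs(popt[3])
```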
<ref>a shows the variation of the coronal plasma radius and the cathode plasma width with varying initial wire diameter. The coronal radius increases with increasing initial diameter of the wires, while the width of the cathode plasma remains relatively constant at roughly 0.5. <ref>b shows the variation of the width of the AK gap with the initial wire diameter. Consistent with the shadowgraphs in <ref>, the AK gap width decreases with increasing wire diameter. The 33 wires exhibit the smallest reduction in gap size, where the gap decreases from about 2.7 initially to roughly 1.8 during the experiment. In contrast, the 100 wires exhibit the largest decrease in gap size, from about 2 initially to roughly 0.2 150 ns after current start. The smaller gap size for large wire diameters is primarily due to the increased core and coronal plasma radius of the larger wires. This is in contrast to , in which the AK gap closure was almost entirely due to the expansion of plasma from the return electrode, and not from the thin 7.5 diameter tungsten wires.
The methodology described above provides an upper limit on the wire core diameter. This is because the opaque region in the shadowgraph includes both the wire core and the surrounding coronal plasma. Moreover, density gradients in this region refract the light away from the relatively denser wire cores, resulting in a magnified image. Nevertheless, the shadowgraphs reflect the general trend observed in the variation of the core size with wire diameter. X-ray backlighter imaging, which can probe deeper into the core region, can provide a better estimate of the core size. However, this diagnostic was not available for this experimental series. Previous experiments aimed at characterizing the core size report that values determined from shadowgraphs can be 5-10ร larger than that determined from simultaneous X-ray imaging. <cit.>
ยง.ยง Temporal evolution of ablation from the array
We compare side-on shadowgraphs for 50 diameter wire arrays at 150 ns, 200 ns, and 250 ns in <ref>. These shadowgraphs are recorded in separate experimental shots, with identical load hardware. The plasma column (red box in <ref>) on the right of the image travels in the +x-direction, from x โ 12 mm at 150 ns to x โ 22 mm at 250 ns after current start. This corresponds to an average velocity of about 100, which is consistent with the magnitude of flow velocity observed in previous wire array experiments. <cit.> The outward translation of the plasma column in our over-massed array is in contrast to that observed in the under-massed case, where the column remains mostly stationary (V <15) between the time of formation and implosion.<cit.>
The AK gap remains open throughout the experiment. The temporal evolution of the coronal plasma radius and the cathode plasma width are shown in <ref>a. The measured coronal radius decreases weakly with time, from about 1.2 at 150 ns to about 0.75 at 250 ns after the current start. In contrast, the width of the cathode surface plasma remains roughly constant at about 0.5. Due to the decreasing coronal plasma radius, the AK gap also becomes slightly larger with time, as observed in <ref>b.
ยง.ยง Instability Growth
Axial perturbations of the coronal plasma appear in the AK gap, as observed in <ref>b. The presence of this axial instability is consistent with previous studies of wire array ablation. <cit.> In <ref>, we characterize the amplitude and wavelength distribution of the instability as a function of the initial wire diameter. We determine the amplitude from half the peak-to-valley distance of the plasma-vacuum interface in the AK gap, which we characterize using the interface-detection technique similar to that described in Sec. <ref>. Similarly, we estimate the wavelength of the instability from the peak-to-peak separation of the perturbations at the plasma-vacuum boundary. In <ref>, the red solid line and the blue dashed line represent the median and mean of the distribution respectively. The bottom and top sides of the rectangle represent the 25^th and 75^th percentile (i.e. the interquartile range), while the end caps show the full range of the distribution. The mean and median values for the perturbation amplitude are similar for most wire diameters, and remain largely invariant of the initial wire diameter, with a value of about 50. Both the 33 and 100 wires exhibit a relatively higher upper range of the perturbation amplitude, showing the existence of large amplitude perturbations. The wavelength distribution remains largely independent of the initial wire diameter, with a median peak-to-peak separation of about 200. We can also measure the temporal variation of the amplitude and wavelength from the shadowgraphs in <ref>. Both the amplitude and wavelength of the instability exhibit little variation with time, which indicates saturation of the instability growth. We note that the shadowgraphs provide a line-integrated (along the y-direction) view of the perturbation. Therefore, the process of extracting wavelength from the peak-to-peak separation, for the case where peaks from multiple wires overlap along the line-of-sight, becomes more complicated.
ยง.ยง Electron density measurements
<ref>a & b show the side-on (xz-plane) interferogram, together with the line-integrated electron density, recorded at t = 150 after current start for the array with 50 diameter wires. We indicate the initial position (x = 0 mm) of the wires using a red line. Close to the wires, the high-density plasma forms an opaque region where the probing beam is lost, similar to that in the side-on shadowgraphy images (<ref>). Adjacent to this opaque region, where the density is lower, interference between the probe and reference beams forms periodic bright and dark fringes. In this region, plasma flow from the wires distorts the fringe pattern, whereas, in regions devoid of plasma, the fringes appear undistorted. We trace the fringes by hand, and post-process the traced images in MAGIC2 to calculate the line-integrated electron density from the distortion of the fringes.<cit.> As expected due to time-of-flight effects, electron density is high close to the wires, and decreases with distance from the array. At the plasma-vacuum boundary (x โ 12-15 mm), the plasma forms a discontinuous column of enhanced electron density. The sharp rise in the electron density in this region indicates the presence of a shock-like structure. The width of the transition is about 2. The shape of the plasma column exhibits significant modulation in the axial direction, consistent with what we observe in the simultaneously-recorded shadowgraph of the load (<ref>a).
<ref>c & d show the end-on (xy-plane) interferogram and line-integrated density recorded 150 ns after current start. The probing laser beam in the region x < 4 mm is blocked by the load hardware; but the flow region x โฅ 4 mm is illuminated via the laser feed shown in <ref>c. As seen in <ref>a & b, plasma flow emanating from the wires is redirected towards the center (y = 0 mm) of the array. The converging flows collide or `pinch', forming a region of enhanced density, roughly 12-15 from the wires. The pinch has also been typically referred to as the `magnetic precursor column' in wire array literature, as it is the precursor to the final implosion phase which we suppress in these experiments by over-massing the wire array. <cit.> The position of the pinch is consistent with that of the column of enhanced electron density observed in the side-on electron density map (<ref>b).
In <ref>b, at x ≈ 4 mm from the wires, the line-integrated density is ⟨n_e L_y⟩ ≈ 5-6e18, which falls to ⟨n_e L_y⟩ ≈ 0.4e18 at 11 mm from the wires, right before the position of the pinch. From end-on interferometry (<ref>d), we estimate the integration length scale L_y by computing the extent of the plasma in the y-direction. This gives us values of L_y (x = 4) ≈ 8, and L_y (x = 11) ≈ 4. The average electron densities, inferred from ⟨n_e L_y⟩ / L_y, are therefore n̄_e ≈ 4e18 at x = 4, and n̄_e ≈ 1e18 at x = 11 from the wires. In <ref>b, the pinch exhibits a line-integrated density of ⟨n_e L_y⟩ ≈ 0.8e18 at approximately 13 from the wires. Assuming a length scale L_y ≈ 4, the average electron density in the pinch is n̄_e ≈ 2e18. This represents a roughly 2× jump in the electron density at the pinch compared to the flow upstream of the pinch.
ยง.ยง XUV Self-Emission
XUV self-emission images from the load hardware with the 50 diameter wires are shown in <ref>. The wires and the cathode appear as regions of bright emission. Emission from the inner wires appears uniform in intensity, indicating roughly equal current distribution in the wires, as predicted by the magnetostatic calculation (<ref>). The outer wires, however, appear dimmer, which may indicate that the current has switched to the inner wires due to the higher initial rate of ablation from the outer wires, as indicated by the rocket model calculation in Sec. <ref>. Although the wires appear as well-separated columns of emission, the resolution of the optical system prevents us from making quantitative measurements of the core diameter from the XUV images. We observe that the flows from the wires converge to form the pinch, which appears as a brightly-glowing column oriented in the z-direction. The increased emission from the pinch is consistent with its higher electron density (Figure <ref>). Furthermore, shock heating and Ohmic dissipation in the pinch may also contribute to a higher temperature, and consequently, higher radiative emission. The XUV images also exhibit the axial non-uniformity in the shape of the pinch, consistent with the interferometry and shadowgraphy results (<ref>a and <ref>b). Finally, the structure of the plasma ablation and the pinch remains roughly invariant across the different frames over the observation window of 150-185. This is in contrast to the shadowgraphy and interferometry images (<ref> & <ref>) which show significant (V โ100) motion of the pinch. This may indicate that this is a `ghost' image of the pinch, recorded when the MCP is not triggered, due to radiation bleed-through at the time when emission from the pinch is at a maximum.
In <ref>, we compare the XUV emission from the planar wire arrays with 50 and 100 diameter wires respectively. In both cases, the pinch is visible as a bright column of emission, and the shape of the pinch is similar between the two images. For the 50 diameter case, the wires appear as discrete columns of enhanced emission, as can be observed in lineouts of the intensity at z = 4 mm (<ref>c). In contrast, the wires are not easily distinguishable for the thicker 100, consistent with the core size becoming comparable to the inter-wire separation, as seen in <ref>.
ยง DISCUSSION OF RESULTS
ยง.ยง AK Gap and Wire core size
Previous wire array experiments with โผ10 diameter wires show that the diameters of the wire cores and the surrounding corona both increase with initial wire diameter, and are largely independent of the current per wire and the inter-wire separation. <cit.> Our experimental results are consistent with this effect โ <ref>a exhibits a roughly linear increase in the coronal radius with increasing wire diameter, and the measured coronal diameter is roughly 20-25ร the initial wire diameter. While the coronal radius increases with the initial wire diameter, the size of the cathode plasma remains relatively constant (see <ref>a). This is expected since changing the initial wire diameter is not likely to affect the current distribution through the cathode. The larger coronal radius is, therefore, the primary reason for gap closure in the thick 100 case. The gap closes at 150 ns after current start, which makes d_wire = 100 an undesirable wire diameter, both in these planar wire experiments and on the Z experiments for which these experiments are a scaled test. Furthermore, the coronal radius also becomes larger than the inter-wire separation in this case, which inhibits plasma ablation and magnetic field advection from the array.<cit.>
The early gap closure for the 100 diameter case could be a consequence of lower Ohmic heating in the skin region around the wire core. The initial electrical explosion of the wires forms a dense cold wire core consisting of vapor and microscopic liquid metal droplets. Without further Ohmic heating by the current, we would expect the wire core radius R(t) to expand isotropically at a rate comparable to its local sound speed C_core into the vacuum, i.e. dR/dt โผ C_core. For thin wires, the current flowing over the wire core surface in the skin region Ohmically heats the material at the edge of the core, forming coronal plasma that is redirected by the global ๐ฃ ร ๐ force. However, if the wire core is sufficiently large, current density in the skin region j_skin = I(t)/(2 ฯ R(t) ฮด) will be lower. Here, I(t) is the driving current, and ฮด = โ(2 ฮท/ฯฮผ) is the resistive skin depth, which depends on the material resistivity ฮท, the angular frequency ฯ of the driving current, and the medium permeability ฮผ. Consequently, for a large initial wire diameter, the Ohmic heating rate ฮท j_skin^2 may be too small to ionize all of the expanding gas. This will allow neutral gas expanding out of the wire core to remain unionized, and thus, unaffected by the ๐ฃ ร ๐ force expelling it from the AK gap. It is this neutral gas which may be responsible for the observed gap closure. In future experiments, the importance of neutral gas expansion could be tested by exploiting the different refractive indices of plasma and neutral gas using two-color optical measurements.<cit.>
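As a rough numerical illustration of the scaling argument above, the short sketch below evaluates the resistive skin depth and the skin-region current density for two core diameters. The resistivity, effective drive frequency, and current per wire used here are stand-in values chosen only to exercise the formulas; they are not measured quantities from this experiment.

```python
# Sketch of the skin-depth / skin-current scaling discussed above.
# All numerical inputs are illustrative assumptions.
import numpy as np

mu0 = 4e-7 * np.pi            # vacuum permeability [H/m]
eta = 3e-7                    # assumed resistivity at the core edge [ohm*m]
f = 1.0 / (4 * 250e-9)        # effective drive frequency for a 250 ns rise [Hz]
omega = 2 * np.pi * f

delta = np.sqrt(2 * eta / (omega * mu0))        # resistive skin depth [m]

I_wire = 65e3                                   # assumed current per wire [A]
for d_core in (50e-6, 100e-6):                  # initial core diameters [m]
    R = d_core / 2
    j_skin = I_wire / (2 * np.pi * R * delta)   # skin-region current density [A/m^2]
    heating = eta * j_skin**2                   # Ohmic heating rate [W/m^3]
    print(f"d = {d_core*1e6:.0f} um: delta = {delta*1e6:.1f} um, "
          f"j_skin = {j_skin:.2e} A/m^2, eta*j_skin^2 = {heating:.2e} W/m^3")
```

Doubling the core radius halves j_skin and reduces the Ohmic heating rate η j_skin^2 by a factor of four, which is the qualitative point of the argument.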
For the 50 μm diameter wire array, the AK gap remains open late in time (t = 250 ns), and the coronal radius does not exceed the inter-wire separation, which is desirable for good ablation from the array (<ref>). In imploding wire arrays, the coronal radius increases initially in time, and then saturates to a constant value. <cit.> The time of saturation, typically 80-100 ns after current start, corresponds to a change in the magnetic field topology around the wires, when the driving global j × B force becomes strong enough to overcome the expansion of the coronal plasma, and redirects it to generate ablation streams. <cit.> In our experiments, we image the wires after the expected time of saturation, and therefore, do not expect significant temporal variation in the size of the wire cores between 150-250 ns after current start. As observed in <ref>a, the coronal plasma radius decreases weakly with time at a rate of about 2 for the 50 μm diameter wires.
The axial instability of the wires is ubiquitous in wire array z-pinch experiments, and is thought to be a modified m=0-like instability, which exhibits a constant amplitude and wavelength later in time, and is largely independent of initial wire diameter, and current per wire.<cit.> The time of saturation of the instability also corresponds to the time at which the wire cores cease to grow.<cit.> In <ref>, we observe that the distributions of the amplitude and the peak-to-peak separation of the perturbations remain largely independent of the initial wire diameter, consistent with observations of the axial instability in imploding wire array z-pinches. The amplitude and the peak-to-peak separation also exhibit minimal variation in time between 150-250 ns, which is well after the expected time of saturation of the instability (โผ 80-100 ns).<cit.>
ยง.ยง Pinch formation and comparison with 3D resistive MHD simulations
To obtain greater insight into the ablation process, we perform three-dimensional simulations of the planar wire array using the resistive magnetohydrodynamic (MHD) code GORGON, a 3D (Cartesian, cylindrical, or polar coordinate) Eulerian resistive MHD code with van Leer advection, and separate energy equations for ions and electrons. <cit.> We use an optically thin recombination-bremsstrahlung radiation loss model, modified with a constant multiplier to account for line radiation, and a Thomas-Fermi equation-of-state to determine the ionization level.<cit.> We simulate a planar wire array with the same geometric dimensions and wire material as in the experimental setup. The current pulse applied to the load was determined from a three-term sum-of-sines fit [∑_i a_i sin(b_i t + c_i)] to the integrated Rogowski signal shown in <ref>. The simulation domain is a cuboid with dimensions 51.2 × 50.4 × 38 mm. We use an initial wire diameter of 50 μm in the simulation, with the initial mass of the wire distributed over a 400 μm diameter circular pre-expanded wire core. The simulations are performed with a grid size of 50 μm. The driving magnetic field, calculated from Ampere's law, is applied as a boundary condition at the bottom-most cells in the simulation domain between the anode and cathode of a coaxial transmission line. The load geometry is implemented as stationary realistic-conductivity electrode material on top of the coaxial line.
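For reference, the sum-of-sines fit mentioned above can be reproduced schematically as follows. The "measured" current trace in this sketch is synthetic, standing in for the integrated Rogowski signal (which is not tabulated in the text), so the fitted coefficients are illustrative only.

```python
# Schematic three-term sum-of-sines fit to a synthetic ~1 MA, ~250 ns rise current trace.
import numpy as np
from scipy.optimize import curve_fit

def sum_of_sines(t, a1, b1, c1, a2, b2, c2, a3, b3, c3):
    return (a1 * np.sin(b1 * t + c1)
            + a2 * np.sin(b2 * t + c2)
            + a3 * np.sin(b3 * t + c3))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 400e-9, 400)                       # time base [s]
true = (1.0e6, np.pi / (2 * 250e-9), 0.0,               # stand-in "true" coefficients
        1.5e5, 3 * np.pi / (2 * 250e-9), 0.3,
        5.0e4, 5 * np.pi / (2 * 250e-9), -0.2)
i_meas = sum_of_sines(t, *true) + 1e4 * rng.normal(size=t.size)

p0 = (8e5, 6e6, 0.0, 1e5, 2e7, 0.0, 1e4, 3e7, 0.0)       # rough initial guess
popt, _ = curve_fit(sum_of_sines, t, i_meas, p0=p0, maxfev=50000)
print("fitted (a_i, b_i, c_i):")
print(np.round(popt.reshape(3, 3), 3))
```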
<ref> shows the simulated current density distribution at a slice through the array mid-plane (z = 0 mm) at 100, 150, and 200 ns after current start. As expected due to the skin effect, current density is concentrated on the outer surfaces of the cathode and the wires. The plasma from the outermost wires carries significantly more current than that from the inner wires, consistent with our magnetostatic prediction (<ref>). Similar to the experiment, the converging plasma flows collide at the y = 0 mid-plane to form the pinch. The pinch appears as a region of high current density, comparable to that in the wires. <ref>a shows a three-dimensional rendering of the current distribution in the load at t = 150 ns. The pinch carries a significant amount of current, and provides a secondary path for the current to close between the anode and the cathode. In our simulation, the current in the pinch, determined from the area integral of the current density in the dashed box shown in Figure <ref>, is roughly 30% of that in the wires. This is consistent with the estimate of 30-40% provided by <cit.>
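The area-integral estimate of the pinch current can be sketched as below. The current-density array here is a synthetic placeholder; in the actual analysis it would be the GORGON output on the slice, and the box bounds would match the dashed box in the figure.

```python
# Sketch of estimating the pinch current as an area integral of a current-density slice.
import numpy as np

dy = dz = 50e-6                                   # cell size [m], matching the 50 um grid
y = np.arange(-5e-3, 5e-3, dy)
z = np.arange(-10e-3, 10e-3, dz)
Y, Z = np.meshgrid(y, z, indexing="ij")
j = 1e10 * np.exp(-(Y**2 + Z**2) / (1.0e-3)**2)   # placeholder pinch-like current density [A/m^2]

# area integral over the dashed-box region enclosing the pinch
box = (np.abs(Y) < 2e-3) & (np.abs(Z) < 5e-3)
I_pinch = np.sum(j[box]) * dy * dz                # [A]
print(f"I_pinch ~ {I_pinch / 1e3:.0f} kA")
```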
<ref> also shows the distribution of magnetic field lines in the planar wire array. We determine the field lines from contours of the z-component of the magnetic vector potential A_z. Near the AK gap, the magnetic field topology is similar to that calculated from our magnetostatic simulation (see <ref>). The magnetic field lines are straight and uniform inside the AK gap, and bend around the outer wires to form closed field lines outside the array. The field inside the AK gap is significantly stronger than that outside, as observed from the relatively high density of lines in this region. The ablating plasma advects some of the magnetic field from inside of the array to the outside. Near the inner wires, the advected magnetic field lines are straight and uniform, oriented along the y-direction. However, away from the centerline (y = 0) and toward the edges of the plasma flow, the field lines bend, driven by the inward-directed ablation from the outermost wires. The spatial variation of the driving magnetic field along the z-direction inside the AK gap is small (<5%).
The arrows in <ref> show the direction of the j × B force acting on the ablated plasma. Near the inner wires, where the magnetic field is oriented in the y-direction, the force is directed in the +x-direction, whereas at the outermost wires, the curvature of the magnetic field results in a force directed towards the center of the array. As the plasma propagates away from the wires, the bending of the field lines due to the flows from the outermost wires results in an inward-directed (along the x-direction) j × B force. This drives the collision of the plasma flows emanating from the wires, and the formation of the pinch. Similar to a traditional z-pinch, the magnetic field lines form closed circular loops, and the j × B force is directed towards the center of the pinch. However, unlike a traditional z-pinch, the pinch experiences both mass and momentum injection from the left side of the pinch. Driven by the magnetic and thermal pressure of the plasma behind the pinch and the magnetic tension of the bent field lines, the pinch accelerates in the +x-direction. In the simulation, the center of the pinch travels about 8 mm between 150-250 ns, resulting in an average velocity of 80 km/s. This is about 20% lower than that inferred from the translation of the pinch in the experimental shadowgraphs (<ref>). The flow upstream of the pinch is both supersonic (M_S ≈ 6) and super-Alfvénic (M_A ≈ 2), similar to that observed in previous pulsed-power-driven experiments of aluminum wire arrays.<cit.> Due to the high current density in the pinch, it is a site of strong Ohmic dissipation. The electron temperature inside the pinch is T_e ≈ 100 eV, which is significantly higher than that in the plasma flow behind the pinch (T_e ≈ 6.5 eV). The temperature of plasma near the wires is also about 5 eV, which may explain why the wires appear dimmer in the XUV images compared to the pinch (<ref>).
<ref>b shows a slice of the simulated electron density at the array mid-plane (z = 0) at 150 ns after current start. The electron density distribution appears similar to that in the experiment (<ref>). Electron density is high closer to the wires, and falls with increasing distance in the x-direction, consistent with time-of-flight effects. The plasma flow from the inner wires is directed outwards along the x-direction, while that from the outermost wires is directed towards the center of the array. The flow converges, similar to that in the experiment, to form a pinch. <ref>c shows a side-on slice of the electron density through the array mid-plane (y = 0). The pinch appears as a discontinuous region of enhanced electron density at the vacuum-plasma boundary, similar to that in the experiment. Immediately behind the pinch, both the end-on and side-on slices show a region of lower density, consistent with the relatively-uniform intensity region observed in the experimental shadowgraphs (<ref>). The electron density inside the pinch is n_e ≈ 4e19, while that in the plasma flow just behind the pinch is 2e18. Our experimentally inferred value of the density behind the pinch is consistent with the simulation, while that for the pinch is lower than the density observed in the simulation. This may indicate that the integration length scale used to determine the experimental estimate of density is an overestimation of the true value. As observed in <ref>a, the width of the simulated pinch is approximately 1.5 mm, whereas we estimate a value of about 4 mm from our experimental end-on density map (<ref>d). This discrepancy can result from line integrating through the axially-modulated pinch, which widens the observed width of the pinch in the line-integrated density map (<ref>). Using an integration length of 1.5 mm results in an electron density of n̅_e ≈ 5e18, which is closer to, although still lower than, the density of the pinch in the simulation.
<ref>e and <ref>f show the line-integrated electron density in the end-on (xy-plane) and side-on (xz-plane) planes, respectively. In <ref>d, we compare line-outs of the simulated electron density integrated along the y-direction with that from the experiment at z = 2 mm. In both the experiment and the simulation, the line-integrated density falls with distance from the wires, and the pinch appears as a local enhancement of the density at the plasma-vacuum boundary. The magnitude of the line-integrated electron density in the simulation is comparable to that from the experiment. Note that the sharp increase in density at the pinch, as observed in <ref>c, is muted by line integration. The high density and the temperature of the pinch both contribute to the strong emission from the pinch, visible in the experimental XUV self-emission images (<ref>).
In contrast to the experiment, where the pinch is located at x ≈ 12-15 mm from the wires, the simulated pinch is closer to the wires (x ≈ 10 mm) at this time. The slower velocity of the pinch in the simulation indicates a comparatively smaller driving force behind the pinch. We expect the pinch to be driven outwards by the magnetic and thermal pressures of the plasma behind it, and by the magnetic tension of the bent field lines. A simple similarity argument can be used to show that the characteristic velocity of the pinch should be comparable to the local magnetosonic velocity, V_pinch^2 ∼ (V_A^2 + C_S^2). Here, V_A is the Alfvén speed, and C_S is the sound speed of the plasma right behind the pinch. Previous comparison of experimental results with simulations indicates that the local thermodynamic equilibrium Thomas-Fermi model implemented in the simulation underestimates the temperature of the plasma.<cit.> The Alfvén speed V_A = B/√(μ_0 ρ) is a function of the magnetic field, which in turn depends on the magnetic Reynolds number R_m = UL/η̅. Here, U and L are the characteristic velocity and length scales of the plasma, ρ is the mass density, and η̅ ∼ Z̅ T_e^-3/2 is the magnetic diffusivity, which varies with the electron temperature T_e and the average ionization Z̅ of the plasma. A lower temperature leads to a lower R_m, and thus a relatively smaller advected field. This can reduce the magnetic pressure behind the pinch, and therefore contribute to a smaller velocity. In future experiments, optical Thomson scattering could be used to simultaneously characterize the velocity and temperature of the plasma. <cit.>
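To make the similarity argument concrete, the following sketch evaluates the Alfvén speed, a simple sound-speed estimate, and a Spitzer-like magnetic Reynolds number for assumed conditions behind the pinch. The field strength, average ionization, Coulomb logarithm, and gradient length scale are assumptions (and the quoted electron densities are taken to be per cm^3), so the output is an order-of-magnitude illustration rather than a diagnostic result.

```python
# Order-of-magnitude check of V_pinch^2 ~ V_A^2 + C_S^2 and R_m = U L / eta_bar.
import numpy as np

mu0 = 4e-7 * np.pi
m_i = 27 * 1.67e-27          # aluminum ion mass [kg]
k_B = 1.38e-23               # Boltzmann constant [J/K]
eV_to_K = 1.16e4

B = 5.0                      # assumed advected field behind the pinch [T]
n_e = 2e18 * 1e6             # electron density, assuming cm^-3 in the text [m^-3]
Z = 3.5                      # assumed average ionization
T_e = 6.5                    # electron temperature behind the pinch [eV]

rho = (n_e / Z) * m_i                                # mass density [kg/m^3]
V_A = B / np.sqrt(mu0 * rho)                         # Alfven speed [m/s]
C_S = np.sqrt((Z + 1) * k_B * T_e * eV_to_K / m_i)   # sound speed, assuming T_i = T_e [m/s]
V_char = np.sqrt(V_A**2 + C_S**2)                    # magnetosonic estimate [m/s]

lnLambda = 5.0                                       # assumed Coulomb logarithm
eta_sp = 5.2e-5 * Z * lnLambda / T_e**1.5            # approximate Spitzer resistivity [ohm*m]
D_m = eta_sp / mu0                                   # magnetic diffusivity [m^2/s]
U, L = 80e3, 2e-3                                    # flow speed [m/s] and gradient scale [m]
R_m = U * L / D_m
print(f"V_A ~ {V_A/1e3:.0f} km/s, C_S ~ {C_S/1e3:.0f} km/s, "
      f"V_char ~ {V_char/1e3:.0f} km/s, R_m ~ {R_m:.1f}")
```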
Finally, in both the experiment and the simulation, the shape of the pinch exhibits an axial non-uniformity. As observed in <ref>b and <ref>c, the pinch appears closer to the wires at the bottom of the array, and further away near the top. Our simulations indicate that this axial non-uniformity in the shape of the pinch may be related to flow over the extended anode plate (see <ref>), which prevents the expansion of the pinch at the bottom of the array, and also modifies the current distribution in the electrodes. When the simulations are repeated without an extended anode plate, the axial modulation in the pinch structure is reduced. In future experiments, we can mitigate this effect by exploring array geometries that do not require the extended anode plate.
ยง CONCLUSIONS
In this paper, we explore the use of an over-massed planar wire array as a platform for laboratory astrophysics experiments, and as a scaled experiment to investigate the ablation of thick wires in cylindrical wire arrays driven by 10 MA current pulses. We characterize the ablation of plasma from a planar wire array fielded on the COBRA pulsed-power machine (1 MA, 250 ns rise time). The wire array comprises a linear arrangement of 15 equally-spaced aluminum wires separated from a planar cathode surface by a 2 mm AK gap. The planar wire array is designed to provide a driving magnetic field (80-100 T) and current per wire (about 60-65 kA) similar to that in a ∼10 MA cylindrical exploding wire array fielded on the Z pulsed-power machine. Magnetostatic calculations show that the driving magnetic field inside the AK gap at peak current (1 MA) is about 81 T, which is higher than that in a typical cylindrical wire array fielded on 1-MA university-scale facilities (about 20-40 T). In contrast to previous planar wire array experiments, the wire arrays are over-massed, so that they provide continuous ablation for the duration of the experiment, without experiencing the implosion stage.
We perform a parametric study by varying the initial wire diameter between 33-100 μm. Laser shadowgraphy images show that the largest wire diameter (100 μm) exhibits early closure of the AK gap (150 ns after current start), while the gap remains open for the duration of the experiment for wire diameters between 33-75 μm. The early closure of the AK gap for the 100 μm diameter case is primarily due to the larger coronal radius of the wires, which may be a consequence of reduced Ohmic heating in the skin region surrounding the wire cores. For these large diameter wires, the coronal radius also becomes comparable to the AK gap size and the inter-wire separation, which is undesirable for good ablation from the wire array. Axial instabilities appear at the vacuum-plasma interface in the AK gap. The distributions of the amplitude and peak-to-peak separation of the perturbations remain largely independent of the initial wire diameter, as has been previously observed in imploding and exploding cylindrical wire arrays.
Laser interferometry and time-gated XUV imaging are used to probe the plasma flows. Plasma ablating from the wires is redirected towards the array mid-plane (y = 0 mm), and the resulting collision of the converging flows generates a pinch, which propagates away from the wires at an average velocity of about 100 km/s. The pinch appears as a discontinuous column of enhanced plasma density (n̅_e ≈ 2e18) and strong XUV emission. Three-dimensional resistive MHD simulations reproduce the primary characteristics of the ablation observed in the experiments. Visualization of the current density and magnetic field in the load demonstrates that the flows converge under the action of a pinching j × B force, which arises from the bending of magnetic field lines due to the inward-directed flows from the outermost wires. The pinch is a site of high current density, and exhibits a magnetic field topology similar to that of a z-pinch. The simulated pinch also exhibits a significantly higher temperature than the plasma behind it, which, combined with the enhanced density, accounts for the strong XUV emission observed in the experiment.
ยง ACKNOWLEDGEMENTS
The authors would like to thank Todd Blanchard and Harry Wilhelm for their work in support of the experiments. This work was funded by NSF and NNSA under grant no. PHY2108050, and by the EAGER grant no. PHY2213898. Simulations were performed on the Engaging cluster funded by DE-FG02-91-ER54109.
ยง DECLARATION OF CONFLICTS OF INTEREST
The authors have no conflicts of interest to disclose.
ยง DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
http://arxiv.org/abs/2306.02013v1
|
20230603060654
|
Instabilities of longitudinal vortex rolls in katabatic Prandtl slope flows
|
[
"Chengnian Xiao",
"Inanc Senocak"
] |
physics.flu-dyn
|
[
"physics.flu-dyn"
] |
Instabilities of longitudinal vortex rolls in katabatic Prandtl slope flows
Chengnian Xiao and Inanc Senocak
July 31, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Stationary counter-rotating longitudinal vortex pairs emerge from one-dimensional Prandtl slope flows under katabatic as well as anabatic conditions due to a linear instability when the imposed surface heat flux magnitude is sufficiently strong relative to the stable ambient stratification. For anabatic flows, these vortices have already been identified to exhibit a unique topology that bears a striking resemblance to speaker-wires, since they stay coherent as a single unit without the presence of another vortex pair. Under katabatic conditions and at a constant Prandtl number, we find that the longitudinal vortices emerging at a range of different slope angles possess a similar topology to their anabatic counterparts. We determine the existence of both fundamental and subharmonic secondary instabilities depending on the slope angle for the most likely transverse base flow wavelength. Our results indicate that the most dominant instability shifts from a fundamental to a subharmonic mode with increasing slope angle. At shallow slopes, these dynamics contrast with those of the speaker-wire vortices in anabatic slope flows at the same angle, for which the subharmonic instability is clearly dominant. These modes are responsible for the bending and movement of single or multiple speaker-wire vortices, which may merge or reconnect to reach dynamically more unstable states, eventually leading to transition towards turbulence. We demonstrate that at sufficiently steep slopes, the dynamics of these vortex pairs are dominated by long-wave reconnections or two-dimensional mergers between adjacent pairs.
ยง INTRODUCTION
Prandtl's model for katabatic and anabatic slope flows is a popular abstraction used to understand the fundamental characteristics of stably stratified flows in cold weather conditions over non-flat terrains, such as nocturnal winds over hills or katabatic flows over the (Ant-)arctic ice sheets <cit.>. The model can be solved exactly for laminar conditions to obtain buoyancy and velocity solutions that are sinusoidal profiles damped exponentially with increasing height <cit.>. The profiles demonstrate a dominant near-surface along-slope jet which is topped by a weak reverse flow that decays with growing height, as shown in Fig. <ref>.
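For orientation, the damped-sinusoid character of the laminar profile can be sketched as below. The expressions use the classical constant-surface-buoyancy form of Prandtl's solution in normalized units, purely to illustrate the near-surface jet and the weak reversed flow aloft; the constant-flux variant considered in this work differs by a phase shift and an amplitude factor.

```python
# Illustration of the damped-sinusoid shape of the laminar Prandtl slope flow profile.
# Normalized, constant-surface-buoyancy form shown for illustration only.
import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(0.0, 10.0, 500)        # slope-normal distance in units of the e-folding length
u = np.exp(-z) * np.sin(z)             # along-slope velocity (normalized): near-surface jet + weak reversal
b = np.exp(-z) * np.cos(z)             # buoyancy perturbation (normalized)

fig, ax = plt.subplots(figsize=(4, 4))
ax.plot(u, z, label="u (normalized)")
ax.plot(b, z, label="b (normalized)")
ax.set_xlabel("normalized amplitude")
ax.set_ylabel("normalized slope-normal distance")
ax.legend()
fig.tight_layout()
fig.savefig("prandtl_profile_sketch.png", dpi=150)
```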
In previous investigations, the stability of Prandtl's slope flows and travelling waves along a vertical wall has been studied by <cit.>.
We have analyzed the linear stability of katabatic <cit.> and anabatic <cit.> Prandtl slope flows subject to constant surface heat fluxes for inclination angles 2^โ<ฮฑ<70^โ and found the dominating mode to be a stationary longitudinal roll instability at shallow slopes and a traveling wave instability at slope angles larger than 10^โ (anabatic) or 70^โ(katabatic).
While the nonlinear travelling wave solutions arising from the corresponding linear mode of the Prandtl slope flows over a vertical wall have been analyzed by <cit.>,
we instead directed our focus to an investigation of the dynamics of longitudinal rolls arising from the primary longitudinal roll instability under anabatic conditions over a shallow slope with ฮฑ=3^โ subject to constant surface heat flux <cit.>, which is a novel vortex configuration that has not been studied before.
Our prior studies have demonstrated that the vortex instabilities of stably stratified flows are determined by the slope angle ฮฑ, the transverse vortex spacing b_y, as well as
the dimensionless stratification perturbation parameter ฮ _s, which can also be interpreted as the surface heat flux normalized by the background stratification. In this present study, we intend to focus on the dynamics of stationary vortices in katabatic slope flows.
In contrast to anabatic slope flows which appear only fleetingly during morning transition before the positive surface heat flux overpowers the surrounding stable stratification, katabatic conditions can be stable and long-lasting
as manifested in nocturnal boundary layers or above the (Ant-)arctic ice sheet. Another key difference is the ability of katabatic conditions to support stationary longitudinal vortices emerging from a primary instability at very steep slopes up to 65^โ <cit.>, whereas anabatic conditions do not admit this for slopes larger than 10^โ<cit.>.
Numerous experiments under simpler configurations, i.e. without stratification or surface inclination, have confirmed the emergence and persistence of highly organised two-dimensional (2-D) vortex structures in the course of a developing shear
layer downstream of a splitter plate<cit.>. These vortices have been noticed to arise under a multitude of different flow
conditions, and appear to be remarkably robust with respect to external disturbances <cit.>. It is evident that any progress in the understanding of turbulent transition will need to accommodate the dynamics and interaction of these large vortex structures.
In a seminal work on vortex dynamics, <cit.> studied the linear stability of parallel counter-rotating vortex pairs suspended in quiescent air with neutral stratification and discovered that sinusoidal bending of each individual vortex arises due to a symmetric mode now named after him, which then results in vortex merger and ring formation at precisely those locations where the neighboring vortices approach each other due to bending. This work has been extended by <cit.> to include the analysis of anti-symmetric modes in vortex pairs, whereas <cit.> studied the dynamics of two unsteadily tumbling vortex pairs.
For more complicated flow configurations which involve the simultaneous presence of heat transfer, ambient stratification and non-flat surfaces, the scenario outlined above is expected to be insufficient to describe the entire flow physics.
However, despite the fact that many real-life phenomena such as geophysical flows contain some or all of these complicating factors, there has been very few attempts to properly comprehend vortex instabilities in these challenging situations.
The role of stratification on Stuart vortices has been studied by <cit.>, who showed that elliptic instabilities causing anti-symmetric distortion of vortex cores are suppressed by the presence of stable stratification. However, <cit.> found that stable stratification can also destabilize a single vertical columnar vortex as a result of resonance with internal gravity waves, a so-called radiative instability.
Stable stratification can also give rise to a vortex instability in rotating flows called the "zig-zag" instability, first identified and investigated by <cit.>. Its formation is due to additional self induction as well as mutual induction between vortices as a result of stratification and rotation that are aligned with the main vorticity. Its main effect is symmetric bending of individual vortices in co-rotating pairs and anti-symmetric bending in counter-rotating pairs.
Most of the current and ongoing studies on vortex instabilities such as those listed above have implicitly assumed that the main merger dynamics depend on no more than two vortices, which can be either counter-rotating or co-rotating. A notable departure from this tradition is our recent work on the instability of speaker-wire vortices arising in anabatic Prandtl slope flows with surface heating <cit.>, which required at least two counter-rotating pairs with four vortices to enable vortex mergers or reconnections. This is due to the fact that all fundamental modes are anti-symmetric such that only subharmonic instabilities can move vortices from adjacent pairs closer toward each other.
It has been found that the spacing between adjacent vortex pairs is a main factor for determining the strength of the most dominant modes, which is directly correlated to the likelihood of vortex mergers. An intuitive interpretation of the results presented in <cit.> is that the further the initial spacing of the base vortices is from that of the most stable configuration, the stronger is the subharmonic instability of the vortex configuration which would cause vortex mergers or the formation of new vortices in order to attain the most optimal vortex spacing.
We have also observed in <cit.> that pure mergers, i.e. the process of fusing two neighboring vortices into a single one without any bending, can be a dominant dynamics due to the fact that the two-dimensional vortex instability is almost as strong as the 3D mode with the maximal possible growth rate.
The present study aims to extend our previous analysis to investigate the dynamics of the longitudinal rolls that emerge as the steady configuration of a linear instability under katabatic conditions at shallow slope angles, i.e. subject to surface cooling.
Several distinctions of vortex instabilities under the Prandtl's model sets our work apart from the other aforementioned studies on vortex instabilities such as Crow's instability, elliptic instability, zig-zag instability, and secondary convection rolls. Most significantly, Prandtl's model includes the following key components missing in other configurations: Firstly, a constant ambient stratification that is positioned at an oblique angle to the longitudinal rolls aligned with the main flow direction and secondly, a solid surface wall containing its own boundary layer as part of the Prandtl base flow. Unsurprisingly, the combination of these complicating factors renders a theoretical approach highly difficult and, to the best of our understanding, is not present in the published literature besides our own earlier related work under anabatic conditions. As demonstrated in <cit.>, the dominant vortex instability in anabatic Prandtl slope flows involves two counter-rotating vortex pairs, also termed speaker-wire vortices, thus four individual vortices in total.
Following the analysis in ย <cit.>, we apply a modal linear bi-global stability analysis to identify different instabilities which can arise from the base flow vortices under katabatic conditions.
We will further determine how these instabilities are controlled by external conditions as well as their role in the subsequent transition of slope flows to more unstable configurations.
ยง SPEAKER-WIRE VORTICES IN KATABATIC SLOPE FLOWS
ยง.ยง Governing equations and characteristic flow scales
Let us consider the idealised Prandtl slope flow configuration as shown in figure <ref>, where ฮฑ is the slope angle, and gravity ๐ acts downward in the vertical direction. A constant uniform negative buoyancy flux B_S is imposed at the surface to achieve katabatic conditions. We consider a rotated Cartesian coordinate system whose x axis is aligned with the planar inclined surface. The direction normal to the slope surface is represented by the z axis, whereas the cross-flow transverse direction is aligned with the y coordinate, as shown in Figs. <ref>-<ref>. Let u be the along-slope (longitudinal), v be the cross-slope (transverse), and w be the slope-normal velocity components, such that ๐ฎ=u_i=[u, v, w] is the velocity vector. The unit gravity vector in the rotated coordinate system is then given by g_i=(g_1,g_2,g_3)=[sinฮฑ, 0, cosฮฑ].
The potential temperature, buoyancy, and the Brunt-Vรคisรคlรค frequency are denoted by ฮธ,b, N, respectively, where N is related to the ambient potential temperature as N=โ(g/ฮ_rโฮ_e/โ z'). The buoyancy is defined as a perturbation potential temperature as b=g (ฮ-ฮ_e)/ ฮ_r, where ฮ_r is a reference potential temperature and ฮ_e is the environmental (ambient) potential temperature. The kinematic viscosity and thermal diffusivity of the fluid are denoted by ฮฝ,ฮฒ, respectively, and they are assumed to be constant.
The transport equations for momentum with the Boussinesq approximation and for the perturbation buoyancy field are written as follows:
∂u/∂t + ∇·(u ⊗ u) = -(1/ρ)∇p + b z_α + νΔu ,
∂b/∂t + ∇·(b u) = βΔb - N^2 (z_α·u) ,
where z_α = (sinα, 0, cosα) is the slope-normal unit vector.
The conservation of mass principle is imposed by a divergence-free velocity field
โยท๐ฎ=0.
At the surface z=0, a negative buoyancy flux B_s is imposed to generate katabatic flow conditions against a constant, stably stratified ambient environment quantified by N^2.
For the one-dimensional flow problem, <cit.> extended the exact solution of <cit.> to include a constant buoyancy flux at the surface instead of a constant temperature surface condition and introduced the following characteristic flow scales <cit.>:
l_0 = Pr^-1/4 ν^1/2 N^-1/2 sin^-1/2α ,
u_0 = Pr^1/4 ν^-1/2 N^-3/2 B_s sin^-1/2α ,
b_0 = Pr^3/4 ν^-1/2 N^-1/2 B_s sin^-1/2α ,
where Pr=ν/β is the Prandtl number. A time scale t_0 := l_0/|u_0| = √(νβ) N |B_s|^-1, and a shear scale S_0 := |u_0|/l_0 = √(Pr) ν^-1 N^-1 |B_s| can also be defined from the above scales. We observe from (<ref>)-(<ref>) that the length scale characterizing the laminar boundary layer thickness is independent of the surface flux B_s, whereas the magnitudes of both the reference velocity and buoyancy scales vary linearly with B_s. Since only the magnitude of the surface buoyancy flux appears in the flow scale definitions, the scales are the same for katabatic and anabatic conditions at the same surface buoyancy flux magnitude but with opposite signs.
Subsequently, these characteristic scales will be applied to normalize all flow equations and quantities presented herein.
Specifically, the dimensionless stratification perturbation number ฮ _s as introduced in <cit.>, can be regarded as the imposed surface buoyancy flux B_s normalised by the background stratification scale ฮฒ N^2. This unique parameter is determined from the given external flow parameters as follows:
ฮ _s โก|B_s|/ฮฒ N^2.
It should be mentioned that the ฮ _s is not restricted to Prandtl slope flows; it is a necessary additional parameter whenever there exists multiple independent stratification mechanisms. In <cit.>, we demonstrated the significance of ฮ _s as an independent dimensionless parameter in stable open channels flows that are stratified by the simultaneous action of surface cooling as well as a static ambient stratification.
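A minimal sketch of how the characteristic scales and the stratification perturbation parameter follow from the external inputs is given below. The numerical values are placeholders chosen only to exercise the formulas (they roughly reproduce Pr ≈ 0.71 and a Π_s of order one), not parameters of the present study.

```python
# Sketch of the characteristic Prandtl flow scales and the stratification
# perturbation number, computed from placeholder external parameters.
import numpy as np

def prandtl_scales(nu, beta, N, B_s, alpha_deg):
    Pr = nu / beta
    s = np.sin(np.radians(alpha_deg))
    l0 = Pr**(-0.25) * nu**0.5 * N**(-0.5) * s**(-0.5)       # length scale
    u0 = Pr**0.25 * nu**(-0.5) * N**(-1.5) * abs(B_s) * s**(-0.5)  # velocity scale
    b0 = Pr**0.75 * nu**(-0.5) * N**(-0.5) * abs(B_s) * s**(-0.5)  # buoyancy scale
    t0 = l0 / u0                       # time scale, = sqrt(nu*beta)*N/|B_s|
    Pi_s = abs(B_s) / (beta * N**2)    # stratification perturbation number
    return l0, u0, b0, t0, Pi_s

# placeholder inputs: air-like fluid, weak ambient stratification, 3 degree slope
l0, u0, b0, t0, Pi_s = prandtl_scales(nu=1.5e-5, beta=2.1e-5, N=0.1,
                                      B_s=-3.5e-7, alpha_deg=3.0)
print(f"l0 = {l0:.3g} m, u0 = {u0:.3g} m/s, b0 = {b0:.3g} m/s^2, "
      f"t0 = {t0:.3g} s, Pi_s = {Pi_s:.2f}")
```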
ยง.ยง Steady speaker-wire vortices as longitudinal rolls
The governing flow equations (<ref>)-(<ref>) can be solved on the 2-D cross-slope yz plane to arrive at the steady longitudinal rolls that arise as a saturated linear instability for slope angles less than 70^∘ when Π_s is sufficiently large <cit.>. To create the steady vortices that serve as base flows for the secondary stability analysis, the initial flow field is set to be the laminar Prandtl flow profile superposed with a weak sinusoidal disturbance varying along the transverse y direction. For the simulated cross-slope yz plane, the width is chosen to be an integer multiple of the targeted transverse spacing b_y=λ_y of the vortex pairs, and the total height of the domain is chosen to be at least 50 l_0. Each length scale l_0 along both the vertical and transverse directions is resolved by at least two mesh points.
The evolution of the flow field is simulated until steady state is reached, which happens within a suitable range of ฮ _s values and results in stationary vortices that are the saturation of the growing linear instability mode.
For the spacing of the base vortices, we choose the transverse wavelength ฮป_y of the most dominant longitudinal roll instability, i.e. the primary mode with the largest growth rate for the one-dimensional Prandtl flow profile.
The dependence of ฮป_y
as a function of the stratification parameter ฮ _s for different slope angles is shown in Fig. <ref>, which indicates a clear decrease of the transverse wavelength with increasing value of ฮ _s and slope angle ฮฑ. Further details about the structure of the vortices that serve as base flow for the secondary stability analysis are provided along with the stability results below.
ยง LINEAR SECONDARY INSTABILITY ANALYSIS OF SPEAKER-WIRE VORTICES
Let (U,V,W,B) be the 2-D flow profile of the steady longitudinal rolls as computed from Eqn. (<ref>)-(<ref>), and assuming that modal disturbances to this base flow are of the form
๐ช(x,y,z,t) =[รป(y,z), vฬ(y,z), ลต(y,z), pฬ(y,z),bฬ(y,z)]exp(ik_x x + ฯ t),
then the equations governing the evolution of the flow disturbances approximated to first the order have the following form:
ik_xรป+โvฬ/โ y+ โลต/โ z = 0,
ฯรป+iUk_xรป +โ U/โ yvฬ + โ U/โ zลต
+โรป/โ y V + โรป/โ zW
= -ik_xpฬ - Pr/ฮ _ssinฮฑ(-k_x^2รป+ โ^2รป/โ y^2+โ^2รป/โ z^2 +bฬ),
ฯvฬ+iUk_xvฬ+โ V/โ yvฬ +โ V/โ zลต
+โvฬ/โ yV+โvฬ/โ zW
= -pฬ/โ y-Pr/ฮ _ssinฮฑ(-k_x^2vฬ+โ^2vฬ/โ y^2 +โ^2vฬ/โ z^2),
ฯลต+iUk_xลต +โ W/โ yvฬ +โ W/โ zลต +โลต/โ yV+
โลต/โ zW
= -โpฬ/โ z - Pr/ฮ _ssinฮฑ(-k_x^2ลต+โ^2ลต/โ y^2 +โ^2ลต/โ z^2+ bฬฮฑ),
ฯbฬ+iUk_xbฬ+โ B/โ yvฬ +โ B/โ zลต +โbฬ/โ yV+โbฬ/โ zW
= -sinฮฑ/ฮ _s(-k_x^2bฬ+โ^2bฬ/โ y^2 +โ^2bฬ/โ z^2 -( รป+ลตฮฑ) ),
where รป,vฬ,ลต,pฬ,bฬ describe the shape of the flow disturbance along the slope normal and transverse directions, normalised by the flow scales given in (<ref>)-(<ref>). The normalised base flow field describing the steady vortices is denoted by (U,V,W, B), and the slope angles studied here are ฮฑ =3^โ, 12^โ, 22^โ, 30^โ.
The linearised equations for bi-global stability analysis as shown above can be written as a generalised eigenvalue problem in the following way:
A(k_x)๐ชฬ(y,z)=ฯ B(k_x)๐ชฬ(y,z),
The shape of the complex disturbance vector q̂(y,z) = [û(y,z), v̂(y,z), ŵ(y,z), p̂(y,z), b̂(y,z)]^T varies in the slope-normal (z) and transverse (y) directions, where (û, v̂, ŵ) are the along-slope, cross-slope (transverse) and slope-normal disturbance velocity components.
As a bi-global stability analysis, the slope-normal and transverse dimensions are fully resolved and the disturbance variation along the streamwise direction is approximated by only one Fourier mode with wave number k_x. When k_x is zero, then the corresponding mode is 2-D without any streamwise variation, whereas a positive k_x implies a full 3-D disturbance. The appropriate boundary conditions for this problem are no slip for disturbance velocities at z=0 and free-slip at zโโ, whereas for the buoyancy disturbance, โbฬ/โ z|_z=0= โbฬ/โ z |_zโโ=0 are imposed. The slope-normal derivative of pressure disturbance pฬ is also set to zero at both z=0 and zโโ. On both transverse boundaries, periodic conditions are imposed for all variables. The generalised eigenvalue problem (<ref>) is discretised via spectral elements in the transverse plane,
which is available in Nektar++ <cit.>. For a base flow containing two full transverse spatial periods, i.e. two speaker-wire vortex pairs, the cross-slope dimension L_y=2ฮป_y is chosen for the simulation domain; 2 degrees of freedom are used to resolve a unit length scale l_0 in the cross-slope and vertical directions, and the resulting generalised eigenvalue problem are solved with the modified Arnoldi algorithm as implemented in Nektar++. Linear stability of the problem is associated with the real part of the eigenvalues ฯ, where {ฯ}>0 represents a positive exponential growth for the corresponding eigenmode, thus an unstable mode. The imaginary part of ฯ
is the temporal oscillation frequency for the corresponding eigenmode, and {ฯ}=0 represents a stationary mode.
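Schematically, the generalized eigenvalue problem can be attacked with a shift-invert Arnoldi iteration as sketched below. The matrices here are small random stand-ins for the discretized linearized operators, which in this work are assembled and solved within Nektar++, so the sketch only illustrates the solution strategy, not the actual operators.

```python
# Sketch of solving a generalized eigenvalue problem A q = sigma B q
# with shift-invert Arnoldi; A and B are random stand-in operators.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 400
A = sp.random(n, n, density=0.02, random_state=1) - 0.5 * sp.identity(n)  # nonsymmetric operator
C = sp.random(n, n, density=0.02, random_state=2)
B = sp.identity(n, format="csc") + 0.005 * (C + C.T)                       # symmetric positive definite "mass" matrix

shift = 0.05   # guessed shift near the expected leading growth rates
vals, vecs = eigs(A.tocsc(), k=6, M=B.tocsc(), sigma=shift, which="LM")    # eigenvalues closest to the shift

order = np.argsort(-vals.real)
print("growth rates Re(sigma):", np.round(vals.real[order], 4))
print("frequencies  Im(sigma):", np.round(vals.imag[order], 4))
```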
ยง.ยง Instabilities of katabatic speaker-wire vortices at different slope angles
To investigate the secondary linear instability of speaker-wire vortices, the eigenvalues with the largest real parts over a range of streamwise wave numbers k_x are computed for different stratification perturbation parameters Π_s of the base flow at different slope angles α. The transverse wavelength λ_y of the base flow vortices was chosen to be the most dangerous wavelength at the given Π_s, as determined from linear stability analysis of the 1D Prandtl flow profile, which is shown in Fig. <ref>. Thus, in contrast to the study for vortices under anabatic conditions <cit.>, the focus here is on the effects of the slope angle instead of the vortex spacing. Throughout our analysis, we also assume a constant Prandtl number Pr = 0.71 for all the cases.
We observe from Fig. <ref> that the slope inclination ฮฑ exerts a significant effect on the optimal transverse wavelength of the base vortices when all other flow parameters are held constant. Hence, we expect that the secondary instabilities arising from these base flow vortices will also be qualitatively different depending on ฮฑ. At a given slope angle and Prandtl number, three independent parameters determine the growth rates and oscillation frequencies of the secondary instability, which are the stratification parameter ฮ _s, the streamwise instability wave number k_x=2ฯ/ฮป_x and the transverse base flow wave number k_y=2ฯ/ฮป_y, where ฮป_y=b_y equals the spacing of the base speaker-wire vortex. In the following, we will separately present and discuss the results of the stability analysis for base speaker-wire vortex at four different slope angles ฮฑ, going from ฮฑ=3^โ to ฮฑ=30^โ.
The base flow used for modal analysis of each case consists of two speaker-wire vortices arising from the primary linear instability mode, i.e. the transverse domain size is twice the wavelength of the primary vortex instability as described in <cit.>. This choice of domain size ensures that potential subharmonic eigenmodes with twice the transverse wavelength of the base vortices can be picked up.
ยง.ยง.ยง Shallow slope: ฮฑ=3^โ
At the shallow slope α=3^∘, it turns out that the transverse spatial period of all the secondary instability modes equals the transverse wavelength λ_y of the base vortices, which is also the spacing b_y between the cores of adjacent vortex pairs; hence they are all fundamental modes. The shape of this mode on the transverse plane is visualized via the contour plot of the streamwise vorticity ω_x shown in Fig. <ref>. This observation is in contrast with the anabatic slope flow at the same angle α=3^∘, where the most dominant instability is a subharmonic mode <cit.>.
Figure <ref> shows the growth rates of the most unstable secondary modes as a function of streamwise wavenumbers k_x within the range [0,0.15] for four different values of stratification parameter ฮ _s =1.6,1.7,1.8,1.9. The growth rate of any mode at any streamwise wavenumber k_x grows with increasing ฮ _s, which implies a stronger surface cooling for the katabatic slope flow.
2-D modes have zero longitudinal wave number, i.e. k_x=0, and thus only vary on the transverse yz plane but are constant along the streamwise direction.
For the 3-D modes, which have non-zero wave numbers k_x>0, we observe from figure <ref> that at large wave numbers near k_x=0.15, the growth rates tend to increase with decreasing k_x until reaching a maximal value exceeding the 2-D growth rate (k_x=0) at an optimal wave number k_x ≈ 0.07, and from that point on they decrease with decreasing wavenumber to reach the 2-D growth rate at k_x=0. In contrast to both the subharmonic and fundamental eigenmodes of speaker-wire vortices under anabatic conditions at the same slope angle of α=3^∘ <cit.>, the most dominant fundamental modes under katabatic conditions are clearly 3-D, since the growth rate of the mode at the optimal non-zero wave number is an order of magnitude larger than that of the zero-wavenumber 2-D mode.
All 3-D fundamental modes with positive longitudinal wavenumbers k_x>0 are oscillatory whose frequency increases monotonically with growing k_x or decreasing wavelength, as shown in figure <ref>. The 2-D mode with k_x=0 is stationary, i.e. with zero imaginary part of its eigenvalue. At small streamwise wavenumbers k_x, the oscillation frequencies of the 3-D mode decrease to zero with decreasing k_x to converge to the stationary 2-D mode at k_x=0. As figure <ref> shows, within the small wave number range, the normalised frequencies of all three cases shown here appear to obey a simple linear dispersion relation given by (ฯ)=ฮทยท k_x, where ฮทโ 0.025 is empirically determined to fit all three curves. The accuracy of this fit shows that the group speed of the long-wave fundamental vortex instabilities given by c=โ(ฯ)/โ k_x=ฮท is nearly constant and equals ฮท =0.025. However, this simple relation no longer holds for modes with wave numbers k_x>0.05 whose oscillation frequencies increase faster than linear with growing wave numbers.
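The group speed η in the linear dispersion relation can be estimated from the computed (k_x, Im σ) pairs by a least-squares fit through the origin, as in the following sketch; the data points here are synthetic placeholders rather than values from the stability calculation.

```python
# Sketch of estimating the group speed eta in Im(sigma) = eta * k_x
# from synthetic (k_x, frequency) pairs.
import numpy as np

k_x = np.array([0.005, 0.01, 0.02, 0.03, 0.04, 0.05])
freq = 0.025 * k_x + 1e-5 * np.random.default_rng(1).normal(size=k_x.size)

# least-squares slope through the origin
eta = np.dot(k_x, freq) / np.dot(k_x, k_x)
print(f"estimated group speed eta ~ {eta:.3f}")
```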
Existence of both 2-D and 3-D fundamental vortex instabilities has also been identified in the study of co-rotating Stuart vortex arrays by <cit.> or as instabilities in a shear layer by <cit.>, where they are shown to be responsible for the displacement or bending of neighboring vortices. In contrast to the anabatic slope flow at a slope angle of α=3^∘ <cit.>, there are no subharmonic modes in the katabatic slope flow with angle α=3^∘. As will be shown later, the fundamental modes for this case as well as at larger slope angles are all anti-symmetric. This suggests that at the initial onset of instabilities, the neighboring vortex tubes will bend and move synchronously in a parallel fashion. This is confirmed by direct numerical simulations via solving the full nonlinear governing equations, as shown in the animation available as supplementary movie 1. From the animation, one can see the consistent growth of the anti-symmetric fundamental mode, which eventually generates new structures on top of the original vortices.
ยง.ยง.ยง Moderately steep slopes ฮฑ=12^โ
At a slope angle of ฮฑ=12^โ, both fundamental and subharmonic secondary instabilities are found to emerge. The transverse wavelength of fundamental mode equals the spacing b_y between adjacent base flow speaker-wire vortices, whereas the subharmonic mode have transverse wavelengths twice as large. Hence the additional appearance of the subharmonic mode is a consequence of the increasing slope angle compared to the shallow slope at ฮฑ=3^โ.
The shapes of the subharmonic and fundamental modes on the transverse plane are visualized together with the base flow vortices via the contour plot for the streamwise vorticity ฯ_x as shown in Figs. <ref> and <ref>. It can be seen that compared to the 3^โ case as shown in Fig. <ref>, the base vortices are around half as wide along the transverse direction, but also clearly taller in the vertical. As per definition, the subharmonic modes have a transverse wavelength twice as large as their fundamental counterparts.
Figure <ref> presents the growth rates of the most unstable fundamental modes as a function of streamwise wavenumbers k_x within the range [0,0.15] for four different values of normalized surface heat flux ฮ _s =2.6,2.7,2.8,2.9.
The growth rates of the strongest subharmonic modes for streamwise wavenumbers k_x within [0,0.03] are shown in Fig. <ref>. The smaller wavenumber range for the strongest subharmonic modes displayed here implies that their streamwise wavelengths are longer compared to the fundamental modes. For comparison, the growth rate for the strongest subharmonic mode at ฮ _s=2.6 is also displayed together with all the fundamental modes Fig. <ref>.
Compared to the shallow slope with ฮฑ=3^โ, larger ฮ _s values are needed to sustain the base vortices that arise as the primary roll instability from the 1D Prandtl slope flow profile <cit.>. It is evident from Fig. <ref> that a higher value of ฮ _s, implying a larger normalized surface heat flux magnitude, increases the growth rate of all fundamental modes at any given wave number. However, the same ฮ _s dependence is not true for the subharmonic modes. As Fig. <ref> shows, the growth rate for subharmonic modes attains its maximum at ฮ _s=2.7 and is smaller for larger as well as smaller ฮ _s values.
From figure <ref>, we observe that for modes with sufficiently large wave numbers k_x>0.01, the fundamental mode is stronger than its subharmonic counterpart. The fundamental modes attain the strongest growth at an optimal non-zero wave number k_x between 0.08 and 0.1, and they become monotonically weaker with decreasing wave number below that value. On the other hand, as we observe from figure <ref>, the subharmonic modes gain strength with decreasing wave number such that the most dominant mode is purely 2-D at k_x=0. Hence, for very large streamwise wavelengths with k_x<0.01, fundamental modes are weaker than their subharmonic counterparts.
From figure <ref>, we observe that all fundamental modes with positive streamwise wavenumbers k_x>0 are oscillatory. For k_x=0, the fundamental modes are all stationary, and their frequencies as a function of k_x attains one local maximum in the low-wavenumber range at k_xโ 0.004 and begin to increase with growing wavenumbers for k_x>0.012, as shown in figure <ref>.
However, at higher wavenumbers beyond k_x=0.1, this trend no longer holds, as the fundamental mode frequencies either stagnate (for ฮ _s=2.8,2.9) or even slightly decrease (for ฮ _s=2.6,2.7) with increasing k_x. On the other hand, figure <ref> shows that the frequency of subharmonic modes increases monotonically with growing wavenumber. At k_x=0, the 2-D subharmonic mode can be seen to be stationary. As Fig. <ref> shows, for wavenumber values less than k_x=0.01, the normalised frequencies for all four ฮ _s values fit a linear dispersion relation where the group velocity given by ฮท=โ(ฯ)/โ k_x increases with growing ฮ _s from ฮท=0.011 at ฮ _s=2.6 to ฮท=0.05 for ฮ _s=2.9. This differs from the behavior of subharmonic modes in anabatic slope flows where the group velocity ฮท remains the same for different ฮ _s values.
Fig. <ref> shows that at larger wavenumbers, the linear dispersion relation no longer holds because the oscillation frequencies increase at a faster rate than linear rate with growing wavenumbers for k_x>0.01.
The above-mentioned properties of fundamental and subharmonic modes have significant impact on the dynamics of the base vortices. Since the strongest subharmonic mode is 2-D, we can expect fusion between neighboring vortex pairs instead of wavy reconnections that result in vortex rings like in the Crow instability found in <cit.>. The fundamental modes which dominate at shorter wavelengths are antisymmetric and will bend each individual vortex parallel to its adjacent neighbors.
This co-existence of both dynamics can be seen from animations obtained from direct numerical integration of the full nonlinear governing equations as shown in the supplementary movie 2. The animation displays the initial presence of the anti-symmetric fundamental mode which gradually gives way to the subharmonic mode dynamics which moves neighboring vortex pairs closer for mergers.
ยง.ยง.ยง Steep slopes ฮฑ=22^โ
For even steeper slopes with an angle of ฮฑ=22^โ, subharmonic and fundamental secondary instabilities again manifest themselves. The typical shapes of these two different kinds of modes on the transverse plane are visualized alongside the base flow vortices via the contour plot for the streamwise vorticity ฯ_x as shown in Figs. <ref> and <ref>. Compared to the previous two cases at smaller angles as shown in Fig. <ref> and Figs. <ref>-<ref>, the base vortices are narrower along the transverse direction, but also slightly taller in the vertical direction.
The growth rates of the most unstable fundamental modes as a function of streamwise wavenumbers k_x within the range [0,0.2] are shown
for four different values of normalized surface heat flux ฮ _s =3.9,4.1,4.3,4.5
in figure <ref>. Again, a higher Π_s value than at the lower slope angles is needed to trigger and maintain the primary roll instability for the base vortices. The growth rates of the strongest subharmonic modes for streamwise wavenumbers k_x within [0,0.04] are shown in figure <ref>. It can be seen that a higher value of Π_s strengthens all instability modes at any given wave number. Comparing the growth rate plots shown in Figs. <ref> and <ref> side by side, we observe that the fundamental and subharmonic modes are of similar strength across all wave numbers. The main distinction between the two mode types is that the strongest fundamental modes are 3-D and achieve their highest growth at an optimal non-zero wave number k_x between 0.02 for Π_s=3.9 and 0.11 for Π_s=4.5, displaying a clear trend of increasing streamwise wavelength with decreasing Π_s value. On the other hand, the subharmonic modes become monotonically stronger with decreasing wave number, such that the most dominant mode is purely 2-D at k_x=0, as at the previously studied slope angles.
All the fundamental modes with non-zero streamwise wavenumbers studied in this case are oscillatory. However, the two-dimensional mode with k_x=0 is stationary. As shown in figure <ref>, within the very small wavenumber range 0<k_x<0.01, the frequencies for modes with different ฮ _s values attain a local maximum at k_xโ 0.005. In the wavenumber range 0.01<k_x<0.08, the fundamental mode frequencies increase with growing wavenumber. However, at higher wavenumbers k_x>0.1, this trend no longer holds, and the frequencies even slightly decrease with increasing k_x. In contrast, figure <ref> indicates that the frequency of subharmonic modes is a monotonically growing function of their wavenumber k_x. All 2-D subharmonic modes (k_x=0) are stationary, and for the low wavenumber range, the frequencies for all four ฮ _s values can be described via a linear dispersion relation given by (ฯ)=ฮทยท k_x. In contrast to the previous case at smaller slope angle, however, the group velocity ฮท decreases with growing ฮ _s from ฮท=0.045 at ฮ _s=3.9 to ฮท=0.007 for ฮ _s=4.5. At higher wavenumbers beyond k_x=0.03, the oscillation frequencies of the subharmonic modes start to grow slower than the previous linear rate with increasing wavenumbers.
From these computed properties of fundamental and subharmonic secondary instabilities, we can derive helpful inferences about the dynamics of vortices at the steep slope angle of ฮฑ=22^โ.
Since the strongest subharmonic modes dominate at lower wave numbers, we can expect long-wave reconnections or pure mergers between neighboring vortex pairs involving four vortices. The fundamental modes, which are stronger at shorter wavelengths, will bend each individual vortex parallel to its adjacent neighbors. In the full nonlinear vortex evolution, however, it turns out that the subharmonic modes appear to have a stronger impact on the vortex dynamics than predicted from linear stability analysis, as vortex mergers and reconnections seem to be the dominant feature of such flows, with little signature of the wavy bending associated with the fundamental modes. This is very similar to the dynamics at higher slope angles as described below and shown in the animation contained in supplementary movie 3.
ยง.ยง.ยง Very steep slopes ฮฑ=30^โ
The subharmonic and fundamental instabilities of speaker-wire vortices at the steepest slope in this study with ฮฑ=30^โ are visualized together with the base vortices via the contour plot for the streamwise vorticity ฯ_x on the transverse plane in Figs. <ref> and <ref>.
When compared to the previous cases at smaller angles as shown in Fig. <ref>, Figs. <ref>-<ref> and Figs. <ref>-<ref>, it can be seen that the base flow vortices continue to shrink along the transverse direction and grow in the vertical with increasing slope angle.
As a consequence, the transverse wavelengths of the fundamental and subharmonic modes are also clearly less than that of the modes at lower slope angles.
The growth rates of the most unstable fundamental modes as a function of streamwise wavenumbers k_x within the range [0,0.25] for four different values of normalized surface heat flux ฮ _s =5.5,5.7,5.9,6.1 are shown in figure <ref>. The growth rates of the strongest subharmonic modes for streamwise wavenumbers k_x within [0,0.1] are shown in figure <ref>. Similar like in the previous cases, the strongest subharmonic instabilities have larger streamwise wavelengths than their fundamental counterparts.
For sufficiently small wavenumbers, i.e. 0<k_x<0.06, it can be seen that the subharmonic modes are clearly stronger than the fundamental ones, with the distinctively dominating 2-D subharmonic mode at k_x=0. However, since the growth rate of subharmonic modes decays rapidly with growing wavenumber whereas the growth rates of fundamental modes are far less sensitive to wavenumber variation, at sufficiently large wavenumbers k_x>0.1, only the fundamental modes have positive growth rates.
All fundamental modes with non-zero longitudinal wavenumbers k_x>0 are oscillatory; their frequencies are doubly-parabolic functions of k_x with one local maximum in the low-wavenumber range at k_xโ 0.01 and a global maximum at k_xโ 0.15, as shown in figure <ref>. On the other hand, figure <ref> shows that the frequencies of subharmonic modes have a less complicated relationship to their wavenumbers. All 2-D subharmonic modes(k_x=0) are stationary, and for wavenumber values less than k_x=0.04, the normalised frequencies for all four ฮ _s values seem to fit a linear dispersion relation given by (ฯ)=ฮทยท k_x, where the group velocity is ฮท=0.054 for ฮ _s=5.5 and decreases with growing ฮ _s to ฮท=0.032 at ฮ _s=6.1. However, at larger wavenumbers k_x>0.04, this linear relation no longer holds true. For ฮ _s=5.5, the frequency increases at a smaller rate with growing k_x whereas for the cases with larger ฮ _s values, the frequency starts to decrease with growing wavenumber when k_x>0.06.
From these properties of fundamental and subharmonic vortex instabilities at the very steep slope, we may infer that 2-D and long-wave subharmonic modes are the dominant driver of vortex dynamics. They would manifest themselves as pure mergers or long-wave reconnections between neighboring vortex pairs. Even though the linear stability results may suggest that the fundamental modes should be visible at shorter wavelengths, in reality they are overshadowed by the effect of the far stronger long-wave subharmonic modes.
This can be seen from animations obtained from direct numerical integration of the full nonlinear governing equations in which the dominating dynamics are vortex mergers and long-wave instabilities, as shown in the animation available as supplementary movie 3.
ยง.ยง Vortex dynamics due to fundamental and subharmonic modes
The role of fundamental instabilities (i.e. modes which have the same transverse spatial period as the base flow) on the dynamics of an array of longitudinal rolls has been extensively documented in previous studies.
As an example, they have also been identified as instabilities of Rayleigh-Benard convection rolls which aim at distorting the structure and spacing of the rolls to bring them closer to the optimal wavelength <cit.>.
Other well-known representatives for such modes include the Crow instabilities which are responsible for the bending and reconnection of a vortex pair suspended in quiescent air <cit.>.
The effect of a fundamental instability on vortices at a slope angle of 12^โ, which is representative for fundamental modes at other angles as well, is visualised in Fig. <ref>. As can be seen, this 3-D fundamental mode causes sinusoidal bending and distortion of each speaker-wire vortex.
It is worth noticing that the fundamental modes for speaker-wire vortices are anti-symmetric, i.e. they can only bend both vortices within a pair along the same direction, as shown in Fig. <ref>.
Like their fundamental counterparts, 2-D and 3-D subharmonic vortex instabilities which have twice the transverse wavelength as the base flow also play a prominent role in shaping the vortex dynamics, such as in co-rotating Stuart vortex arrays <cit.> or as secondary instabilities in a shear layer by <cit.>, where they are shown to be responsible for the merging of neighboring vortices. As visualised in Fig. <ref>, the 3-D secondary subharmonic modes of the speaker-wire vortices in katabatic slope flows bend adjacent speaker-wire vortices in opposite directions as to facilitate their reconnection in the 3-D case and merger for the 2-D mode. In a recent investigation, the presence of long-wave vortex reconnections associated with the low-frequency content in flow spectra has been observed in the DNS of steep katabatic slope flows with angle ฮฑ>20^โ <cit.>.
We observe from Fig. <ref> and <ref> that both the fundamental and subharmonic modes are anti-symmetric within a single counter-rotating pair, i.e. they bend the two sister rolls of the same pair along the same direction, thus preventing a vortex reconnection or merger of neighboring vortices within the same pair, in contrast to what is observed for a single vortex pair with Crow instability <cit.>. This means that a single vortex pair (i.e. a speaker-wire vortex) can remain in its basic pair structure even after the initial onset of instabilities, thus justifying its designation as a coherent vortex structure. Similar vortex structures which remain stable and coherent over long wavelengths have been observed in the Langmuir vortices on the surface of seas and oceans as described in <cit.>.
As outlined in the previous discussions, the properties of secondary modes for steady speaker-wire vortices are heavily influenced by the size of the slope angle α. This in turn translates into different vortex dynamics at different slope angles. For the four different slope angles α=3^∘,12^∘,22^∘,30^∘ studied here, our analysis has shown that with increasing values of α, the subharmonic modes become more dominant relative to their fundamental counterparts. At shallow slopes with α=3^∘, only fundamental modes are supported, whereas at the steepest slope with α=30^∘, the subharmonic modes have the largest growth rates and are clearly stronger than their fundamental counterparts in the low wavenumber range.
When α takes on an intermediate value such as between 10^∘ and 20^∘, both fundamental and subharmonic modes have similar growth rates at stratification parameter Π_s not far above the linear stability threshold. To track the further evolution of the vortex dynamics beyond the initial destabilization by the instability modes, we have carried out a series of 3-D numerical simulations based on the full nonlinear set of equations (<ref>)-(<ref>), initializing the flow field with an array of at least 4 base vortex pairs upon which a fundamental or subharmonic mode with 10% of the base flow strength has been added. The simulation domain and boundary conditions for the cross-slope yz plane are the same as described in section <ref>, while 32 Fourier modes have been used to resolve the along-slope x direction, which is sufficient for our purpose to study the higher order vortex dynamics before the onset of full turbulence. The animated simulation in supplementary movie 1 shows the initial growth of the fundamental mode in longitudinal rolls at the slope angle α=3^∘, which later leads to novel structures on top of the original base vortices. Supplementary movie 2 shows the initial vortex dynamics at α=12^∘ due to the anti-symmetric fundamental mode, which later makes way for a long-wave subharmonic mode that merges two adjacent vortex pairs.
The animation shown in supplementary movie 3 illustrates how the subharmonic mode at α=30^∘ causes vortex reconnections and mergers between rolls from two adjacent pairs, without any apparent signature of a short-wave fundamental mode such as sinusoidal bending of vortices.
ยง.ยง Comparison of secondary vortex modes under katabatic and anabatic conditions
In our earlier work <cit.>, we have established that in anabatic slope flows, the stationary roll mode is the dominant primary instability only at slope angles less than 10 degrees, whereas at steeper slopes α > 10^∘, the 2-D travelling wave mode replaces it as the stronger instability. On the other hand, under katabatic slope flow conditions, the stationary longitudinal rolls can emerge naturally as a result of the strongest instability mode over a wide range of slope angles up to 70 degrees. Thus, a comparison between the dynamics of longitudinal rolls under these two conditions is only possible for a narrow range of shallow slopes less than 10^∘. In this work, the slope angle α=3^∘ is chosen as the common reference point to contrast anabatic and katabatic slope flows, which is a realistic slope angle that can be observed in actual terrains. For further comparison purposes, we also introduce vortices formed under katabatic conditions at the steep slope angle α=30^∘.
To illustrate the most significant differences between katabatic and anabatic slope flow conditions, we show the optimal transverse wavelengths of the primary roll instability for the one-dimensional Prandtl flow profile at different slope angles as a function of ฮ _s in Fig. ย <ref>. The aforementioned trend of decreasing vortex spacing with increasing slope angle for katabatic flows is clearly evident, and for both katabatic as well as anabatic conditions, the optimal vortex width decreases with increasing ฮ _s value.
At the shallow slope with angle ฮฑ=3^โ where steady vortices can form under both katabatic and anabatic conditions, we observe from Fig. ย <ref> that for the same value of ฮ _s=1.9 but with opposite signs of surface heat flux, the transverse wavelength ฮป_y of the most dominant vortex in the katabatic case is an order of magnitude larger than under anabatic conditions. This observation is visualized in Fig. <ref>, which displays the streamwise vorticity contours of both anabatic and katabatic vortices at ฮฑ=3^โ, and it is obvious that the vortices formed under katabatic conditions are multiple times wider and taller. Fig. <ref> also shows that only at a far steeper slope angle of ฮฑ=30^โ do the katabatic vortices assume a similar size as their anabatic counterparts at ฮฑ=3^โ.
The growth rates of secondary vortex instabilities under katabatic and anabatic conditions are displayed in Fig. ย <ref>.
For the same slope angle of 3^โ and stratification parameter ฮ _s=1.9, there are no instability modes for the base vortices at their most preferred width under anabatic conditions, whereas the katabatic configuration is subject to a fundamental secondary instability. Hence, a higher value of ฮ _s=2.3 is chosen for the anabatic slope flow in order to compare its vortex instability against the katabatic conditions at ฮ _s=1.9.
As described previously, there exists no subharmonic mode for katabatic vortices at ฮฑ=3^โ, whereas the anabatic vortices are susceptible to both fundamental and subharmonic instabilities. Fig. ย <ref> shows that even though the katabatic configuration has a lower ฮ _s value, its secondary instability exhibits an up to four times larger growth rate than the strongest modes for anabatic vortices. Another key distinction is that the most dominant mode in the katabatic case is clearly 3-D since its highest growth rate at the non-zero optimal streamwise wavenumber is clearly larger than the growth rate of the 2-D mode with zero wavenumber. There is no such behavior in the anabatic case where for both fundamental and subharmonic modes, the largest growth rates are attained at the lowest wavenumbers, and the growth rates remain relatively constant for low wavenumbers. It is interesting to notice that for katabatic vortices at ฮฑ=30^โ which are of a similar size as the anabatic vortices at the shallow ฮฑ=3^โ, the growth rates of the fundamental and subharmonic modes at the low wavenumber range are comparable to those under anabatic conditions, which indicates that the size and shape of the base vortices rather than the slope angle directly exerts a major influence on secondary mode growth for long-wave modes.
The oscillation frequencies of secondary vortex instabilities under katabatic and anabatic conditions are plotted in figureย <ref>. It is evident that the secondary modes under anabatic conditions have clearly higher frequencies at all wavenumbers than those of modes for katabatic vortices. A major distinction is that while the frequencies of both subharmonic and fundamental modes in the anabatic case fit a linear dispersion relation over the entire range of streamwise wavenumber 0<k_x<0.08, no such simple dependency exists for the frequencies of secondary modes under katabatic conditions which can increase at different rates or even decrease with growing k_x.
ยง CONCLUSIONS
We carried out a bi-global linear stability analysis to gain insights into the dynamics of longitudinal rolls, a.k.a. speaker-wire vortices, that emerge due to a primary instability from the one-dimensional katabatic slope flows over a range of slope angles. The katabatic conditions are the analogue to the anabatic slope flow conditions, whose effects on vortex dynamics are described in our prior work <cit.>.
Our base flow configuration under katabatic conditions is uniquely different than other well-known counter-rotating vortex pairs <cit.>. The katabatic Prandtl slope flow includes an independent background stratification that is at an angle to the cooled solid slope surface. Another unique feature of our configuration is the fact that the stationary longitudinal rolls serving as base flow have three non-zero velocity components even though the flow field is still 2-D, i.e. it only varies along the vertical and transverse dimensions. As a result, it is not too surprising that the instability dynamics of speaker-wire vortices investigated in the current work are distinct from the hitherto known vortex-pair instabilities.
Similar to our earlier investigation of speaker-wire vortices under anabatic conditions, we have established that each counter-rotating vortex pair forms a coherent, stable unit (i.e. a speaker-wire vortex) that can lose its individual rolls to mergers or reconnections only in the presence of another speaker-wire vortex. Our results for secondary instabilities of speaker-wire vortices have shown that the slope angle α as well as the dimensionless stratification perturbation parameter Π_s, which can be interpreted as a normalized surface heat flux or the strength of the surface thermal forcing relative to the ambient stratification, play a major role in determining which modes are the most significant in destabilizing vortex rolls formed in katabatic slope flows. The major distinction between vortex instabilities in the anabatic and katabatic configurations is the fact that in the latter case, stationary vortices can emerge from the 1D Prandtl base flow profile for all slope angles less than around 70^∘, whereas this is restricted only to non-steep slopes less than 10^∘ for anabatic slope flows.
For slope angles in the range of 3^∘-30^∘ at which the 1D katabatic Prandtl slope flow profile is naturally susceptible to the stationary roll mode as primary instability, the most dominant vortex instability shifts from the fundamental mode towards the subharmonic mode with longer streamwise wavelengths as the slope steepness increases. For a shallow slope with α=3^∘, the only unstable mode is the fundamental instability, no matter how strong the imposed surface heat flux is. This contrasts with vortices in anabatic slope flows at the same angle, for which the subharmonic instability is dominant. For larger slope angles, the subharmonic mode begins to emerge as well. At α=12^∘, the subharmonic mode is still clearly weaker than its fundamental counterpart except at the smallest wavenumbers. For even steeper slopes with α = 22^∘, both subharmonic and fundamental modes are of comparable strength over a broad range of wavenumbers. At the steepest slope angle α=30^∘ investigated in this work, the subharmonic mode is clearly more dominant than its fundamental counterpart except at the largest wavenumbers. The main difference between the two mode types is that the fundamental instability is a 3-D oscillatory mode whose growth rate at the optimal nonzero streamwise wavenumber is many times larger than the growth rate of the 2-D zero-wavenumber mode. In contrast, the subharmonic mode attains its strongest growth at zero streamwise wavenumber, where its oscillation frequency is also zero; hence it is a 2-D stationary mode. This mode exhibits its two-dimensional nature by directly merging two entire rolls from two adjacent but separate speaker-wire vortices without any bending in the streamwise direction.
The 3-D fundamental mode, with a transverse wavelength equal to the vortex separation of the base speaker-wire vortices, is anti-symmetric and bends all rolls in all speaker-wire vortices along the same direction; thus the distances between them remain the same as in the original base configuration before the onset of instability. Due to the absence of any symmetric instabilities, the fundamental mode cannot bend the two vortices within a pair towards each other. However, after a sufficiently long time, the long-wave or two-dimensional subharmonic instability will eventually manifest itself to move two neighboring speaker-wire vortices towards each other, resulting in the merger between two adjacent rolls which are not from the same speaker-wire vortex in the base configuration, as described above.
Our results demonstrate that the only possible vortex merger or reconnection dynamics under Prandtl's katabatic slope flow model is triggered by a long-wave subharmonic mode which requires four vortex rolls in two speaker-wire vortices. In contrast, one single speaker-wire vortex is able to maintain its two-roll structure even after the onset of the fundamental instabilities. The dependence of vortex dynamics on the slope angle is an extension of our earlier study for vortices in anabatic slope flows which focused on the effect of vortex width and separation <cit.>. Hence, the vortex dynamics under katabatic conditions investigated in this work are expected to contribute toward a better understanding of turbulent transition in stably stratified boundary layers on non-flat surfaces.
Supplementary movies. 3 Supplementary movies have been attached.
Acknowledgments:
This material is based upon work supported by the National Science Foundation under Grant No. (1936445). Research was sponsored in part by the University of Pittsburgh, Center for Research Computing through the computing resources provided.
Declaration of Interests: The authors report no conflict of interest.
|
http://arxiv.org/abs/2306.17708v1
|
20230630144649
|
Elmendorf's Theorem for Diagrams
|
[
"Robert C. Housden Jr"
] |
math.AT
|
[
"math.AT",
"55P91"
] |
The notion of a continuous G-action on a topological space readily generalizes to that of a continuous D-action, where D is any small category. Dror Farjoun and Zabrodsky introduced a generalized notion of orbit, which is key to understanding spaces with continuous D-action. We give an overview of the theory of orbits and then prove a generalization of “Elmendorf's Theorem,” which roughly states that the homotopical data of a D-space is precisely captured by the homotopical data of its orbits.
This material is based upon work supported by the National Science Foundation under Grant No. DMS-1811189.
Elmendorf's Theorem for Diagrams
Robert C. Housden Jr
June 30, 2023
================================
ยง INTRODUCTION
Equivariant homotopy theory is the study of topological spaces with the action of a (usually finite) group G. Any G-space X automatically inherits an action via any subgroup H โค G, and the corresponding fixed-point subspaces X^H are key to understanding the structure of X. One concrete way to see this is via the celebrated result of โElmendorf's theorem,โ which was originally proven by Elmendorf <cit.>. The following is a reformulation due to Piacenza <cit.>:
[Elmendorf's theorem, classical version]
Let G be a topological group, and let ๐ช_G be the category of G-orbits (that is, G-spaces of the form G/H). Then, there is a Quillen equivalence between ^G and ^๐ช_G^op.
Our main result is a generalization of Elmendorf's theorem to “D-spaces”, functors from a small category D to Top. D-spaces generalize equivariant homotopy theory because every group can be viewed as a small category. For this reason, we'll refer to the action of D as “D-equivariance” or “diagram equivariance.” In the 1980s, D-spaces were studied to prove categorical facts in homotopy theory. For instance, the version of Elmendorf's theorem above uses the category D = 𝒪_G^op. In a similar vein, functors out of the poset ℕ are often used to define spectra. D-spaces are also interesting objects in their own right, and we will give an overview of some common examples in Section <ref>. In the meantime, let us state our main theorem:
[Elmendorf's theorem, diagram-equivariant version]
Let D be a small[That is, a category with an actual set of objects where (X,Y) is also always a set.] category, and let ๐ช_D be the category of D-orbits (that is, D-spaces X where (X) is terminal). Then, there is a Quillen equivalence between ^D and ^๐ช_D^op.
The key definition that facilitates the study of D-spaces is the aforementioned notion of D-orbit, which is originally due to Dror Farjoun and Zabrodsky in <cit.>. In our version of Elmendorf's theorem, the model structures on ^D and ^๐ช_D^op are defined precisely so that they track the orbits of D. Namely:
In the projective model structure on ^๐ช_D^op, weak equivalences (resp. fibrations) are those maps ฮฒ R โ S such that, for each object O โ๐ช_D, ฮฒ_O is a weak equivalence (resp. fibration).
In the ๐ช_D model structure on ^D weak equivalences (resp. fibrations) are those maps ฮฑ X โ Y such that, for each object O โ๐ช_D, ^D(O, ฮฑ) is a weak equivalence (resp. fibration).
Note that while D and ๐ช_D^op are both categories, the model structures on ^D and ^๐ช_D^op are quite different. When Piacenza was proving his version of Elmendorf's theorem, he only needed the projective model structure because he was only considering categories of the form ๐ช_G^op. In this paper, we're doing something quite different, which is to allow a small category D to replace the group G. Thus, for our theorem to work, we create a model structure that directly generalizes the one Piacenza uses on ^G. In fact, because the model structures on ^D and ^๐ช_D^op are both based on orbits, we actually prove a more general result:
[Main Theorem]
Let D be a small category, let โฑ be a collection of orbits containing all free D-orbits (those of the form D(d,-) for some object d โ D), and let ๐ช_โฑ be the category of D-orbits in โฑ. Then, there is a Quillen equivalence between ^D and ^๐ช_โฑ^op.
For our main theorem, we use model structures where the weak equivalences and fibrations only involve orbits in โฑ. In the case where D is a group, this generalization was proven by Stephan <cit.>; our proof follows much of the same argument. This generalization is especially useful in the diagram-equivariant setting because ๐ช_D is often a large category, but only a small set of orbit types appear in any given D-space.
We prove our diagram-equivariant Elmendorf's theorem with an eye toward equivariant stable homotopy theory. In the past decade, there have been approaches due to Barwick <cit.> and Guillou-May <cit.> defining equivariant spectra via โspectral Mackey functors.โ These are functors whose domain is a modified version of ๐ช_G^op. This author's PhD thesis <cit.> uses a version of spectral Mackey functors to define diagram-equivariant spectra. By contrast, older approaches (going as far back as Graeme Segal's paper <cit.> establishing equivariant stable homotopy theory) rely heavily on representation theory and the representation spheres S^V. The key limitation of the representation-based approach is that one can't easily define representation spheres when G isn't a compact Lie group. Working with orbits directly allows one to sidestep this limitation. After all, the classical version of Elmendorf's theorem holds for general topological groups. For this reason, attempts to define equivariant spectra for groups that aren't compact Lie, such as โค, have taken a more orbit-centric approach. While this paper focuses entirely on D-spaces, our envisioned applications are these orbit-centric versions of equivariant spectra.
ยง.ยง Outline
The paper is organized as follows: Section <ref> lays out the first properties, related definitions, and examples of D-spaces. Section <ref> explores the categorical properties of ^D. Section <ref> explains how D-orbits perform the same role in D-equivariant homotopy theory that subgroups H โค G do in G-equivariant homotopy theory. (These notions are indeed compatible when D is a group.) Section <ref> discusses the homotopy groups of D-spaces and how they are naturally indexed by the category of D-orbits. Section <ref> explores the notions of D-CW-complexes and D-cell complexes, which are also due to Dror Farjoun and Zabrodsky. Section <ref> contains the model structures on ^D and ^๐ช_โฑ^op, and Section <ref> uses them to prove our generalized Elmendorf's theorem.
ยง.ยง Acknowledgements
The contents of this paper formed the first half of my PhD thesis, and I'm grateful to my advisor, Mike Hill, for the many hours of discussion that led to these results. I'd also like to thank Anna Marie Bohmann for her many helpful comments on previous drafts of this paper.
ยง BASIC NOTIONS
Equivariance is often encoded as a continuous group action on a topological space. To view this more categorically, we recall that a group can be defined as a (small) category with one object where every morphism is an isomorphism. In this context, a space with a G-action is just a functor X G โ, where G is regarded as a category.
This definition would work just as well if G were any category, which leads us to:
Given a small category D, a D-space is a functor X: D → Top. A morphism of D-spaces α: X → Y is a natural transformation.
This categorification works just as well for other kinds of “objects with group action.” For instance, a D-set is a functor X: D → Set and a D-representation is a functor X: D → Vect_k, where Vect_k is the category of vector spaces over a fixed field, k. As with D-spaces, morphisms are natural transformations. We now repeat our key example, orbits, which generalize G-sets of the form G/H:
[<cit.>] Given a category D, we say that a D-space X D โ is an orbit if the colimit of X is terminal (that is, a one-point space).
We are especially interested in those orbits that we can analyze via the Yoneda lemma:
[<cit.>] Given a category D and object d, the free orbit of d, F^d, is the representable D-set (and discrete D-space) D(d,-).
For any object d โ D, the free orbit D(d,-) is indeed an orbit.
For any morphism f: d → d' with source d, f ∘ ๐_d = f. Thus, D(d,f): D(d,d) → D(d,d') sends ๐_d to f, so ๐_d and f are glued together in colim(D(d,-)). Because this holds for all f with source d (that is, all elements of ⨿_{d'} D(d,d')), the colimit of D(d,-) is a one-point set, meaning D(d,-) is an orbit.
When D is a group, the sole free orbit is isomorphic to D/{e} due to D only having one object. As for the other orbits, one can indeed check that D being a group implies that any D-orbit O is isomorphic to D/H for some subgroup H. However, things generally get much more exciting for non-group categories, even very simple ones. For example:
Let ๐ = (s → t) be the category with two objects (“s” for “source” and “t” for “target”) and one non-identity morphism f: s → t.
There is an equivalence of categories between the category of ๐-orbits and Top.
By the usual construction of colimits in , the colimit of X๐โ is given as the quotient (X_s โจฟ X_t)/ โผ, where โผ relates each x_s โ X_s to f(x_s) โ X_t. Note that each x_s โ X_s is related to exactly one point in X_t because f is a function. Thus, it's not possible for two distinct points in X_t to become glued together in the quotient, so any orbit X must have a singleton X_t. This then forces all points in X_s to be glued to the unique x_t โ X_t. Hence, a ๐-orbit is precisely a ๐-space X๐โ such that X_t has a single point. From this we observe that any continuous map ฮฑ_s X_s โ Y_s will yield a commutative square
\begin{tikzcd}
X_s \arrow[r, "\alpha_s"] \arrow[d, "X_f"'] & Y_s \arrow[d, "Y_f"] \\
X_t \arrow[r, "\alpha_t"'] & Y_t
\end{tikzcd}
whenever Y is an orbit. α is then uniquely determined. In other words, the functor from ๐-orbits to Top taking X to X_s is full, faithful, and essentially surjective (i.e., an equivalence).
We'll use finite ๐-orbits as a frequent source of examples, so it will be helpful to have simple notation for them:
For any n ∈ ℕ, [n] is the ๐-orbit {0, …, n-1} → {∗}.
The free orbits of ๐ are [0] โ
F^t = ๐(t,-) and [1] โ
F^s = ๐(s,-).
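As a quick worked illustration (ours, not from the original text), consider the orbit [2] and the possible maps between these small orbits:

[2] = ({0,1} → {∗}),  colim([2]) = {∗},

so [2] is indeed a ๐-orbit. Moreover, there is exactly one ๐-equivariant map [2] → [1] (collapse {0,1} to the single point of [1]_s) and exactly one map [0] → [1] (the empty map on s-components), but there is no map [1] → [0], since [1]_s is a point while [0]_s = ∅.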
For a general small category D, we can still have a lot of orbits to work with. In most situations, we only need to consider the orbits that actually appear in our D-space X. Let us introduce some vocabulary to describe these orbits.
For any topological space A, we will abuse notation and also refer to its corresponding constant D-space as A. (Thus, the statement A_d = A is perfectly valid.)
[<cit.>] Given a D-space X and a point x_d of X_d for some object d ∈ D, the orbit of x_d, O_x_d, is the D-space of points in X that get glued to x_d in colim(X). This can be identified with the following pullback: (The spaces on the bottom row are viewed as constant D-spaces, following Convention <ref>.)
\begin{tikzcd}
O_{x_d} \arrow[r, hook] \arrow[d] \arrow[dr, phantom, "\lrcorner", very near start] & X \arrow[d] \\
\{x_d\} \arrow[r, hook] & \operatorname{colim}(X)
\end{tikzcd}
Orbit types allow us to classify discrete D-spaces (that is, D-sets) as follows:
Given a small category D, any D-set X can be decomposed as the coproduct of its (necessarily discrete) D-orbits. Furthermore, this decomposition is unique up to reordering and isomorphic replacement of the factors.
Given a point x_d of X, each point x ∈ O_x_d is by definition glued to x_d in colim(X). Thus, each point of X is in precisely one orbit. This gives a disjoint union decomposition of X, which is precisely the claimed coproduct structure.
To see uniqueness, suppose X is isomorphic to both ⨿_{i ∈ I} O_i and ⨿_{j ∈ J} O_j, where each O_i and O_j is a D-orbit. The fact that these are both decompositions of X gives us an isomorphism
f: ⨿_{i ∈ I} O_i → ⨿_{j ∈ J} O_j.
Because isomorphisms preserve colimits, f induces an isomorphism of sets
g: colim(⨿_{i ∈ I} O_i) → colim(⨿_{j ∈ J} O_j).
Since colimits commute with coproducts, g can instead be viewed as a bijective function
g: ⨿_{i ∈ I} colim(O_i) → ⨿_{j ∈ J} colim(O_j).
But each colim(O_i) and colim(O_j) is a one-point set, so g corresponds to a bijection from I to J, which we will call h. The fact that h sends i to h(i) means that f sends points in O_i to points in O_h(i). Because h is a bijection, only the points in O_i can be sent to O_h(i). Thus, since f is an isomorphism, the restriction of f to O_i → O_h(i) must also be a bijection and hence an isomorphism. This shows the desired uniqueness.
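As a quick sanity check (an illustrative example of ours, not from the original), consider the ๐-set X with X_s = {a,b,c}, X_t = {x,y}, X_f(a) = X_f(b) = x and X_f(c) = y. In colim(X), a and b are glued to x while c is glued to y, so the orbit decomposition is

X ≅ O_x ⨿ O_y ≅ [2] ⨿ [1],

and by the proposition this decomposition is unique up to reordering and isomorphic replacement of the factors.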
ยง PROPERTIES OF TOP^D
Let us now explore the categorical properties of ^D:
For any small category D, ^D has all small limits and colimits.
This is immediate from the fact that has all small limits and colimits and the fact that limits and colimits in a functor category can be computed objectwise.
Next, we'd like see that ^D is nicely enriched in . What follows is largely a recap of <cit.>.
For any D-spaces X and Y, we topologize Top^D(X,Y) with the subspace topology from its inclusion into Top(⨿_d X_d, ⨿_d Y_d).
We can view Top^D as enriched in Top.
However, we can also view D(X,Y) as a D-space:
<cit.> For any D-spaces X and Y, we can view Top^D(X,Y) as a D-space where Top^D(X,Y)_d = Top^D(X × F^d, Y). For any morphism f: d → d' in D,
Top^D(X,Y)_f: Top^D(X × F^d, Y) → Top^D(X × F^d', Y)
is induced by the natural map
D(f,-):F^d' = D(d',-) โ D(d,-) = F^d.
This construction makes Top^D enriched in itself. Many times, we'll use the following notation for Top^D(X,Y):
For D-spaces X and Y, Y^X := Top^D(X,Y). Whether we wish for Y^X to be a space or a D-space will depend on context.
Most commonly, we'll be applying this notation to the context of the representable โfixed pointโ functor (-)^Y. Let's list a few of this functor's properties:
For any D-space X, the functor (-)^X (valued in either Top or Top^D) preserves limits.
This is immediate from the fact that (enriched) representable functors preserve limits.
For any D-orbit O, the functor (-)^O (valued in either Top or Top^D) preserves pushouts and coproducts.
We begin with pushouts: Consider any pushout Y ∪_W Z of D-spaces. Because colim(O) = {o} is a one-point space, a map f: O → Y ∪_W Z is factored by a map O → Y or O → Z based on whether the point f(o) lands in colim(Y) ⊆ colim(Y ∪_W Z) or in colim(Z) ⊆ colim(Y ∪_W Z). If f(o) lands in both colim(Y) and colim(Z), then O → Y and O → Z are both factored by a map O → W. In other words, (Y ∪_W Z)^O has the universal property of a pushout of Y^O ← W^O → Z^O, so (Y ∪_W Z)^O and Y^O ∪_{W^O} Z^O are naturally isomorphic as spaces. In other words, (-)^O preserves pushouts, at least when it's valued in Top.
Similarly, for any coproduct โจฟ_i โ I Y_i of D-spaces, a map
O → ⨿_{i ∈ I} Y_i
is factored based on which colim_{d ∈ D}(Y_i) ⊆ colim_{d ∈ D}(⨿_{i ∈ I} Y_i) the point f(o) lands in. Thus, (⨿_{i ∈ I} Y_i)^O has the universal property of ⨿_{i ∈ I} Y^O_i, at least when (-)^O is valued in Top.
To get the version valued in ^D, recall that X^O_d is the space ^D(F^d,X^O). Since F^d is an orbit, we conclude the pushouts and coproducts are preserved at each object. Since equivariant maps that have objectwise-isomorphisms are themselves isomorphisms, we conclude that the ^D-valued functor ( - )^O preserves pushouts and coproducts.
We can say more about this enrichment if we take Top to be the category of compactly generated weak Hausdorff spaces. This condition implies that, for any spaces A, B, C, we have an isomorphism of spaces
Top(A × B, C) ≅ Top(A, Top(B,C)).
As long as we have this condition, we get the following:
^D is a closed symmetric monoidal category, with monoidal structure given by objectwise Cartesian product.
We just need to confirm that there is a natural isomorphism of sets
^D(X ร Y,Z) โ
^D(X,^D(Y,Z)).
Given a natural transformation
ฮฑ:X ร Y โ Z,
we get a natural transformation
ฮฒ: X โ^D(Y,Z),
where
ฮฒ_d:X_d โ^D(Y,Z)_d = ^D(Y ร F^d,Z)
is defined by
[ฮฒ_d(x_d)](y_d',f) = ฮฑ(f(x_d),y_d').
(Here, f is a generic element of F^d(d'), meaning it's a morphism f:d โ d'.) The fact that we're working with compactly generated weak Hausdorff spaces ensures that each ฮฒ_d is continuous. We can recover ฮฑ from ฮฒ by setting
ฮฑ_d(x_d,y_d) = [ฮฒ_d(x_d)](y_d,๐_d).
Again, the fact that we're working with compactly generated weak Hausdorff spaces ensures that each ฮฑ_d is continuous.
If we apply the same argument to pointed compactly generated weak Hausdorff spaces, using the isomorphism
Top_∗(D ∧ E, F) ≅ Top_∗(D, Top_∗(E,F)),
for any pointed spaces D,E, and F, we get:
Top_∗^D is a closed symmetric monoidal category, with monoidal structure given by objectwise smash product.
ยง ORBITS VS. SUBGROUPS
In the group case, keeping track of orbits of G is essentially the same task as keeping track of subgroups of G. One way of making this precise is the following:
The category of G-orbits with G-equivariant maps, ๐ช_G, is equivalent to the category of subgroups of G with inclusions and conjugations for morphisms.
For general small categories D, the natural generalization of this proposition that uses “subcategory” instead of “subgroup” is not even remotely true. For ๐ in Definition <ref>, we saw in Proposition <ref> that the orbit category was equivalent to Top, but we can see there are only finitely many subcategories! Thankfully, we don't need to use subgroups to capture the equivariant structure, and our story can be explained purely in terms of orbits. Let's explore how to go about this; for the group case, this will involve translating notions that use the subgroup H ≤ G into the language of G-orbits G/H. But first, we'll need a definition:
Given a D-set T D โ, its translation category, B_D(T), is the category with objects given by elements of โจฟ_d โ D T_d and has morphism-sets defined by
B_D(T)(a,b) = {f โ D | T_f(a)=b }.
This construction is functorial in T.
When we pick D to be a group and T to be some orbit D/H, we get what is usually called the translation groupoid. This translation groupoid, B_D(D/H), is in fact equivalent (in the categorical sense) to H. Thus, the functor categories ^H and ^B_D(D/H) are categorically equivalent. Hence, we can talk about โrestrictedโ action of subgroups purely in terms of orbits: while any G-space X has a โrestrictedโ H action given by the inclusion H โค G, X also gives rise to a B_G(G/H)-space that encodes the same data. We can use a similar technique to discuss the fixed-point spaces of X: the set X^H of points in X that are fixed by the action of H can be identified with ^G(G/H, X).
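For a non-group illustration (a worked example of ours under the definition above), take D = ๐ and T = [2]:

Objects of B_๐([2]): 0, 1 ∈ [2]_s and ∗ ∈ [2]_t;
B_๐([2])(0,∗) = {f}, B_๐([2])(1,∗) = {f}, and all other non-identity hom-sets are empty,

so B_๐([2]) is the cospan category 0 → ∗ ← 1.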
ยง THE HOMOTOPY THEORY OF D-SPACES
To use algebraic invariants for D-spaces, we need to choose how much of the D-equivariant structure to capture. In classical (non-equivariant) homotopy theory, the nth homotopy group of a pointed space X, π_n(X), is usually viewed as consisting of the homotopy classes of pointed maps from S^n to X. If we want to do this equivariantly, X will have an action attached to it, and we need to decide what action S^n has. If we give S^n the constant “trivial” action (that is, viewing S^n as a functor D → Top_∗ that sends every object to the sphere S^n and every morphism to the identity map on S^n), then we're extremely limited in the power of our invariants. For instance:
Let C_2 denote the group of order 2, and let V be the m-dimensional orthogonal C_2-representation where the non-identity morphism acts via multiplication by -1. For any n, there is only one pointed C_2-equivariant map from S^n to S^V.
In other words, pointed C_2-equivariant maps from various S^n can't distinguish between the S^V above and a point. If we built a homotopy invariant out of the homotopy classes of such maps, we'd have a very weak invariant. A common solution to this problem is to instead build an invariant from the data of π_n(X^H) for all subgroups H ≤ G. This is the approach we'll adopt, using the fact that X^H ≅
^G(G/H,X) to make an orbit-theoretic statement. But first, let's note that the weakness of only considering spheres with trivial actions isn't unique to the group-equivariant case:
Let X be the ๐-space where X_s = {∗} and X_t = S^m. For any n, there is only one ๐-equivariant map from S^n to X. (Here, we're following Convention <ref> and treating S^n as a constant ๐-space, which is the same as saying that S^n has the “trivial” action.)
In both examples, the issue is orbit types: an equivariant map can only send points of orbit type O_1 to points of orbit type O_2 if there's a D-equivariant map from O_1 to O_2. When we used S^n with the trivial action, there was only one orbit type represented,[This is true for all connected categories. In general, a constant D-space has orbit types precisely corresponding to the connected components of D.] that orbit being C_2/C_2 in the first example and [1] in the second example. However, there were other orbit types present in the codomain, namely C_2/{e} and [0], respectively.
We can capture the homotopical data for these other orbit types by replacing S^n with the โfreeโ space S^n โง O_+. (O_+ is the pointed D-space obtained from O by adding a disjoint base point at every object.) Aside from potentially the base points, every point in S^n โง O_+ has orbit type O. By Corollary <ref>, an equivariant map from S^n โง O_+ to X is equivalent to the data of an equivariant map from S^n to X^O.
Thus, instead of having a single n-th homotopy group, we have one for each orbit. By the representability of homotopy groups, these can be arranged into a functor:
Let ๐ช_D be the category of D-orbits with D-equivariant maps. Given a pointed D-space X, its n-th equivariant homotopy group functor is a contravariant functor ฯ_n^*(X): ๐ช_D^opโ Grp given by
ฯ_n^O(X) = [S^n โง O_+, X]^D โ
[S^n, X^O]^D.
Here, [ - , - ]^D denotes the set of D-equivariant homotopy classes of maps. ฯ_n^O(X) is indeed a group when n โฅ 1 because the constant space S^n is automatically a cogroup object in the category of D-spaces with D-homotopy classes of maps. Let's now explore this invariant with a few examples for ๐:
Let X be a constant ๐-space. Then, ฯ_n^*(X) is the constant functor ฯ_n(X_t) (or equivalently, ฯ_n(X_s)).
Consider any morphism ฮฑ: (S^n โง O_+) โ X:
\begin{tikzcd}
(S^n \wedge O_+)_s \arrow[r, "\alpha_s"] \arrow[d, "{(S^n \wedge O_+)_f}"'] & X_s \arrow[d, "X_f"] \\
(S^n \wedge O_+)_t \arrow[r, "\alpha_t"'] & X_t
\end{tikzcd}
Since X_f is an identity morphism, ฮฑ_s is uniquely determined as the composite ฮฑ_t โ (S^n โง O_+)_f. Thus, the homotopy classes of ๐-equivariant maps from S^n โง O_+ to X can be identified with the non-equivariant homotopy classes of maps from (S^n โง O_+)_t to X_t. Now recall that when O is a ๐-orbit, O_t is a one-point space. Thus, (S^n โง O_+)_t โ
S^n, so ฯ_n^O(X) โ
ฯ_n(X_t). Because this identification can be made compatibly for each orbit, we conclude that ฯ_n^*(X) is a constant functor.
In the above example, we didn't get any interesting orbit data. This was just because X only had one orbit type. To see a more general behavior, let's revisit the case where X_s = {∗} and X_t = S^m:
Let X be the ๐-space with X_s = {∗} and X_t = S^m. Then, π_n^[0](X) = π_n(S^m) and π_n^O(X) = 0 for all other orbits O. This uniquely determines the structure maps.
We will use the notation of the previous example. Since X_s consists of a single point, the composite X_f โฮฑ_s must send all points of (S^n โง O_+)_s to the base point of X_t. When O is not the orbit [0], the map (S^n โง O_+)_f is surjective, which means ฮฑ_t must send all points of (S^n โง O_+)_t to the base point of X_t. Hence, ฯ_n^O(X) = 0. When O is the orbit [0], we know that (S^n โง O_+)_s โ
{} and (S^n โง O_+)_t โ
S^n. This means ฮฑ_t can be any map from S^n to S^m, so ฯ_n^[0](X) โ
ฯ_n(S^m).
The next example illustrates that we care about the structure maps of ฯ_n^*(X), not just its evaluation on objects.
Let X be the ๐-space with X_s=S^1 and X_t=S^1, but where X_f is the โdouble counter-clockwise windingโ map, hereafter denoted โ2.โ (If one views S^1 as the unit sphere in โ, this is the map given by z โฆ z^2.) Then, ฯ_n^O(X) = ฯ_n(S^1) for all orbits O. For any ๐-equivariant map g:O_1 โ O_2, ฯ_n^g(X) is multiplication by 2 when O_1 is the orbit [0] and O_2 is a non-[0] orbit; otherwise, ฯ_n^g(X) is the identity map.
For any morphism ฮฑ: (S^n โง O_+) โ X, we have the commutative square
\begin{tikzcd}
(S^n \wedge O_+)_s \arrow[r, "\alpha_s"] \arrow[d, "{(S^n \wedge O_+)_f}"'] & S^1 \arrow[d, "2"] \\
(S^n \wedge O_+)_t \arrow[r, "\alpha_t"'] & S^1
\end{tikzcd}
Note that (S^n โง O_+)_s โ
โ_i โ O_s S^n and (S^n โง O_+)_t โ
S^n. Thus, when n ≥ 2, any map α_t is nullhomotopic. Furthermore, we can lift any nullhomotopy on α_t to a compatible one on α_s because (S^n ∧ O_+)_f is an isomorphism on each of the wedge factors. Hence, π_n^O(X) = 0 when n ≥ 2, so we only need to consider the n=1 case.
Classically, we know that any pointed map from ⋁_{i ∈ O_s} S^1 to S^1 has at most one pointed lift through the covering map X_f = 2, since pointed lifts through a covering map are unique when they exist. In particular, there can be only one pointed lift of α_t ∘ (S^1 ∧ O_+)_f. Let's compare α_s with another such lift:
When O is not the orbit [0], there exists a map h: O_t → O_s such that O_f ∘ h = ๐_O_t. Thus, we have a map α̃_t: (S^1 ∧ O_+)_t → S^1 given by α̃_t = α_s ∘ (S^1 ∧ h). However, we can consider the diagram
\begin{tikzcd}
(S^1 \wedge O_+)_t \arrow[dr, "h"] \arrow[drr, dashed, "\tilde{\alpha}_t", bend left] \arrow[ddr, "{\mathbf{1}_{(S^1 \wedge O_+)_t}}"', bend right] & & \\
& (S^1 \wedge O_+)_s \arrow[r, "\alpha_s"] \arrow[d, "{(S^1 \wedge O_+)_f}"] & S^1 \arrow[d, "2"] \\
& (S^1 \wedge O_+)_t \arrow[r, "\alpha_t"'] & S^1
\end{tikzcd}
and compute that 2 ∘ α̃_t = 2 ∘ α_s ∘ h = α_t ∘ (S^1 ∧ O_+)_f ∘ h = α_t.
Thus, α̃_t ∘ (S^1 ∧ O_+)_f is a pointed lift of α_t ∘ (S^1 ∧ O_+)_f, as is α_s by the commutative square above. By the uniqueness of pointed lifts, this means α_s = α̃_t ∘ (S^1 ∧ O_+)_f. In other words, we have a commutative diagram:
\begin{tikzcd}
(S^1 \wedge O_+)_s \arrow[r, "\alpha_s"] \arrow[d, "{(S^1 \wedge O_+)_f}"'] & S^1 \arrow[d, "2"] \\
(S^1 \wedge O_+)_t \arrow[r, "\alpha_t"'] \arrow[ur, dashed, "\tilde{\alpha}_t"] & S^1
\end{tikzcd}
Assuming still that O ≠ [0], observe that a ๐-equivariant map from (S^1 ∧ O_+) to X thus uniquely determines a map from the constant ๐-space (S^1 ∧ O_+)_t to X, and vice versa. We could repeat the same argument, replacing (S^1 ∧ O_+) with (S^1 ∧ O_+) × I, to get a lifting of D-equivariant homotopies. We also note that the constant ๐-space (S^1 ∧ O_+)_t is isomorphic to (S^1 ∧ [1]_+). Thus, when O ≠ [0], we compute
ฯ_1^O(X) = [S^1 โง O_+, X]^๐โ
[S^1 โง [1], X]^๐ = [S^1, X_s] = ฯ_1(S^1).
These isomorphisms specify that the structure maps ฯ_1^g(X) are isomorphisms for g:O_1 โ O_2 when neither O_1 nor O_2 are [0].
When O = [0], we compute
ฯ_1^[0](X) = [S^1 โง [0]_+, X]^๐โ
[S^1,X^[0]] โ
[S^1, X_t] = [S^1,S^1] = ฯ_1(S^1).
Now, we just need to determine the unresolved structure maps. Since ฯ_1^g(X) is an isomorphism when O_1 and O_2 are not [0], we only need to consider the unique map j:[0] โ [1]. (All of the other unresolved maps are obtained by composing this map with some already-known isomorphism.)
Recall again that [0] and [1] are isomorphic to the free orbits ๐(t,-) and ๐(s,-), respectively. Under this identification, j:[0] โ [1] becomes ๐(f,-). This means the structure map from
ฯ_1^[1](X) = [S^1 โง [1]_+,X]^๐โ
[S^1, X_s] = ฯ_1(X_s)
to
π_1^[0](X) = [S^1 ∧ [0]_+,X]^๐ ≅
[S^1, X_t] = ฯ_1(X_t)
is given by ฯ_1(X_f), which is multiplication by 2.
Comparing this example with the preceding one about constant ๐-spaces shows why we needed to have structure maps: without the maps, the X above would have been indistinguishable from the constant ๐-space S^1. For our last two examples, let's see how orbits other than [0] and [1], the free orbits, can provide useful data:
Let X be the pointed ๐-space where X_s = S^m, where X_t = S^∞ = ⋃_{n ∈ ℕ} S^n, and where X_f is the inclusion of S^m into S^∞. Then, π_n^[0](X) = 0, while π_n^O(X) = π_n(X_s) for all other orbits. Given any g: O_1 → O_2 where O_1 and O_2 are not [0], π_n^g(X) is the identity map.
For O โ [0], (S^n โง O_+)_f and (S^n โง O_+ ร I)_f are surjective, so any map (or homotopy of maps) from S^n โง O_+ to X is factored by the inclusion of the constant ๐-space S^m into X. Thus, for O โ [0] (and the structure maps between such O), ฯ_n^O(X) agrees with ฯ_n^O(S^m) โ
ฯ_n(S^m).
We contrast this with the following:
Let Y be the pointed ๐-space with Y_s = S^m and Y_t = {∗}. Then, π_n^[i](Y) ≅ ∏_{z ∈ [i]_s} π_n(S^m), and g: [j] → [i] acts by sending (z_1, …, z_i) to (z_{g(1)}, …, z_{g(j)}).
Because Y_t is terminal, the data of a map ฮฑ from S^n โง [i]_+ to Y is the same as the data of ฮฑ_s: (S^n โง [i]_+)_s โ Y_s. Since (S^n โง [i]_+)_s is homeomorphic to the i-fold wedge product S^n โจโฆโจ S^n, we have that
ฯ_n^[i](Y) โ
[S^n โจโฆโจ S^n, S^m] โ
โ_z โ [i]_sฯ_n(S^m),
where the last isomorphism follows from the fact that โจ is the coproduct in the category of pointed topological spaces with homotopy classes of maps. Our description of ฯ_n^g(Y) then follows from chasing through the two isomorphisms.
In these last two examples, ฯ_n^[0](X), ฯ_n^[1](X), and ฯ_n^[0] โ [1](X) agree with their counterparts for Y. Only by using the other orbits can we homotopically distinguish between X and Y. This is desirable because while X_s โ Y_s and X_t โ Y_t are homotopy equivalent as spaces, X and Y are not homotopic as ๐-spaces. This is precisely analogous to distinguishing between G-spaces whose underlying spaces are homotopy equivalent but which are not equivariantly homotopy equivalent.
ยง D-CW-COMPLEXES AND D-CELL COMPLEXES
As in the non-equivariant case, we have a notion of CW-complexes, objects that are completely described by homotopy group functors. The material here largely follows the original exposition given by Dror Farjoun and Zabrodsky, with some more modern updates. We will need D-cell complexes for our proof of the diagram-equivariant Elmendorf's theorem because they are used to build the cofibrant objects of Top^D and Top^{𝒪_ℱ^op}.
[<cit.>] Given a collection of orbits โฑ of a small category D and a D-space X, a relative D-CW structure of type โฑ on X is a sequence of D-spaces
X^-1โช X^0 โช X^1 โชโฆโช X^n โชโฆโช X
such that for each i โฅ 0, X^i is obtained from X^i-1 as a pushout
\begin{tikzcd}
S^{i-1} \times A_i \arrow[r] \arrow[d, hook] & X^{i-1} \arrow[d, hook] \\
D^i \times A_i \arrow[r] & X^i \arrow[ul, phantom, "\ulcorner", very near start]
\end{tikzcd}
where each A_i is a disjoint union of D-orbits in โฑ. If X^-1 is the constant empty D-space, we drop the word relative and say that
X^0 โช X^1 โชโฆโช X^n โชโฆโช X
is a D-CW structure on X.
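As a small illustration assembled from this definition (ours, not from the original text), the constant ๐-space S^1 admits a ๐-CW structure of type {[1]} with one 0-cell and one 1-cell:

X^{-1} = ∅,  X^0 = D^0 × [1] ≅ [1],  X^1 = (D^1 × [1]) ∪_{S^0 × [1]} X^0,

where the attaching map S^0 × [1] → X^0 is the unique ๐-equivariant map. In each of the s- and t-components this is the usual CW structure on S^1 with one 0-cell and one 1-cell, and the structure map X^1_f is the identity, so X^1 is the constant ๐-space S^1.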
As in the non-equivariant case, a map α: X → Y of D-CW-complexes is a D-homotopy equivalence if and only if all nth homotopy group functors (including n=0) induce isomorphisms. This result is usually called “Whitehead's theorem” in the non-equivariant case and “Bredon's theorem” in the group-equivariant case. We now present the diagram-equivariant case, which was proven by Dror Farjoun and Zabrodsky.
[<cit.>] Let โฑ be a collection of orbits of a small category D. We say a D-space X is of type โฑ if
𝒪_X = { O_x | x ∈ colim(X) } ⊆ ℱ.
[<cit.>] Let D be a small category and let α: X → Y be a D-equivariant map of D-CW-complexes of type ℱ. Then, α is a D-homotopy equivalence if and only if α^O: X^O → Y^O is a homotopy equivalence of spaces for all O ∈ ℱ. (That is, that π_n^O(α): π_n^O(X) → π_n^O(Y) is an isomorphism for all n ∈ ℕ and O ∈ ℱ.)
In the previous section, we saw that X = (S^m โช S^โ) and Y = (S^m โ{}) were not ๐-homotopy equivalent. Since X is of type ๐ช_X = { [0], [1] } and Y is of type ๐ช_Y = { Y }, (Y is an orbit!) we know that any ฮฑ:X โ Y must have ฯ_n^O(ฮฑ) fail to be an isomorphism for some n โโ and O โ{ [0], [1], Y }. Back then, we showed that ฯ_n^[i](X) and ฯ_n^[i](Y) were not isomorphic for i โฅ 2 and any n where ฯ_n(S^m) โ 0. The theorem above says we could have simply checked ฯ_m^Y(X) and ฯ_m^Y(Y).
In general, once one has a D-CW structure on X, it's straightforward to know which orbits to check:
If X has (non-relative) D-CW structure of type โฑ, then X is a D-space of type โฑ.
Let X have a non-relative D-CW structure of type โฑ, and consider any point x_d โ X_d for any object d โ D. We wish to show that O_x_dโโฑ. By construction, x_d is a point in the interior of D^i ร A_i for exactly one i โโ. By equivariance, each point in O_x_d must also be a point in the interior of D^i ร A_i. Thus, the orbit type of x_d in X must be the same as its orbit type in D^i ร A_i. Because the latter is an orbit type in โฑ and x_d is a generic point, we're done.
Like in the group-equivariant case, D-CW-complexes are tame combinatorial objects that allow us to isolate certain homotopical behavior. However, sometimes we want to remove the restriction that higher-dimensional cells only attach onto lower-dimensional cells. When we get rid of this restriction, we get the more general notion of D-cell complexes. D-cell complexes enable us to provide nice descriptions of the cofibrations in ^D and ^๐ช_โฑ^op, which we will need to prove our generalization of Elmendorf's theorem.
Given a collection of orbits ℱ of a small category D and a D-space X, a relative D-cell structure of type ℱ on X is a (potentially transfinite) sequence of pushouts of the form:
\begin{tikzcd}
S^{n-1} \times O_\mu \arrow[r] \arrow[d] & X_\mu \arrow[d] \\
D^n \times O_\mu \arrow[r] & X_{\mu+1}
\end{tikzcd}
such that colim_μ X_μ = X, where each O_μ ∈ ℱ, and where n ≥ 0 is allowed to vary with respect to μ. (Here, the (-1)-sphere is the empty space.)
That is, there is an ordinal λ such that X = colim_{ν ≤ λ} X_ν. For successor ordinals λ = μ+1, X_λ is obtained by the above pushout. For limit ordinals λ, X_λ = colim_{ν < λ} X_ν. If X_0 is the constant empty D-space, we drop the word relative and say X is a D-cell complex of type ℱ.
As with D-CW-complexes, D-cell complexes of type โฑ are spaces of type โฑ:
If X is a (non-relative) D-cell complex of type โฑ, then X is a D-space of type โฑ.
Take the proof of proposition <ref> and replace โD-CW structureโ with โD-cell structure.โ
Any relative D-CW-complex is a relative D-cell complex.
Let X have a relative D-CW structure. By definition, each A_i involved in the construction is a disjoint union of orbits O_ฮฑ. By the usual axioms of set theory, this collection of orbits can be well-ordered. We can then order the orbits of A_0, A_1, โฆ lexicographically and get a new well-ordered set of all the orbits involved. This corresponds to some ordinal ฮป which we will now use for labeling. For any orbit O_ฮผ, we build our attaching pushout as
\begin{tikzcd}
& X^{n-1} \arrow[d, hook] \\
S^{n-1} \times O_\mu \arrow[ur] \arrow[r] \arrow[d] & X_\mu \arrow[d] \\
D^n \times O_\mu \arrow[r] & X_{\mu+1} \arrow[ul, phantom, "\ulcorner", very near start]
\end{tikzcd}
where the inclusion X^{n-1} ⊆ X_μ is guaranteed by our lexicographic ordering and where the map S^{n-1} × O_μ → X^{n-1} is given by the disjoint union decomposition of A_n (and the corresponding decomposition of S^{n-1} × A_n). X_0 is defined as X^{-1}.
From the last sentence of the proof, we are also able to conclude:
Any D-CW-complex is a D-cell complex.
ยง MODEL STRUCTURES ON TOP^D
We can now establish a model structure on Top^D from that on Top:
[<cit.>] The classical model structure on Top is given by:
* Weak equivalences are weak homotopy equivalences (that is, maps that induce isomorphisms for all ฯ_n).
* Fibrations are โSerre fibrations.โ
* Cofibrations are retracts of relative cell complexes.
As in any model category, a choice of two of {Fibrations, Cofibrations, Weak Equivalences} uniquely determines the third. It is thus a theorem that the weak equivalences and fibrations described above determine the cofibrations of the definition.
From this, we can build a model structure on Top^C for any (possibly large) category C. The model structure we're about to describe will most often be used with C = 𝒪_ℱ^op, where 𝒪_ℱ is the full subcategory of Top^D whose objects are the orbits O ∈ ℱ. We'll use a different model structure on Top^D, which is why we're using the letter “C” here.
Given a (possibly large) category C, the projective model structure on ^C is given by the following conditions:
* Weak equivalences ฮฒ:R โ S are such that ฮฒ_X is a weak homotopy equivalence for each object X โ C.
* Fibrations ฮฒ:R โ S are such that ฮฒ_X is a Serre fibration for each object X โ C.
* Cofibrations are retracts of relative C-cell complexes whose cells have orbit types among the free orbits of C.
The description of the cofibrations follows from <cit.>. We now give our model structure on ^D:
Let D be a small category and let โฑ be some collection of D-orbits that contains all of the free orbits. Then, the โฑ-model structure on ^D is given by:
* Weak equivalences ฮฑ: X โ Y are such that ^D(O,ฮฑ): ^D(O,X) โ^D(O,Y) is a weak equivalence for each O โโฑ.
* Fibrations ฮฑ: X โ Y are such that ^D(O,ฮฑ): ^D(O,X) โ^D(O,Y) is a Serre fibration for each O โโฑ.
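Unpacking the notation (this reformulation is ours, using the fixed-point notation Y^X = Top^D(X,Y) introduced earlier), the two conditions say:

α: X → Y is a weak equivalence (resp. fibration) in the ℱ-model structure
  ⟺ α^O: X^O → Y^O is a weak homotopy equivalence (resp. Serre fibration) of spaces for every O ∈ ℱ.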
Our main theorem, which is proven in the next section, is that there is a Quillen equivalence,
K : Top^{𝒪_ℱ^op} ⇄ Top^D : Φ, with K left adjoint to Φ (K ⊣ Φ),
where ^D has the โฑ-model structure and ^๐ช_โฑ^op has the projective model structure. The fact that this is a Quillen adjunction allows us to describe the cofibrations in the โฑ-model structure, which is the subject of Proposition <ref>
ยง ELMENDORF'S THEOREM
We now prove our diagram-equivariant version of Elmendorf's theorem. Our approach follows a modern treatment of the group-equivariant case by Marc Stephan <cit.>. There is a similar theorem for simplicial sets given by Dwyer and Kan <cit.> that uses a different notion of โorbit.โ We state our theorem as Theorem <ref> and spend the rest of this paper proving it.
Let D be a small category, let โฑ be some collection of orbits of D that contains all of the free orbits, and let ๐ช_โฑโ^D be the full subcategory spanned by โฑ. Then, there is a Quillen equivalence
K:^๐ช_โฑ^opโ^D: ฮฆ,
where ^๐ช_โฑ^op has the projective model structure and ^D has the โฑ-model structure.
The functors that comprise the Quillen equivalence are:
* K: Top^{𝒪_ℱ^op} → Top^D is defined by K(R) = R ∘ i, where i is the inclusion of D into 𝒪_ℱ^op via the Yoneda embedding d ↦ F^d.
* ฮฆ: ^D โ^๐ช_โฑ^op is defined by ฮฆ(X)(O) = ^D(O,X) for all O โโฑ.
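To make the adjoint pair concrete, here is a small worked example of ours for D = ๐ (using the identifications [0] ≅ F^t and [1] ≅ F^s from earlier):

K(R)_s = R(F^s) = R([1]),  K(R)_t = R(F^t) = R([0]),  K(R)_f = R(๐(f,-)),
Φ(X)([1]) = ^๐([1],X) ≅ X_s,  Φ(X)([0]) = ^๐([0],X) ≅ X_t,

and Φ(X) sends the unique map j: [0] → [1] to X_f: X_s → X_t by precomposition, consistent with the identification of j with ๐(f,-) used in the winding example above.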
To prove Theorem <ref>, we will first show that (K,Φ) is an adjunction, then show that (K,Φ) is a Quillen adjunction, and then finally show that (K,Φ) is in fact a Quillen equivalence.
K is a left inverse to ฮฆ. That is, there is a natural isomorphism Kฮฆโ
๐_^D.
Let X be a D-space. By definition, Kฮฆ (X) is the D-space given by
[Kฮฆ (X)]_d = ^D(F^d,X).
But by the Yoneda lemma, ^D(F^d,X) โ
X_d. The naturality of the Yoneda lemma in the first argument thus tells us that KΦ(X) ≅
X. The naturality of the Yoneda lemma in the second argument allows us to then conclude that Kฮฆโ
๐_^D.
(K,ฮฆ) is an adjunction.
We will construct a natural isomorphism
^D(K(R),X) โ
^๐ช_โฑ^op(R,ฮฆ (X)),
where R is a generic ๐ช_โฑ^op-space and X is a generic D-space. In this context, given a D-equivariant map f K(R) โ X, its adjunct is the ๐ช_โฑ^op-equivariant map g: R โฮฆ (X) defined as follows:
For any O โโฑ,
g(O): R(O) โ [ฮฆ (X)](O) = ^D(O,X)
is the continuous map that sends r โ R(O) to the D-equivariant map
[g(O)](r):O โ X,
where [g(O)](r) is defined by
([g(O)](r))_d(o_d) = f(R(o_d^*)(r))
for all d โ D and o_d โ O_d. (Here, o_d^* F^d โ O is the unique D-equivariant map that sends ๐_d to o_d.)
For this construction to be valid, we need to check that [g(O)](r) is indeed D-equivariant and then that g is indeed ๐ช_โฑ^op-equivariant. This just means that [g(O)](r) and g need to be natural transformations, so we check the corresponding naturality squares. We'll begin with [g(O)](r):
Let ฮฑ : d โ d' be a morphism in D. To see that the square
\begin{tikzcd}
O_d \arrow[r, "{([g(O)](r))_d}"] \arrow[d, "O_\alpha"'] & X_d \arrow[d, "X_\alpha"] \\
O_{d'} \arrow[r, "{([g(O)](r))_{d'}}"'] & X_{d'}
\end{tikzcd}
commutes, consider a generic element o_d โ O_d and observe that:
* [O_ฮฑ(o_d)]^* = o_d^* โ^D(ฮฑ, -) because the maps agree on ๐_d' and are D-equivariant.
* Applying R to both sides gives us R([O_ฮฑ(o_d)]^*) = R(^D(ฮฑ, -)) โ R(o_d^*). (R is contravariant!)
* By definition of K, K(R)_d = R(F^d), K(R)_d' = R(F^d'), and K(R)_ฮฑ = R(^D(ฮฑ, -)), so the previous line can be rephrased as R([O_ฮฑ(o_d)]^*) = K(R)_ฮฑโ R(o_d^*).
* Since f: K(R) โ X is a D-equivariant map, f_d'โ K(R)_ฮฑ = X_ฮฑโ f_d.
* Thus, combining the previous three steps, we see that
f_d'โ R([O_ฮฑ(o_d)]^*) = f_d'โ R(^D(ฮฑ, -)) โ R(o_d^*) = X_ฮฑโ f_d โ R(o_d^*).
* The above are all continuous maps from R(O) to X_d'. Hence, for any r โ R(O), f_d'โ R([O_ฮฑ(o_d)]^*)(r) = X_ฮฑโ f_d โ R(o_d^*) (r) as elements of X_d'.
* By definition of g(O)(r), this shows that [g(O)(r)]_d'โ O_ฮฑ (o_d) = X_ฮฑโ [g(O)(r)]_d (o_d). Since o_d was arbitrary, our square commutes and we conclude that g(O)(r) is indeed D-equivariant.
Now let's confirm that g is ๐ช_โฑ^op-equivariant, which is to say that the square
R(O) --g(O)--> ฮฆ(X)(O) = ^D(O,X)
 ^                       ^
R(ฯ)                    ^D(ฯ,X) = (- โฯ)
 |                       |
R(P) --g(P)--> ฮฆ(X)(P) = ^D(P,X)
commutes, where ฯ: O โ P is any map of D-orbits. (Note the direction of the vertical arrows; R and ฮฆ(X) are contravariant.)
Let r be a generic element of R(P). We can see g(O)(R(ฯ)(r)) and g(P)(r) โฯ are the same element of ^D(O,X) by the following:
* For any object d โ D and point o_d โ O_d, we know that ฯโ o_d^* = (ฯ(o_d))^* because they are both D-equivariant maps from F^d that agree on ๐_d.
* Thus, applying R to both sides, we get that R(o_d^*) โ R(ฯ) and R((ฯ(o_d))^*) are equal as functions from R(P) to R(F^d). Hence, for any r โ R(P), R(o_d^*) โ R(ฯ)(r) = R((ฯ(o_d))^*)(r).
* Since R(F^d) = K(R)_d by the definition of K, we can post-compose f to both sides and see that
f(R(o_d^*) โ R(ฯ)(r)) = f(R((ฯ(o_d))^*)(r)).
* But by definition of g, the left-hand side of this equation is g(O)(R(ฯ)(r))(o_d), and the right-hand side is g(P)(r) โฯ (o_d). Because this holds for all possible r and o_d, g is indeed ๐ช_โฑ^op-equivariant.
Having constructed the adjunct g, let us now show that the assignment of f:K(R) โ X to g:R โฮฆ(X) as described above yields a bijection ^D(K(R),X) โ
^๐ช_โฑ^op(R,ฮฆ (X)). To see that the assignment is injective, observe that if f_1, f_2: K(R) โ X differ at r โ K(R)_d = R(F^d), then the corresponding g_1 and g_2 differ because g_i(F^d)(r)(๐_d) = f_i(r).
To show surjectivity, we will demonstrate that any g:R โฮฆ(X) has an f:K(R) โ X assigned to it, namely the one defined by
f(r) = ([g(F^d)](r))_d(๐_d)
for any r โ R(F^d) = K(R)_d.
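For later reference, the two formulas defining the correspondence in each direction can be collected side by side (a LaTeX restatement in the notation above):
\[
  \bigl([g(O)](r)\bigr)_{d}(o_{d}) \;=\; f\bigl(R(o_{d}^{*})(r)\bigr),
  \qquad
  f(r) \;=\; \bigl([g(F^{d})](r)\bigr)_{d}(\mathbb{1}_{d})
  \quad \text{for } r \in R(F^{d}) = K(R)_{d}.
\]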
If we were to continue with the notation just used, the rest of the proof would be quite cumbersome. It's time to simplify:
From now on, ([g(O)](r))_d(o_d) will be denoted g(O)(r)(o_d). (In particular, the domain of g(O)(r), treated as a continuous map, will be implicit from the argument.)
Resuming the proof of surjectivity, we first note that this f is indeed D-equivariant because the naturality of g in F^d gives that f is natural in d, and that f_d is continuous because g(F^d) is. To finish proving surjectivity, we just need to show that f is actually assigned to g, which we do as follows:
* The adjunct g̃ that f is assigned to is defined by
g̃(O)(r_O)(o_d) = f(R(o_d^*)(r_O))
for any r_O โ R(O). However, we've defined f above such that
f(R(o_d^*)(r_O)) = g(F^d)(R(o_d^*)(r_O))(๐_d).
* By naturality of g in O, we know that g(O)(r_O) โ o_d^* and g(F^d)(R(o_d^*)(r_O)) are equal as elements of ^D(F^d,X). In particular, they agree on the evaluation of ๐_d, which means
g̃(O)(r_O)(o_d) = g(F^d)(R(o_d^*)(r_O))(๐_d) = g(O)(r_O)(o_d).
Hence, g̃ = g, so every g has a preimage and the assignment of f to its adjunct is surjective.
Finally, to complete the proof of the adjunction, we just need to show that the bijection (isomorphism of sets) ^D(K(R),X) โ
^๐ช_โฑ^op(R,ฮฆ (X)) is natural in both X and R:
* Let ฮฑ: X โ Y be a map of D-spaces. To show naturality in X, we will check that the diagram
^D(K(R),X) --≅--> ^๐ช_โฑ^op(R,ฮฆ(X))
 |                        |
ฮฑโ -                    ฮฆ(ฮฑ) โ -
 v                        v
^D(K(R),Y) --≅--> ^๐ช_โฑ^op(R,ฮฆ(Y))
commutes. Consider any f โ^D(K(R),X), which thus has adjunct g โ^๐ช_โฑ^op(R,ฮฆ (X)) defined by g(O)(r)(o_d) = f(R(o_d^*)(r)), for all O โ๐ช_โฑ, r โ R(O), and o_d โ O_d. By definition of ฮฆ, the composition ฮฆ(ฮฑ) โ g thus satisfies ฮฆ(ฮฑ) โ g(O)(r)(o_d) = ฮฑโ f(R(o_d^*)(r)). But ฮฑโ f(R(o_d^*)(r)) is precisely the formula that defines the adjunct of ฮฑโ f โ^D(K(R),Y). Hence, the diagram commutes, so ^D(K(R),X) โ
^๐ช_โฑ^op(R,ฮฆ (X)) is natural in X.
* Similarly, let ฮณ:R โ S be a map of ๐ช_โฑ^op-spaces. To show naturality in R, we will check that the diagram
^D(K(R),X) --≅--> ^๐ช_โฑ^op(R,ฮฆ(X))
 ^                        ^
- โ K(ฮณ)                - โฮณ
 |                        |
^D(K(S),X) --≅--> ^๐ช_โฑ^op(S,ฮฆ(X))
commutes. (Note the direction of the vertical arrows.) We know that any f โ^D(K(S),X) is assigned to the adjunct g โ^๐ช_โฑ^op(S,ฮฆ (X)) defined by g(O)(s)(o_d) = f(S(o_d^*)(s)) for all O โ๐ช_โฑ, s โ S(O), and o_d โ O_d. The composition g โฮณ then satisfies
(g โฮณ) (O)(r)(o_d) = g(O)(ฮณ(r))(o_d) = f(S(o_d^*)(ฮณ(r))) = f(ฮณ(R(o_d^*)(r))).
(The first equation is the definition of g โฮณ, the second follows by plugging ฮณ(r) into the adjunct formula, and the third follows from the naturality of ฮณ, which gives S(o_d^*) โฮณ = ฮณโ R(o_d^*).) But f(ฮณ(R(o_d^*)(r))) is precisely the formula that defines the adjunct of f โ K(ฮณ). Hence, ^D(K(R),X) โ
^๐ช_โฑ^op(R,ฮฆ (X)) is natural in R.
Thus, we've completed the proof of that (K,ฮฆ) is an adjunction.
(K,ฮฆ) is a Quillen adjunction.
One of the equivalent conditions for an adjunction to be a Quillen adjunction is that the right adjoint, ฮฆ, preserve fibrations and trivial fibrations (that is, fibrations that are also weak equivalences). Recall from Definitions <ref> and <ref> that the model structures we're using are:
* The weak equivalences (or fibrations) in ^D are maps ฮฒ: X โ Y such that ^D(O,ฮฒ):^D(O,X) โ^D(O,Y) is a weak equivalence (or fibration) in for all O โโฑ.
* The weak equivalences (or fibrations) in ^๐ช_โฑ^op are maps ฮณ: R โ S such that ฮณ(O):R(O) โ S(O) is a weak equivalence (or fibration) in for all O โโฑ.
We observe from this description that, since ฮฆ(X)(O) = ^D(O,X), a D-equivariant map ฮฒ: X โ Y is a weak equivalence (resp. fibration) if and only if ฮฆ(ฮฒ):ฮฆ(X) โฮฆ(Y) is an equivalence (resp. fibration). Hence, ฮฆ preserves weak equivalences and fibrations, and thus also trivial fibrations.
We're now able to describe the cofibrations in the โฑ-model structure:
Any relative cell complex ฮฑ: X_0 โ X of type โฑ is a cofibration in ^D under the โฑ-model structure.
The left adjoint in a Quillen adjunction preserves cofibrations. It also preserves pushouts and general colimits. Thus, any D-cell complex X of type โฑ is the image under K of a ^๐ช_โฑ^op-cell complex of type . (A D-orbit in โฑ is a free ๐ช_โฑ^op-orbit under the Yoneda embedding. Thus, a D-cell complex where D^n ร O_ฮผ is attached at the ฮผth stage is hit by a ^๐ช_โฑ^op-cell complex where ^๐ช_โฑ^op(O_ฮผ,-) ร D^n is attached at the ฮผth stage.)
We can now finish the proof of the theorem:
(K, ฮฆ) is a Quillen equivalence.
We need to show that, for any cofibrant R โ^๐ช_โฑ^op and fibrant X โ^D, any D-equivariant map f:K(R) โ X is a weak equivalence if and only if its adjunct g:R โฮฆ(X) is a weak equivalence. But by how we've defined our weak equivalences, f is a weak equivalence if and only if ฮฆ(f):ฮฆ K (R) โฮฆ(X) is. By the 2-of-3 property of weak equivalences and the fact that the unit of the adjunction at R, ฮท_R, factors the adjunct as g = ฮฆ(f) โฮท_R, it is sufficient (and necessary) to show that ฮท_R is a weak equivalence for all cofibrant R โ^๐ช_โฑ^op. We will in fact show that ฮท_R is an isomorphism for each cofibrant R.
Recall from the discussion in Definition <ref> that cofibrations in the projective model structure on ^๐ช_โฑ^op are retracts of relative ^๐ช_โฑ^op-cell complexes of type . Hence, every cofibrant R can be realized as a retract of some R', where R' is a transfinite composition of pushouts of the form
^๐ช_โฑ^op(O,-) ร A --------> R_ฮผ
 |                           |
๐ ร c                        |
 v                           v
^๐ช_โฑ^op(O,-) ร B --------> R_ฮผ+1     (a pushout square).
Since R is a retract of R', we can show ฮท_R is an isomorphism by showing that ฮท_R' is an isomorphism. We will do this by transfinite induction:
Let ฮป be an ordinal such that there is a functor R: ฮปโ^๐ช_โฑ^op with colimit R' and such that for all successor ordinals ฮผ +1 < ฮป, R_ฮผ + 1 is the pushout given above. (Here O is some orbit in ๐ช_โฑ and c: A โ B is a generating cofibration.)
Initial Case: If ฮป is the initial ordinal, then the colimit over ฮป is the initial object of ^๐ช_โฑ^op. In this case, R'(O)={} for all O โโฑ. Hence, K(R')_d = {} for all objects d โ D, and consequently ฮฆ K (R') (O) = ^D(O,K(R')) = {} as all orbits have at least one point and there are no continuous maps from a non-empty space to the empty space. This makes ฮท_R' an equality and in particular an isomorphism.
Successor Case: If ฮป = ฮผ +1 for some ordinal ฮผ, we inductively assume that ฮท_R_ฮผ is an isomorphism. To see that ฮท_R' is an isomorphism, we will check that ฮฆ K (-) preserves pushouts and that ฮท_^๐ช_โฑ^op(O,-) ร A is an isomorphism for all O โโฑ and A โ. For pushout-preservation, we first note that K automatically preserves pushouts by being a left adjoint. Since colimits (and limits) in a functor category are computed object-wise, showing that ฮฆ preserves pushouts is equivalent to showing that ^D(O, Y โ_X Z) is naturally isomorphic to ^D(O,Y) โ_^D(O,X)^D(O,Z). But this is immediate from Proposition <ref>, since pushouts are colimits.
To complete the successor ordinal case, we just need to show that ฮท_^๐ช_โฑ^op(O,-) ร A is an isomorphism for all O โโฑ and A โ. To see this, note that
K(^๐ช_โฑ^op(O,-) ร A)_d = ^D(O,F^d) ร A โ
O_d ร A.
The naturality of the Yoneda lemma in the second argument shows us that
K(^๐ช_โฑ^op(O,-) ร A) โ
O ร A.
Applying ฮฆ to both sides gives
ฮฆ K(^๐ช_โฑ^op(O,-) ร A) โ
ฮฆ (O ร A).
Next, observe that, at any orbit P โโฑ,
ฮฆ (O ร A)(P) = ^D(P, O ร A) โ
^D(P,O) ร A = ^๐ช_โฑ^op(O,P) ร A.
(To see the natural isomorphism above, first observe that
^D(P, O ร A) โ
^D(P,O) ร^D(P,A)
because representable functors preserve limits. Then, simplify by noting that ^D(P,A) โ
A because colim(P) = {p} is a one-point space and every morphism in A is an identity morphism.) Since ^D(P,O) ร A is the same space as ^๐ช_โฑ^op(O,-) ร A evaluated at P, we conclude that ฮฆ K(^๐ช_โฑ^op(O,-) ร A) is naturally isomorphic to ^๐ช_โฑ^op(O,-) ร A. By construction, the natural isomorphism described above is precisely ฮท_^๐ช_โฑ^op(O,-) ร A, meaning the successor case is complete.
Limit Case: Let ฮป be a limit ordinal. We inductively assume that ฮท_R_ฮฝ is an isomorphism for all ฮฝ < ฮป. Since ฮป is a limit ordinal, R' = colim_ฮฝ < ฮป(R_ฮฝ). As before, showing that ฮท_R':ฮฆ K (R') โ R' is an isomorphism reduces to showing each ฮท_R'(O):ฮฆ K (R')(O) โ R'(O) is an isomorphism of spaces. Note that each K(R_ฮฝ) is a D-cell complex by construction (specifically, it is built only out of cells of orbit types O โโฑ), and K(R_ฮฝ) โ K(R_ฮพ) is a D-cellular inclusion for any ฮฝ < ฮพ < ฮป. Thus, since K preserves colimits, we only need to check that ^D(O,-) preserves colimits indexed by ordinals where each map is a D-cellular inclusion of D-cell complexes. Combined with the inductive hypothesis, this will show that ฮท_R'(O):ฮฆ K (R')(O) โ R'(O) is an isomorphism of spaces.
Let f:O โ R' = colim_ฮฝ < ฮป(R_ฮฝ) be a D-equivariant map. We wish to find an ordinal ฮฝ < ฮป and a map g:O โ R_ฮฝ such that g factors f. Let o_d โ O_d be a point of O, and set ฮฝ to be the smallest ordinal such that f(o_d) โ (R_ฮฝ)_d. Observe that when f(o_d) was added to R_ฮฝ, it was added as the interior of some D-disk Orb_โฑ^op(P,F^d) ร D^n+1โ
P ร D^n+1. By equivariance and the colimit property of orbits, all points in R_ฮฝ that are in the orbit of f(o_d) must also have been in the interior of the new (P ร D^n+1)-cell. Thus, all points in the orbit of f(o_d) must lie in R_ฮฝ, which means that f is indeed factored by a map g:O โR_ฮฝ. This completes the limit case and the proof.
|
http://arxiv.org/abs/2306.08946v1
|
20230615083716
|
Bootstrap aggregation and confidence measures to improve time series causal discovery
|
[
"Kevin Debeire",
"Jakob Runge",
"Andreas Gerhardus",
"Veronika Eyring"
] |
stat.ME
|
[
"stat.ME",
"stat.ML"
] |
Bootstrap aggregation and confidence measures to improve time series causal discovery
Kevin Debeire, Jakob Runge, Andreas Gerhardus, Veronika Eyring
======================================================================================
Causal discovery methods have demonstrated the ability to identify the time series graphs representing the causal temporal dependency structure of dynamical systems. However, they do not include a measure of the confidence of the estimated links. Here, we introduce a novel bootstrap aggregation (bagging) and confidence measure method that is combined with time series causal discovery. This new method allows measuring confidence for the links of the time series graphs calculated by causal discovery methods. This is done by bootstrapping the original times series data set while preserving temporal dependencies. Next to confidence measures, aggregating the bootstrapped graphs by majority voting yields a final aggregated output graph. In this work, we combine our approach with the state-of-the-art conditional-independence-based algorithm PCMCI+. With extensive numerical experiments we empirically demonstrate that, in addition to providing confidence measures for links, Bagged-PCMCI+ improves the precision and recall of its base algorithm PCMCI+. Specifically, Bagged-PCMCI+ has a higher detection power regarding adjacencies and a higher precision in orienting contemporaneous edges while at the same time showing a lower rate of false positives. These performance improvements are especially pronounced in the more challenging settings (short time sample size, large number of variables, high autocorrelation). Our bootstrap approach can also be combined with other time series causal discovery algorithms and can be of considerable use in many real-world applications, especially when confidence measures for the links are desired.
ยง INTRODUCTION
Since a rigorous mathematical framework for causal analyses has been established in the seminal works of Pearl, Spirtes, Glymour, Scheines, and Rubin <cit.>, causal inference has undergone continuous developments to address the challenges of real-world problem settings.
Learning causal graphs from data (termed causal discovery) is a main pillar of causal inference and of high interest in many fields where not even qualitative causal knowledge in the form of graphs is available, such as in biology <cit.>, neuroscience <cit.>, or Earth system sciences <cit.>. Furthermore, data in these fields typically comes in the form of time series, constituting a more general problem setting than the standard i.i.d.-case. In <cit.>, the authors give an overview of a few main categories of time series causal discovery methods: Granger causality and its extensions <cit.>, nonlinear state-space methods (CCM <cit.>), causal network learning algorithms (for example, naive adaptations of the PC-algorithm <cit.>, PCMCI and its extensions like PCMCI+<cit.>, FCI <cit.>), Bayesian score-based approaches <cit.>, and the structural causal model framework (VarLiNGAM <cit.>).
In parallel with the development of causal discovery methods, <cit.> introduced bootstrap aggregation (bagging) which has initially been used to improve the accuracy and stability of machine learning algorithms. In bagging, a random sample in the training set is selected with replacement โmeaning that each data point can be drawn more than once. Several data samples are generated in this fashion to produce a set of replicates (also called resamples). The machine learning models are then trained independently on each replicate and, finally, the outputs are averaged for prediction tasks or aggregated for classification tasks (for example by majority voting).
Combining bagging and causal graphical model algorithms has been proposed to improve the stability of graphical model learning <cit.>, as the estimation of graphical models is relatively sensitive to small changes of the original data. For example, <cit.> introduce an aggregation approach of directed acyclic graphs (DAGs) by minimizing the overall distance of the aggregated graph (based on structural hamming distance) to the ensemble of DAGs. In addition, the idea of measuring the uncertainty or confidence for an edge of an estimated graph from the edge frequency based on the graphs learned on bootstrap samples has been suggested in <cit.>. However, none of these approaches is directly transferable to time series causal discovery because their bootstrap sampling needs to be adapted to the lagged interdependencies of time series data.
Our main contribution is the introduction of a bootstrap method for time series causal discovery which preserves temporal dependencies. Our method allows (1.) to obtain uncertainty estimates for the links of the output graph, and (2.) improves the stability and accuracy by aggregating the ensemble of bootstrap graphs to one single output graph with majority voting on the level of each individual edge.
In principle, our method can be paired with any time series causal discovery algorithm. Here we combine it with PCMCI+ย <cit.> as a representative of a state-of-the-art constraint-based time series causal discovery method.
The paper is structured as follows. In Sectionย <ref>, we give an overview of time series causal discovery and the PCMCI+ method. In Sectionย <ref>, we present our bagging and confidence measure method which we combine with PCMCI+ (Bagged-PCMCI+). With a range of numerical experiments, we show in Sectionย <ref> that Bagged-PCMCI+ outperforms PCMCI+ and that our method to measure confidence for links is effective. Finally, we summarize the paper in Sectionย <ref>. The paper is accompanied by Supplementary Material (SM).
ยง TIME SERIES CAUSAL DISCOVERY
ยง.ยง PRELIMINARIES
We consider discrete-time structural causal processes ๐_t = (X_t^1, ..., X_t^N) such that
X_t^j := f_j( pa(X_t^j),ฮท_t^j) โ j โ{1, โฆ, N}.
Here, f_j are arbitrary measurable functions that depend non-trivially on all their arguments
and ฮท_t^j are mutually and temporally independent noises. In a time series graph ๐ข, the nodes represent the variables X_t^j at different time lags. The causal parents pa(X_t^j) are the set of variables on which X^j_t depends, and a causal link from X^i_t-ฯ to X^j_t exists if X^i_t-ฯโ pa(X_t^j) for a time lag ฯ.
A link X^i_t-ฯโ X^j_t is called lagged if ฯ > 0, else it is called contemporaneous.
In this work, we assume stationarity of the causal links: that is, if the causal link X^i_t-ฯโ X^j_t exists for some time t, then X^i_t'-ฯโ X^j_t' also exists for all times t' โ t. We define the set ๐(X^j_t) of non-future adjacencies of variable X_t^j as the set of all variables X_t-ฯ^i for ฯโฅ 0 that have a causal link with X_t^j.
ยง.ยง PCMCI+
PCMCI+ learns the causal time series graph including lagged and contemporaneous links (up to Markov equivalence) under the standard assumptions of Causal Sufficiency, Faithfulness, and the Causal Markov condition <cit.>. To increase the detection power and maintain well-calibrated tests, PCMCI+ optimizes the choice of conditioning sets in the conditional independence (CI) tests. It is based on two central ideas: (1)ย separating the skeleton edge removal phase into a lagged and contemporaneous conditioning phase, and (2) constructing conditioning sets in the contemporaneous conditioning phase via the so-called momentary conditional independence (MCI) approachย <cit.>, as explained below. Moreover, PCMCI+ is order-independentย <cit.>, which implies that the output does not depend on the order of the variables X^j. More details and examples of PCMCI+ can be found in <cit.>, here we briefly summarize it.
In its first phase, PCMCI+ starts with a fully connected graph and then removes adjacencies among variable pairs by conditional independence testing.
In the first phase (PC_1), the algorithm tests all lagged pairs (X^i_t-ฯ, X_t^j) for ฯ > 0 conditioning on subsets ๐_kโ๐(X^j_t) โฉ๐^-_t with the lagged variables ๐^-_t=(๐_t-1, ๐_t-2,โฆ, ๐_t-ฯ_max) up to a maximum time lag ฯ_max. If (conditional) independence is detected, the adjacency is removed. The subsets ๐_k are chosen with increasing cardinality k: For k=0 all X^i_t-ฯ with X^i_t-ฯ ⊥⊥ X^j_t are removed, for k=1 those with X^i_t-ฯ ⊥⊥ X^j_t | ๐_1 where ๐_1 is the adjacency with largest association from the previous step, for k=2 those with X^i_t-ฯ ⊥⊥ X^j_t | ๐_2 where ๐_2 are the two adjacencies with largest association from the previous step, and so on. Association strength is measured by the absolute test statistic value of the CI test. This choice improves recall and speeds up the skeleton phase. The resulting lagged adjacency sets for each X^j_t of this first phase are denoted โฌ^-_t(X_t^j).
contemporaneous adjacencies plus all lagged adjacencies
from โฌฬ^-_t(X_t^j) for all X_t^j. This phase of PCMCI+ tests all, contemporaneous and lagged, adjacent pairs (X^i_t-ฯ, X_t^j) for ฯโฅ 0, but iterates only through contemporaneous conditions ๐โ๐(X^j_t) โฉ๐_t with the MCI test
X^i_t-ฯ ⊥⊥ X_t^j | ๐, โฌฬ^-_t(X_t^j) โ{X^i_t-ฯ}, โฌฬ^-_t(X^i_t-ฯ).
The conditioning on โฌฬ^-_t(X_t^j) blocks paths through lagged parents while the conditioning on โฌฬ^-_t(X^i_t-ฯ) has been shown to lead to well-calibrated tests even for highly autocorrelated time seriesย <cit.>. After these tests, the time-series-adapted collider orientation phase and rule orientation phase are applied: the former rule orients the collider motifs that contain contemporaneous links based on unshielded triples while the latter rule orients the remaining contemporaneous links based on the Meek rules <cit.>.
In the final graph, a pair (X^i_t-ฯ, X^j_t) of vertices can be connected by the following link types: no link (i.e., pair is non-adjacent), direct link X^i_t-ฯ --> X^j_t, opposite direct link X^i_t <-- X^j_t (only for ฯ = 0), unoriented link X^i_t o-o X^j_t (only for ฯ = 0), conflict-indicating link X^i_t x-x X^j_t (due to finite sample effects or violations of assumptions, only for ฯ = 0).
Like other CI-based methods, PCMCI+ has the free parameters ฮฑ_PC (significance level of CI tests), ฯ_max (maximal considered time lag), and the choice of the CI test. ฮฑ_PC in PCMCI+ has been empirically shown to be an upper bound on the false positives. As opposed to such a statistically-motivated choice, it can also be chosen based on cross-validation or an information criterion <cit.>. ฯ_max should be larger or equal to the maximum assumed true time lag of any parent and can in practice also be chosen based on model selection. However, the numerical experiments indicate that a too large ฯ_max does not degrade performance much <cit.>.
PCMCI+ can flexibly be combined with different CI tests for nonlinear causal discovery, and for different variable types (discrete or continuous, univariate or multivariate).
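To make the role of the CI test concrete, the following is a minimal plain-numpy sketch of a ParCorr-style partial correlation test; it is our own illustrative code, not the tigramite implementation, and the function name and interface are assumptions.

import numpy as np
from scipy import stats

def parcorr_pvalue(x, y, Z):
    # Two-sided p-value for the partial correlation of x and y given the columns of Z.
    # x, y: 1-D arrays of length T; Z: 2-D array of shape (T, k), possibly with k = 0.
    def residuals(v):
        design = np.column_stack([np.ones(len(v)), Z]) if Z.shape[1] > 0 else np.ones((len(v), 1))
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    rx, ry = residuals(x), residuals(y)
    r = np.corrcoef(rx, ry)[0, 1]
    dof = len(x) - Z.shape[1] - 2
    t_stat = r * np.sqrt(dof / max(1e-12, 1.0 - r ** 2))
    return 2.0 * stats.t.sf(abs(t_stat), dof)

A small p-value is taken as evidence of (conditional) dependence; the adjacency is removed when the p-value exceeds the chosen significance level ฮฑ_PC.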
ยง BAGGING FOR TIME SERIES CAUSAL DISCOVERY
ยง.ยง MOTIVATIONAL EXAMPLE
The example in Fig. <ref> illustrates the key benefits of bagging a causal discovery method. In this example, we generated four time series according to Equation <ref> and plot them in Fig. <ref>A. We then estimate the causal dependencies from the data, once with PCMCI+ and once with Bagged-PCMCI+. In the output graph of PCMCI+, see Fig. <ref>E, it is not easily possible to assess the confidence associated with a link. Approaches in this direction are based on taking the absolute p-value over all conditioning sets <cit.>, which presents an overly conservative option and does not allow us to assess the confidence in orientations. In comparison, Bagged-PCMCI+ provides a measure of confidence here shown by the width of the links in Fig. <ref>C (thick links for high confidence degrees, thin links for low confidence degrees). Moreover, in this example, Bagged-PCMCI+ has fewer false positives compared to PCMCI+, which leads to higher precision and is a robust property as demonstrated by our extensive numerical experiments.
ยง.ยง BAGGED-PCMCI+
While our bootstrap approach can be adapted to any time series causal discovery algorithm, this paper focuses on its implementation with PCMCI+ called Bagged-PCMCI+.
We illustrate the different bootstrap aggregation phases in Fig. <ref>A, B and C. The returned results of Bagged-PCMCI+ are: (i) an ensemble of B causal graphs from applying PCMCI+ to B datasets that are obtained by resampling with replacement while preserving temporal dependencies, (ii) the aggregation of all these graphs by majority voting (at the level of each individual edge) to a final output graph, (iii) link frequencies for the final aggregated graph that provide a confidence measure for the links.
Since time series causal discovery is sensitive to temporal dependencies, it is essential to retain the temporal dependencies in the resampling procedure. However, standard resampling inevitably destroys (at least parts of) the temporal dependencies. To resolve this problem, we employ the resampling strategy as illustrated in Figureย <ref>: First, starting from the original time series (๐)_t=1^T = (X^1, โฆ, X^N)_t=1^T, we replace each component (X^i)_t=1^T by the 2ฯ_max+1 dimensional vector (X^โ, i)_t=2ฯ_max+1^T = (X^โ, i, 0, X^โ, i, 1, โฆ, X^โ, i, 2ฯ_max)_t=2ฯ_max+1^T, where X^โ, i, ฯ_t = X^i_t-ฯ, and define (๐^โ)_t=2ฯ_max+1^T = (X^โ,1, โฆ, X^โ, N)_t=2ฯ_max+1^T. Second, we use standard resampling on the artificially enlarged time series (๐^โ)_t=2ฯ_max+1^T to create the B bootstrap datasets. This procedure retains the temporal dependencies ๐ up to time lags of 2ฯ_max because these dependencies are encoded within each single time step of ๐^โ by means of the auxiliary components X^โ, i, ฯ. When combined with PCMCI+ with a maximum time lag ฯ_max, retaining time lags up to 2ฯ_max allows PCMCI+ to run all its CI tests; when instead combined with other causal discovery algorithms minor modifications in the definition of ๐^โ might be necessary. Moreover, we note that in our implementation the enlarged time series ๐^โ is not actually created in the sense of allocating additional memory, but instead, we employ pointers to implement the above sampling procedure. Bagged-PCMCI+ then individually applies PCMCI+ to each of the B bootstrap datasets, thus generating an ensemble ๐ = {๐ข_1,โฆ,๐ข_B} of B causal graphs as an intermediate results. Alg. <ref> summarizes all steps up to this point.
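As an illustration only (not the authors' code), the resampling step can be sketched as follows: instead of materializing the enlarged series ๐^โ, one can draw anchor time points with replacement and keep, for each draw, the window of the 2ฯ_max preceding observations. The function name, return layout, and use of numpy's random Generator are our own choices.

import numpy as np

def bootstrap_windows(X, tau_max, rng):
    # One bootstrap replicate of a (T, N) time series: draw anchor times with replacement
    # and keep, for each draw, the 2*tau_max preceding observations, so that the lagged
    # dependencies needed by the CI tests are preserved inside every drawn row.
    T, N = X.shape
    num_draws = T - 2 * tau_max
    anchors = rng.integers(2 * tau_max, T, size=num_draws)
    return np.stack([X[t - 2 * tau_max: t + 1] for t in anchors])  # (num_draws, 2*tau_max + 1, N)

# Usage sketch: B replicates, each handed to the causal discovery algorithm.
# rng = np.random.default_rng(0)
# replicates = [bootstrap_windows(X, tau_max=5, rng=rng) for _ in range(100)]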
In Alg. <ref>, we detail the aggregation of the B causal graphs ๐ = {๐ข_1,โฆ,๐ข_B} to a single final output graph ๐ข_bagged and the quantification of confidences. For each pair of vertices (X^i_t-ฯ, X^j_t) we iterate through the graphs ๐ข_1,โฆ,๐ข_B and record the relative frequency of each of the possible link types. The possible link types are: no link, -->, <--, o-o, x-x (the last three only for ฯ = 0). For aggregation, different strategies are possible. Here, we aggregate by relative majority voting of the link types for each pair (X^i_t-ฯ, X^j_t) individually.
Ties are resolved by a fixed preference order over the link types, with no link ranked first; in case of a tie between --> and <-- a conflicting link x-x is returned. One benefit of aggregating at the level of individual edges is simplicity. However, it can result in cyclic aggregated graphs, which might not be desirable. The exploration of alternative aggregation methods, e.g.ย by adapting current techniques that minimize a modified structural Hamming distance of the aggregated graph to the entire set of graphs <cit.>, can be considered in future research. To quantify confidences, we employ the relative frequencies of link types. For example, if a link in ๐ข_bagged occurs in 80% of the ensemble graphs in ๐, then we have stronger confidence in this link than in a different link in ๐ข_bagged that occurs only in 45% of the ensemble graphs.
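A minimal sketch of this per-edge aggregation is given below; the link-type strings and the simplified tie-breaking are our own encoding, not the exact rule of Alg. <ref>.

from collections import Counter

def aggregate_edge(link_types):
    # Majority vote over the B link types estimated for one pair (X^i_{t-tau}, X^j_t).
    # link_types: list of B strings among '' (no link), '-->', '<--', 'o-o', 'x-x'.
    counts = Counter(link_types)
    top = max(counts.values())
    winners = {lt for lt, c in counts.items() if c == top}
    if len(winners) == 1:
        winner = winners.pop()
    elif winners == {'-->', '<--'}:
        winner = 'x-x'  # tie between the two orientations -> conflicting link
    else:
        order = ['', 'o-o', '-->', '<--', 'x-x']  # simplified preference, 'no link' first
        winner = min(winners, key=order.index)
    return winner, top / len(link_types)  # winning type and its relative frequency (confidence)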
ยง.ยง THEORETICAL RESULTS
Based on the results of numerical experiments and the asymptotic consistency of PCMCI+, we conjecture that Bagged-PCMCI+ is asymptotically consistent in the following sense.
We assume Causal Sufficiency, the Causal Markov
Condition, the Faithfulness Condition, and
consistent CI tests (oracle). We also assume stationarity, time-order, and
that the maximum time lag ฯ_maxโฅฯ^ pa_max, where ฯ^ pa_max is the maximum time lag of any parent in the model <ref>. We rule out selection variables and measurement errors.
Under Assumptions <ref> and as the number T of time steps goes to infinity, Bagged-PCMCI+ returns the correct completed partially directed acyclic graph (CPDAG), i.e., the graph with correct adjacencies and links oriented as much as possible.
We provide more details in the SM. What we are not yet able to prove is that the CI tests remain consistent when they are applied to the resampled datasets rather than the original dataset.
ยง NUMERICAL EXPERIMENTS
ยง.ยง BAGGED-PCMCI+ EVALUATION ON SYNTHETIC DATA
The numerical experiments model a number of typical challenges in time series causal discoveryย <cit.>: contemporaneous and time lagged causal dependencies, strong autocorrelation, large numbers of variables and considered time lags.
For better comparison to PCMCI+, we use a similar setup to the numerical experiments presented in <cit.>.
The synthetic data is generated according to the following additive model:
X^j_t = a_j X_t-1^j + โ_i c_i f_i(X^i_t-ฯ_i) + ฮท_t^j,   โ j โ{1, โฆ, N}
Autocorrelation coefficients a_j are uniformly drawn from [max(0, a - 0.3),a] for a as indicated in the header of the corresponding figures.
The noise terms ฮท_j are independent and identically distributed zero-mean Gaussians
๐ฉ(0,ฯ^2) with standard deviation ฯ drawn from [0.5, 2].
In addition to autodependency links, for each model L = โ1.5ยท Nโ (for N = 2, L = 1 ) cross-links are chosen with linear functional dependencies f_i(x) = x.
Coefficients c_i are drawn uniformly from ยฑ[0.1, 0.5]. 30% of the links are contemporaneous (ฯ_i = 0) and the remaining 70% are lagged links with ฯ_i drawn from {1, โฆ, 5}.
Only stationary models are considered.
We have an average cross-in-degree of d = 1.5 for all network sizes (plus an auto-dependency) implying that models become sparser for larger N.
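For illustration, a simplified generator for this setup might look as follows (lagged cross-links only; contemporaneous links, stationarity checks, and the exact link-placement rules of the paper are omitted, and function and variable names are our own):

import numpy as np

def generate_sample(T, N, a, rng, burn_in=200):
    # Linear additive model in the spirit of the equation above:
    # autodependencies plus L = floor(1.5 * N) random lagged cross-links.
    L = 1 if N == 2 else int(1.5 * N)
    a_j = rng.uniform(max(0.0, a - 0.3), a, size=N)
    sigma = rng.uniform(0.5, 2.0, size=N)
    links = []  # entries (i, tau, j, coefficient)
    for _ in range(L):
        i, j = rng.choice(N, size=2, replace=False)
        tau = int(rng.integers(1, 6))  # lagged links only in this sketch
        coeff = rng.choice([-1.0, 1.0]) * rng.uniform(0.1, 0.5)
        links.append((int(i), tau, int(j), coeff))
    X = np.zeros((T + burn_in, N))
    for t in range(5, T + burn_in):
        X[t] = a_j * X[t - 1] + sigma * rng.standard_normal(N)
        for i, tau, j, c in links:
            X[t, j] += c * X[t - tau, i]
    return X[burn_in:], links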
We consider the PCMCI+ algorithm and Bagged(B)-PCMCI+ where B is the number of bootstrap realizations (here 25, 50, 100, and 200). Both algorithms use partial correlation (ParCorr) for the conditional independence tests.
Performance is evaluated with recall (equivalent to True Positive Rate, TPR) and precision. For adjacencies, precision and recall are distinguished between lagged cross-links (i โ j), contemporaneous (ฯ = 0), and autodependency links (i = j).
Due to time order, lagged links (and autodependencies) are automatically oriented.
Performance of contemporaneous orientation is evaluated with contemporaneous orientation precision which is measured as the fraction of correctly oriented links (,โ,โ) among all estimated adjacencies, and with recall as the fraction of correct orientations among all true contemporaneous links.
False positive rates (FPR) are also shown to evaluate whether the methods control false positives at the chosen significance level ฮฑ_ PC.
In addition, F_1 scores are calculated for adjacencies and contemporaneous orientations as the harmonic mean of precision and recall: F_1 = 2 ยท (precision ยท recall) / (precision + recall). Furthermore, the fraction of conflicting links among all detected contemporaneous adjacencies is calculated.
Finally, we give the average runtimes that were evaluated on an AMD EPYC 7763 processor. All metrics (and their standard errors) are computed across all estimated graphs from 500 different additive models (and associated realizations) described in <ref> with time series length T.
Figureย <ref> depicts precision-recall curves for adjacencies as well as contemporaneous orientations obtained by varying the hyperparameter ฮฑ_PC for a model setup that is exemplary of the others (see SM). The precision-recall values of Bagged-PCMCI+ are systematically higher than those of PCMCI+, for adjacencies and even more so for contemporaneous orientations. Moreover, larger numbers of bootstrap replicates B result in enhanced performance, but we observe no strong differences between B=50 and B=200. More precision-recall curves are shown in the SM for different model parameters (autocorrelation, number of variables, and time sample size). Notably, we found that Bagged-PCMCI+ outperforms PCMCI+ more clearly in statistically more challenging regimes with higher autocorrelation a, larger number of variables N, and smaller time sample sizes T.
Figureย <ref> depicts further details for the linear Gaussian setup for varying significance level ฮฑ_PC with model parameters shown at the top right.
The right column of Fig. <ref> clearly shows that both methods (PCMCI+ and Bagged-PCMCI+) control the FPR below the significance level ฮฑ_PC (grey line), but Bagged-PCMCI+ consistently exhibits lower FPR compared to PCMCI+ for all types of links (lagged, contemporaneous, and all) and across significance levels ฮฑ_PC ranging from 0.001 to 0.1.
Thus, treating the significance level as a hyperparameter, one can use a much higher level ฮฑ_ PC for Bagged-PCMCI+ than for PCMCI+ while still controlling the FPR below ฮฑ_PC.
The adjacency results (left-most column of Fig. <ref>) for different ฮฑ_ PC show higher precision of Bagged-PCMCI+ over PCMCI+, but also slightly lower recall at a given ฮฑ_ PC. The aggregate measure F1 is then similar or larger for Bagged-PCMCI+.
For contemporaneous orientations, the performance gain is even more evident, with a stronger improvement in precision and smaller improvements in recall.
There appear to be slightly fewer conflicts for Bagged-PCMCI+. Runtimes are, as expected, higher for Bagged-PCMCI+, but they can be reduced substantially because the bootstrap replicates are embarrassingly parallel.
In the SM, we provide additional figures for varying one of autocorrelation a, number of variables N, time sample size T, and maximum time lag ฯ_max with ฮฑ_PC = 0.01. We found that Bagged-PCMCI+ seems robust to large maximum time lags ฯ_max (even when ฯ_max is much larger than the true maximum time lag of 5) for the studied sample size T = 500.
We have also combined our bagging approach with a modified PC algorithm adapted to time series. We provide results for this experiment in SM. Similar to PCMCI+, we have found that Bagged-PC enhances its base algorithm PC in terms of precision and recall.
ยง.ยง EVALUATION OF BAGGED-PCMCI+ CONFIDENCE MEASURE
We conduct numerical experiments to assess the ability of Bagged-PCMCI+ to determine a confidence degree for links in the output graph. It is essential to have a reference point for comparing our proposed confidence measure. Ideally, we would like the Bagged-PCMCI+ link frequency obtained on a single data sample to approximate closely the frequency of links along graphs obtained independently by PCMCI+ on an infinite number of data samples , which we call true link frequencies. In practice, it is only possible to approximate the true link frequencies by using a large but limited number of data samples. Here, we design two experiments to evaluate the ability of the proposed confidence measures to approximate the true link frequencies.
In the first experiment, we generate 100 different additive models (see equation <ref>). For each of these models, we generate D=500 independent data samples with the same additive model (only the noise terms change across the samples). For each of the 500 samples, causal graphs are estimated independently using PCMCI+ and Bagged-PCMCI+. For each edge, we estimate its true link frequency by calculating the frequency of the most recurrent link types across the PCMCI+ ensemble of 500 graphs. We use the mean of our proposed confidence measure (i.e. the mean of the Bagged-PCMCI+ link frequencies along the 500 Bagged-PCMCI+ graphs) to reduce the amount of noise in the estimation. We also calculate the standard deviation of the Bagged-PCMCI+ link frequency along the 500 Bagged-PCMCI+ graphs to estimate its uncertainty. If effective, we expect the Bagged-PCMCI+ confidence measure to approximately follow the estimated true link frequency.
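In code, this reference quantity can be sketched as follows (a sketch under the assumption that each estimated graph is stored as a dict mapping an edge (i, tau, j) to its link-type string):

from collections import Counter

def true_link_frequency(graphs, edge):
    # Frequency of the most recurrent link type for one edge across D independently
    # estimated graphs; graphs is a list of dicts mapping (i, tau, j) -> link-type string.
    counts = Counter(g.get(edge, '') for g in graphs)
    link_type, count = counts.most_common(1)[0]
    return link_type, count / len(graphs)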
In Fig. <ref>A, we show the results of the latter experiment for model parameters and method parameters shown at the top right and for B=1000. We plot confidence measures against the true link frequencies for different causal dependencies (lagged, contemporaneous, or all) and for existing/absent links of the ground truth graphs.
Figure <ref>A shows that the Bagged-PCMCI+ link frequency (confidence measure) follows the true link frequency. We can notice a bias for low true link frequencies as Bagged-PCMCI+ tends to overestimate the true link frequencies between 40% and 60%. This bias seems consistent across different types of causal dependencies. It is still visible when varying the number of bootstrap realizations B as shown in the SM. The one-standard-deviation error bars give indications of the uncertainties in the confidence measure and the estimated true link frequency. When taking into account both uncertainties, the error bars cross the expected x=y diagonal line for more than 99% of the estimated link frequencies. This high percentage demonstrates that the Bagged-PCMCI+ confidence measure approximates the true link frequency of PCMCI+.
In the second experiment, we quantify the average absolute error of the Bagged-PCMCI+ link frequency relative to the True Link Frequency for different causal dependencies (lagged, contemporaneous, or all) and for existing links and absent links of the ground truth model. For one additive model, the true link frequency is calculated as the frequency of the most recurrent link type of PCMCI+ graphs over D=5000 independent samples. Here the Bagged-PCMCI+ link frequency is a confidence measure for the first data sample for this additive model. We calculate the mean absolute error between the confidence measures and the estimated true link frequencies. We generate a total of 500 different additive models and average the mean absolute link frequency errors. To study the effect of the number of bootstrap realizations B on the error, we vary B from 25 to 2000.
We present the results of the mean absolute link frequency errors in Fig. <ref>B. For lagged links, auto links, and all links, the mean absolute errors are slightly above 3%, which seems relatively low. For the contemporaneous links, the 7% error demonstrates that estimating contemporaneous links is a more difficult task for the current method. It is also not yet clear whether the mean absolute error converges to zero as B goes to infinity. We observe that a larger B leads to a lower mean absolute error of Bagged-PCMCI+. This confirms that increasing B enhances the performance of Bagged-PCMCI+. The mean absolute frequency error reduces by about 10% when increasing B from 25 to 500. Unfortunately, the increase in performance comes at the cost of a 20-fold increase in runtime. This is why we recommend using a number of bootstrap realizations that is adapted to the application and available computing resources, but preferably greater than or equal to 100.
ยง CONCLUSION
In this paper, we have made two major contributions to time series causal discovery. First, we propose a bootstrap aggregation by majority voting that can be combined with any time series causal discovery algorithm. Here, we combine our bootstrap aggregation approach with the state-of-the-art time series causal discovery PCMCI+ algorithm (referred to as Bagged-PCMCI+). Through extensive numerical experiments, we show that Bagged-PCMCI+ greatly reduces the number of false positives compared to the base PCMCI+ algorithm. In addition, Bagged-PCMCI+ has a higher precision-recall regarding adjacencies and orientations of contemporaneous edges compared to PCMCI+. Our second contribution is a confidence measure for links in a time series graph, which is calculated as the link frequencies along the graphs learned on bootstrap replicates. Numerical experiments show that the proposed method gives a pertinent confidence measure for links of the output graph.
The main strengths of our method are that it can be coupled with any time series causal discovery algorithm and that it can be of substantial use in many real-world applications, especially for orienting contemporaneous causal links or when confidence measures for links are desired. The main weaknesses of our method so far are its higher computational cost and longer runtime. One solution to decrease runtime is to parallelize the bootstrap process. In addition, the current method of aggregating through majority voting has a limitation: it can lead to cyclic graphs, which are not always desirable. Therefore, exploring alternative methods of aggregation will constitute a crucial step for future research. The topic of causal discovery is fundamental, and we believe the risk of misuse is low.
ยง ACKNOWLEDGEMENTS
K.D. and V.E. received funding from the European Research Council (ERC) Synergy Grant โUnderstanding and modeling the Earth System with Machine Learning (USMILE)โ under the European Unionโs Horizon 2020 research and innovation programme (Grant agreement No. 855187). J.R. received funding from the ERC Starting Grant CausalEarth under the European Unionโs Horizon 2020 research and innovation program (Grant Agreement No. 948112). This work used resources of the Deutsches Klimarechenzentrum (DKRZ) granted by its Scientific Steering Committee (WLA) under project ID 1083.
We thank Birgit Kรผhbacher for her insightful comments that helped improve the
paper. We also thank Tom Hochsprung for his remarks and our fruitful discussions.
plainnat
ยง SUPPLEMENTARY MATERIAL
The Supplementary Material includes:
* Results of additional numerical experiments for our proposed approach Bagged-PCMCI+
* Results of numerical experiments for an alternative approach: Bagged-PC (bagging approach combined with the time-series adapted PC algorithm instead of PCMCI+)
* More details on our conjecture that Bagged-PCMCI+ is asymptotically consistent, as given in Section 3.3 of the main text.
* Python code to reproduce the numerical experiments in a separate github repository. Please refer to the README.md in the repository for more details.
ยง S1 ADDITIONAL NUMERICAL EXPERIMENTS
ยง.ยง S1.1 BAGGED-PCMCI+ METHOD EVALUATION
ยง.ยง.ยง S1.1.1 FURTHER PRECISION-RECALL CURVES
Precision-recall curves for additional model setups show the impact of a smaller number of variables of N=5 instead of N=10 in the main text (Fig. <ref>), increased sample size T from 200 to 500 for N=5 (Fig. <ref>), and a decreased autocorrelation coefficient a from 0.95 to 0.6 for T=500 and N=5 (Fig. <ref>). For all these model setups we also provide the individual precisions, recalls, F1-scores plots for adjacencies and contemporaneous orientations, as well as the runtimes and number of conflicts for varying ฮฑ_ PC (see Figs. <ref>, <ref>, and <ref>).
Across all these model setups, for a given ฮฑ_ PC Bagged-PCMCI+ has similar recall and higher precision as compared to PCMCI+, particularly in orienting contemporaneous links. Moreover, these improvements are stronger in the more challenging settings (high autocorrelation a, short time sample size T, and high number of variables N).
While, for a given ฮฑ_ PC, PCMCI+ can have higher adjacency recall, the fair comparison here is the area under the precision-recall curve, which is higher for Bagged-PCMCI+. This implies that one can always choose a higher ฮฑ_ PC to obtain a better recall with Bagged-PCMCI+, while still retaining the same or better precision.
ยง.ยง.ยง S1.1.2 FURTHER EXPERIMENTS
Here we study in more detail the impact of different model parameters on the performance of Bagged-PCMCI+. We vary the following model parameters: autocorrelation a (Fig. <ref>), number of variables N (Fig.ย <ref>), time sample size T (Fig.ย <ref>), and maximum time lag ฯ_max (Fig.ย <ref>) for a fixed significance level ฮฑ_PC=0.01. The default model parameters, if not varied in the experiment, are a=0.95, T=500, and N=5.
For all setups, these numerical experiments confirm the lower FPR of Bagged-PCMCI+ over PCMCI+ for a fixed significance level ฮฑ_PC=0.01. For increasing autocorrelation a, we observe a slight gain in the orientation F1-score of Bagged-PCMCI+ over PCMCI+.
For small sample size T, PCMCI+ has very slightly higher adjacency F1-scores, but lower orientation F1-scores.
For increasing number of variables N we observe the largest gains in both adjacency and orientation F1-scores for Bagged-PCMCI+ over PCMCI+.
There is almost no change in both Bagged-PCMCI+ and PCMCI+ for increasing the maximum time lag ฯ_max, illustrating the robustness of these methods to this hyperparameter.
ยง.ยง S1.2 BAGGED-PCMCI+ CONFIDENCE MEASURE EVALUATION
Here we evaluate our proposed confidence measures for varying significance level ฮฑ_PC to study whether the bootstrapped confidence estimates approximate the estimated true link frequencies for ฮฑ_PCโ 0 (Fig. <ref>). To reduce computational time, the setup here was slightly modified compared to the main text. While we used B=1000 and L=3 (number of cross-links) in the main body of the paper, here we set B=250 and L=5.
We vary ฮฑ_PC from 0.01 to 10^-5 to study the mean absolute error between the bootstrapped confidence estimates and the estimated true link frequencies.
We summarize the results regarding mean absolute error in Tab.ย <ref>. There does seem to be a decrease in error from ฮฑ_ PC=10^-2 to ฮฑ_ PC=10^-4 across all types of link frequencies, while there are mixed results for ฮฑ_ PC=10^-5. There is a visible recurrent positive bias for low values of the true frequencies (approximately 40-60%): The bootstrapped confidence measures tend to consistently overestimate the true link frequencies for this range.
More research is needed to clarify whether the bootstrap confidence estimates do approximate the true link frequencies, or whether there are persistent biases. In this case a question would be, what this bias depends on (number of variables, graph structure, SCM properties, sample size, etc).
ยง.ยง S1.3 EXPERIMENTS FOR BAGGED-PC
The previous numerical results have shown that the bagging approach leads to enhanced precision-recall when paired with PCMCI+. In order to demonstrate that this conclusion applies not only to PCMCI+ but also to other causal discovery methods, we carried out further experiments with the PC algorithm. That is, we combined our bagging approach with the PC algorithm (referred to as Bagged-PC) and compared its performance against the base PC algorithm. Both the base PC algorithm and Bagged-PC are adapted to time series as given in <cit.>.
The results demonstrate that the gain using a bagging approach is similar here: Bagged-PC shows lower FPR and higher precision-recall compared to PC, especially for contemporaneous orientations (see Fig. <ref> and Fig. <ref>).
Hence, our results show that combining a causal discovery method with our bagging approach can considerably improve the performance compared to the base causal discovery method, albeit at the expense of increased computational runtime (if not parallelized).
ยง S2 DETAILS ON THE CONJECTURE OF ASYMPTOTIC CONSISTENCY
In section 3.3 of the main text, with Conjecture 1 we conjecture that Bagged-PCMCI+ is asymptotically consistent under the assumptions formulated with Assumptions 1. Here, we give more details on this conjecture as well as on which step we are not yet able to prove.
To begin, note that Assumptions 1 is almost equivalent to โAssumption 1โ from <cit.>. The only difference is that Assumption 1 from the main text requires the Faithfulness Condition <cit.> whereas โAssumptions 1โ from <cit.> requires the strictly weaker Adjacency Faithfulness Condition <cit.>. In fact, we could modify Assumption 1 to also require Adjacency Faithfulness instead of Faithfulness.
As proven in <cit.>, the PCMCI+ algorithm is asymptotically consistent under โAssumptions 1โ from <cit.>. Hence, PCMCI+ is also asymptotically consistent under Assumption 1.
As explained in Section 3.2 of the main text, the output of Bagged-PCMCI+ is the graph obtained by a majority vote across all of the B bootstrap graphs that is cast individually for every pair of variables. Since every one of the B bootstrap graphs is the result of PCMCI+ on one of the bootstrap datasets and since PCMCI+ asymptotically returns the correct graph under Assumption 1 (asymptotic consistency), it is tempting to conclude that all of the B bootstrap graphs asymptotically are the correct graph under Assumption 1. If this conclusion is true, then Bagged-PCMCI+ asymptotically returns the correct graph under Assumption 1, that is, then Bagged-PCMCI+ is asymptotically consistent under Assumption 1.
However, as thankfully pointed out to us by Tom Hochsprung, this line of reasoning might have a loophole: Assumption 1 from the main text and โAssumptions 1โ from <cit.> require consistent conditional independence (CI) tests. This assumption means that, asymptotically as the number of time steps and hence the number of samples used for CI testing goes to infinity, the CI tests make no errors. Making no errors means to always judge independence if, in fact, independence is true and to always judge dependence if, in fact, dependence is true. However, this assumption refers to the original dataset which is sampled from the true data-generating distribution. As opposed to that, the B bootstrap datasets are sampled from the empirical distribution defined by the original dataset. Consequently, the B bootstrap datasets are sampled from a different distribution than the original dataset. It is thus not immediately clear whether, when applied to a bootstrap dataset, the CI tests asymptotically always correctly detect (in-)dependence as defined by the true data-generating distribution. For example, even if in the true data-generating distribution independence holds, then for any finite number of time steps one almost surely has dependence in the empirical distribution defined by the original dataset. While the magnitude of this dependence in the empirical distribution converges to zero as the number of time steps goes to infinity, we have not yet proven that this fact implies that CI tests on the bootstrap datasets asymptotically correctly detect (in-)dependence as defined by the true data-generating distribution.
One way to tackle this could be to investigate the consistency for the limiting case that ฮฑ_ PCโ 0. One hint that this could work is our finding in Tab.ย <ref> that the mean absolute error between the bootstrapped confidence estimates and the estimated true link frequencies seems to partly converge.
Further, given the strong empirical performance of Bagged-PCMCI+ as seen in our numerical experiments, we conjecture that such an argument can finally be made and that, thus, Bagged-PCMCI+ is asymptotically consistent.
Moreover, we note that the above line of reasoning argues that asymptotically all of the B bootstrap graphs are the correct graph. While sufficient to get asymptotic consistency of Bagged-PCMCI+, this circumstance is stronger than needed: Even if not all of the B bootstrap graphs are asymptotically the correct graph, then Bagged-PCMCI+ might still recover the correct graphs by means of the majority vote. However, making an argument along these lines would require a significantly different proof.
Lastly, even if the conjecture of asymptotic consistency turns out to be false, then our extensive numerical experiments still demonstrate the usefulness of Bagged-PCMCI+ in the case of finite samples (that is, in the case of finitely many time steps).
|
http://arxiv.org/abs/2306.11464v1
|
20230620113901
|
One-to-Many Spectral Upsampling of Reflectances and Transmittances
|
[
"Laurent Belcour",
"Pacal Barla",
"Gael Guennebaud"
] |
cs.GR
|
[
"cs.GR",
"68U05",
"I.3.7"
] |
[Teaser figure omitted: (a) photograph of a tourmaline gem; (b) one-to-many spectral upsampling, showing Spectrum A (Usambara effect) and Spectrum B (no color effect) together with their transmission spectra; (c) the corresponding chromaticities with respect to depth, plotted inside the sRGB gamut for depths ranging from 1 to 10.]
Surprising color changes may occur due to the path length travelled by light through some materials.
This is shown in (a) with the Usambara effect observed in tourmaline: the same gem appears green or red depending on path length, due to a specific transmission spectrum (see inset).
We introduce a method for finding such non-generic spectra.
This requires performing one-to-many spectral upsampling (b), which generates multiple spectra with a unique controllable color at a given path length, but varying colors at further lengths.
The chromaticity of each spectrum as a function of path lengths is shown in (c).
One-to-Many Spectral Upsampling of Reflectances and Transmittances
Laurent Belcour, Pascal Barla, Gael Guennebaud
===================================================================
Spectral rendering is essential for the production of physically-plausible synthetic images, but requires introducing several changes in the content generation pipeline.
In particular, the authoring of spectral material properties (e.g., albedo maps, indices of refraction, transmittance coefficients) raises new problems.
While many computer graphics methods exist to upsample an RGB color to a spectrum, they all provide a one-to-one mapping.
This limits the ability to control interesting color changes such as the Usambara effect or metameric spectra.
In this work, we introduce a one-to-many mapping and show how to explore the set of all spectra reproducing a given input color.
We apply this method to different color-changing effects such as vathochromism (the change of color with depth) and metamerism.
ยง INTRODUCTION
Spectral rendering has been increasingly used in recent years, due to raising expectations in photo-realism in cinematographyย <cit.>, or to applications that require predictive results such as in architecture.
However, the generation of spectral material properties presents a challenge for artists and designersย <cit.>.
To ease the edition of reflectance and transmittance spectra, several spectral upsampling methods have been introduced in computer graphicsย <cit.>.
They produce spectra from colors, ensuring that physical bounds are achieved (e.g., reflectances must lie in the [0,1] range).
All of these methods are restricted to produce a one-to-one conversion: one RGB triplet converts to a single spectrum.
This limitation restricts the possibilities that spectral rendering offers.
Indeed, unlike RGB materials, spectral materials offer the possibility to produce
subtle color effects, such as metamerism โ a change of color due to different illuminantsย <cit.>.
In the optics community, Metameric blacksย <cit.> have been introduced to explore the space of metamers.
In this approach, metameric spectra achieving a given desired color are sampled from a null-space in the target color space.
Unfortunately, this requires manipulating physical constraints in a high-dimensional null-space, which significantly complicates artistic control.
Most importantly, this null-space approach is not easily adapted to deal with non-linear color changes, such as those observed in the Usambara effect (see Figureย <ref>ย (a)) โ a surprising change of color due to the path length travelled by light in tourmaline gems.
In this paper, we introduce a novel one-to-many approach that enables artists to design non-generic spectra with controlled color effects.
The key idea is to build reflectance or transmittance spectra using a small set of basis functions forming a partition of unity (PU), and to express them in chromaticity space.
We primarily target non-linear effects: our representation allows us to find many spectra that achieve the same target chromaticity at a unit optical depth, while providing control over the chromaticities at further depths.
This is shown in Figureย <ref>(b,c) for a pair of spectra.
A key observation on which we elaborate in Sectionย <ref> is that a PU spectral representation is linked to generalized barycentric coordinates in chromaticity space.
As demonstrated in Sectionย <ref>, exploring all the possible barycentric coordinates reconstructing the same target chromaticity point is equivalent to exploring the space of all spectra producing the same chromaticity when integrated with respect to color matching functions of the human visual system.
We then use this geometric analogy for the design of the PU basis in Sectionย <ref>, where we show how to strike a balance between spectral smoothness and color expressivity.
With our one-to-many spectral upsampling approach, we are able to generalize the Usambara effect to any non-generic spectrum that exhibits changes of color with depth, which we suggest calling vathochromism, derived from ancient Greek vathos (depth) and chroma (colour).[We reserve the usage of the term "Usambara effect" for the typical green-to-red color shift observed in tourmaline gemstones.]
We show in Sectionย <ref> how to build a parametric system in which a user can pick spectra with specific constraints (such as reproducing two given chroma for different optical depths).
The same representation also provides control over metamerism, as shown in Sectionย <ref>.
We further discuss the differences with Metameric Blacks in Sectionย <ref>.
ยง PREVIOUS WORK
ยง.ยง Color changes in Nature
Many different kinds of natural materials exhibit changes of color, depending for instance on the angle of view (goniochromism), on temperature (thermochromism) or on exposure to light (photochromism).
In all those cases, the reflected or transmitted spectrum is itself changed either due to an alteration of the material itself, or to viewing conditions.
In this paper, we are instead interested in materials that exhibit color changes despite the fact that their spectral reflectance or transmittance does not change.
*Metamerism.
A common example of such materials are those that change color with a change of illumination, called metamers.
Two metameric materials can look the same under one illuminant, but will differ when lit by another illuminant.
This is explained by the fact that the human visual system integrates the product of light and material on photo-receptors, which is a many-to-one mapping.
Another related example of the impact of the illuminant on the appearance of objects is the Alexandrite effectย <cit.>.
Alexandrite gems are known to change from green when lit by sunlight to red when lit by candle light.
This particular effect has been used in computer graphics by Bergner et al.ย <cit.> for visualization purposes.
*Usambara effect. The Usambara effect was first described for a particular tourmaline found in the Umba valley in Tanzaniaย <cit.>.
It was described as a change of color (from green to red) with an increase of the optical depth of the material. It was later found that other materials (such as topaz and amber) exhibit such behaviour <cit.>.
In this work, we use the term vathochromism for such changes of color with depth.
ยง.ยง Computing Metamers in Optics
*Metameric blacks
One way to generate a pair of metameric spectra is to add to a first spectrum a spectral curve that corresponds to a black color (i.e., a zero triplet) in the target color spaceย <cit.>.
From the point of view of linear algebra, a metamer is then a point in the null-space of the color space matrixย <cit.>.
While this formalism permits the generation of arbitrary metamers, it requires tracking hard constraints: a reflectance spectrum must take its values in the [0,1] range.
With finely-discretized spectra, those constraints generate a convex hull of valid spectra in a high dimensional space.
Alternate methods can avoid this dimensionality issue by using a blending of measured spectraย <cit.>.
However, this comes at a cost: each reconstructed spectrum is necessarily within the convex hull of the measured spectra.
In particular, one can only reproduce luminance within this convex hull.
*Applications
Metameric Blacks have been successfully used for camera calibrationย <cit.>, reflectance acquisitionย <cit.>, or printingย <cit.>.
However, this approach is too limiting for the artistic control of spectral assets, which is our main focus in this work.
Furthermore, working with discretized spectra has the additional drawback that spectral maxima are limited to occur at spectral bin locations, potentially preventing interesting effects.
ยง.ยง Spectral Representations in Computer Graphics
*Spectral upsampling.
In computer graphics, the use of a spectral renderer requires converting between colors and spectra <cit.>. Usually, the aim is to convert RGB textures (such as albedo maps, environment maps) to spectral textures with one spectral curve per texel, i.e., a one-to-one mapping <cit.>.
The difference between those approaches mostly resides in how spectra are built.
For instance, <cit.> and <cit.> optimize smooth spectra, <cit.> and <cit.> use a database to project colors, while <cit.> build a parametric family of spectra.
All these methods ensure that the resulting upsampled spectra are physically-plausible: they remain in the [0,1] range along the spectral dimension.
*Spectral Compression.
The storage of spectral curves raises additional difficulties.
Parametric models (such as the one of <cit.>) limit storage requirements, even allowing for on-the-fly conversion of RGB assets.
However, they severely restrict the family of spectra that can be represented.
An alternative is to decompose spectra using momentsย <cit.>.
With this approach, it is possible to reconstruct a large family of spectra while keeping memory requirements in check.
The storage of spectra is orthogonal to our work, as we could choose to use any compression method to store the spectra produced by our approach.
*Fluorescence.
Spectra defined in the visible range can be extended to incorporate fluorescence effectsย <cit.>.
While it requires dedicated rendering algorithmsย <cit.>, it expands the range of achievable appearancesย <cit.>. While compact representations for fluorescent spectra have been recently introduced in computer graphics (e.g.,ย <cit.>), we restrict our approach to spectra in the visible range and put fluorescence aside.
ยง.ยง Scope of this Work
Our goal is to extend the computer graphics toolbox with a one-to-many spectral upsampling method tailored to reflectance and transmittance.
Contrary to previous work in optics, we do not rely on convex combinations of measured spectra, since our focus is on the artistic control of color-changing effects, for vathochromism and metamerism alike.
As described in the next section, we overcome the difficulties raised by the null-space approach of Metameric Blacks by relying on a spectral Partition of Unity.
ยง SPECTRAL PARTITION OF UNITY
In this section, we use a Partition of Unity to define a space of smooth spectra and show how these spectra are related to generalized barycentric coordinates in chromaticity space.
*Partition of Unity.
A Partition of Unity (PU) is a set of K basis functions B_k : U → ℝ with k ∈ [0, K-1] such that:
∑_k B_k(x) = 1,  ∀ x ∈ U.
We can use a weighted sum of these basis functions to reconstruct or approximate functions.
A notable property of a PU is that bounded weights yield bounded reconstructed functions:
∀ k ∈ [0, K-1], w_k ∈ [0,1]  ⟹  f(x) = ∑_k w_k B_k(x) ∈ [0,1].
*Reconstructing Transmission Spectra.
We use a PU created from non-uniform B-splines to produce reflectance or transmittance spectra.
The input domain is the set of visible wavelengths U = [U_0, U_1] = [385, 700] nm.
The energy conservation constraint on reflectance and transmittance spectra is readily met through Equation <ref>.
We will discuss the choice of the number K of B-spline basis functions, their degree and the positions of their knots later in Sectionย <ref>.
In this section, for the purpose of illustration, we rely on K=5 bases of degree 2 and uniformly spaced knots with knots at the boundaries of U having a multiplicity of 3, as shown in Figureย <ref>(left).
We also work with the sRGB color space.
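To make the construction concrete, the following sketch builds such a partition of unity from B-splines with scipy; it is an illustrative reimplementation under the K=5, degree-2 setting of this section, not the authors' code, and the uniform knot layout is an assumption.

import numpy as np
from scipy.interpolate import BSpline

U0, U1, K, DEG = 385.0, 700.0, 5, 2     # visible range (nm), K bases of degree 2

# Open knot vector: boundary knots with multiplicity 3 so the basis sums to one on [U0, U1].
breaks = np.linspace(U0, U1, K - DEG + 1)
knots = np.concatenate(([U0] * DEG, breaks, [U1] * DEG))

def basis(k, lam):
    """Evaluate the k-th quadratic B-spline basis function at wavelengths lam."""
    coeffs = np.zeros(K)
    coeffs[k] = 1.0
    return BSpline(knots, coeffs, DEG)(lam)

lam = np.linspace(U0, U1, 512)
B = np.stack([basis(k, lam) for k in range(K)])   # (K, 512) partition of unity
assert np.allclose(B.sum(axis=0), 1.0)            # sum_k B_k(x) = 1 on U

# Bounded coefficients yield a bounded (energy-conserving) transmittance spectrum.
w = np.array([0.1, 0.8, 0.3, 0.6, 0.2])
f = w @ B                                         # f(lambda) stays in [0, 1]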
*Geometric interpretation
When integrated with respect to the CIE sensitivity functions x̄(λ), ȳ(λ) and z̄(λ) shown in Figure <ref>(right), each basis function corresponds to a XYZ color:
𝐛_k = [B_k,X, B_k,Y, B_k,Z]^⊤ = ∫ B_k(λ) 𝐬(λ) dλ,
with 𝐬(λ) = [x̄(λ), ȳ(λ), z̄(λ)]^⊤.
Due to the linearity of reconstruction, a weighted sum of PU basis functions yields a XYZ color that is a weighted sum of basis XYZ colors:
𝐅 = [F_X, F_Y, F_Z]^⊤ = ∫ f(λ) 𝐬(λ) dλ = ∑_k w_k 𝐛_k.
𝐅 may then be converted to the xyY color space.
Using Equation <ref>, we directly obtain its luminance F_Y = ∑_k w_k B_k,Y.
Its chromaticity 𝐜 is slightly more complicated.
If we write |F| = F_X + F_Y + F_Z and similarly |B_k| = B_k,X + B_k,Y + B_k,Z, it is given by:
𝐜 = [F_X, F_Y]^⊤ / |F| = ∑_k w_k [B_k,X, B_k,Y]^⊤ / ∑_l w_l |B_l|,
which may be rewritten as:
𝐜 = ∑_k a_k 𝐜_k,  with  a_k = w_k |B_k| / ∑_l w_l |B_l|,
where the 𝐜_k = [B_k,X, B_k,Y]^⊤ / |B_k| denote basis chromaticities.
Our key observation is thus that the chromaticity 𝐜 of a spectrum given by a vector 𝐰 of basis coefficients is obtained as a linear combination of basis chromaticities 𝐜_k, where the weights a_k correspond to homogeneous barycentric coordinates.
This is illustrated in Figure <ref>, where the basis chromaticities 𝐜_k form a gamut of colors achievable through a given choice of basis functions B_k(λ).
Depending on that choice, the gamut may only partially overlap the sRGB gamut: this means that there is no 𝐰 that can achieve a chromaticity outside the basis gamut.
A vector 𝐰 with only two non-zero contiguous coefficients yields a unique chromaticity point on the gamut boundary, since then only a contiguous pair of barycentric coordinates is non-zero.
However, in all other cases, there will be multiple coefficient vectors 𝐰 that map to the same chromaticity point 𝐜.
This is because for K>3, the set of a_k describes generalized barycentric coordinates of 𝐜, and is thus not unique.
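A small numerical illustration of this geometric view is sketched below, continuing the previous snippet; the routine cmf returning the CIE 1931 color matching functions sampled at lam is an assumption and must be supplied, e.g., from tabulated data.

import numpy as np

def basis_colors(B, lam, cmf):
    """XYZ colors b_k, norms |B_k| and chromaticities c_k of the PU basis functions."""
    S = cmf(lam)                                                # (3, N): xbar, ybar, zbar
    b = np.trapz(B[:, None, :] * S[None, :, :], lam, axis=-1)  # (K, 3) basis XYZ colors
    norm = b.sum(axis=1)                                        # |B_k|
    c = b[:, :2] / norm[:, None]                                # (K, 2) basis chromaticities
    return b, norm, c

def forward_map(w, b, norm):
    """Coefficients w -> (chromaticity, luminance) of the reconstructed spectrum."""
    a = w * norm / np.dot(w, norm)       # homogeneous barycentric coordinates a_k
    F = w @ b                            # XYZ color F = sum_k w_k b_k
    return F[:2] / F.sum(), F[1]         # chromaticity c and luminance F_Y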
In the next section, we show how to invert this many-to-one mapping.
ยง ONE-TO-MANY MAPPING
Our goal in this section is to find the equivalence class of basis coefficients 𝐰 that yields a target chromaticity 𝐜 and luminance F_Y.
We do this in two stages: we first find the set of generalized barycentric coordinates that achieves the target chromaticity 𝐜; then we show how this maps to an equivalence class of basis coefficients, a subset of which achieves the target luminance F_Y.
ยง.ยง Achieving chromaticity
A first condition is that 𝐜 must lie inside the basis gamut or on its boundary.
The target chromaticity may then be expressed in terms of generalized homogeneous barycentric coordinates, using:
[[ 1, 1, ⋯, 1 ; b_0,x, b_1,x, ⋯, b_K-1,x ; b_0,y, b_1,y, ⋯, b_K-1,y ]] [[ a_0 ; a_1 ; ⋯ ; a_K-1 ]] = [[ 1 ; c_x ; c_y ]],
with 𝐜_k = [b_k,x, b_k,y]^⊤ and 𝐜 = [c_x, c_y]^⊤.
Since 𝐜 is in the basis gamut, there is at least one triplet of bases whose chromaticity coordinates define a triangle that contains 𝐜.
Let us assume that these bases are the first three (one can always re-order the bases to yield such a configuration).
One solution to Equation <ref> is then [a_0, a_1, a_2, 0, ⋯, 0]^⊤ = [𝐚_T^⊤, 0]^⊤, with 𝐚_T the vector of triangular barycentric coordinates.
Other solutions may then be obtained by adding perturbations to that vector, which may be written [a_0 - Δa_0, a_1 - Δa_1, a_2 - Δa_2, a_3, ⋯, a_K-1]^⊤ = [(𝐚_T - Δ𝐚)^⊤, 𝐚_F^⊤]^⊤, where 𝐚_F is a (K-3)-dimensional vector of barycentric coordinates that represents degrees of freedom to navigate the space of solutions, and Δ𝐚 is a 3D offset vector used to preserve the homogeneous barycentric coordinate constraint.
Let us now rewrite Equation <ref> in the following matrix form: [T F] 𝐚 = [1, 𝐜^⊤]^⊤, where T (resp. F) is the matrix corresponding to the first 3 (resp. last K-3) columns of the left-hand-side matrix, and 𝐚 is the vector of generalized barycentric coordinates.
Since we also have T 𝐚_T = [1, 𝐜^⊤]^⊤, it follows that:
T Δ𝐚 = F 𝐚_F.
Now in order to navigate the space of solutions, we need bounds on 𝐚_F.
Because all its coefficients are barycentric coordinates, we already know that 0 ≤ 𝐚_F ≤ 1, with the lower bound trivially corresponding to a zero offset vector (see Equation <ref>).
The upper bound is not a sufficient condition since we must also make sure that 0 ≤ 𝐚_T - Δ𝐚 ≤ 1, or in terms of the offset vector: 𝐚_T - 1 ≤ Δ𝐚 ≤ 𝐚_T.
Using Equation <ref> yields the following vector inequality:
𝐚_T - 1 ≤ M 𝐚_F ≤ 𝐚_T,
where M = T^-1 F is a 3 × (K-3) matrix.
Note that since M may contain negative coefficients, the lower bound in Equation <ref> may end up being used to define the upper bound on 𝐚_F.
We rely on an iterative approach to characterize the whole set of solutions by considering each coefficient of 𝐚_F in turn.
Let us start with a_3, and assume that a_4 = ⋯ = a_K-1 = 0.
Equation <ref> now becomes 𝐚_T - 1 ≤ 𝐦_0 a_3 ≤ 𝐚_T, with 𝐦_0 = [m_00, m_10, m_20]^⊤ the first column of M.
The constraints on offsets are then met by navigating a_3 in the [0, a_3^max] interval, with the upper bound given by:
a_3^max = min_{i ∈ {0,1,2}} (a_i + H(m_i0) - 1) / m_i0.
The Heaviside function H(m) is used to take the sign of each matrix component into account: when m ≤ 0 (resp. m > 0), the lower (resp. upper) bound is considered.
Having chosen the n-1 first coefficients of 𝐚_F, the upper bound for the nth coefficient, assuming the remaining ones are zero, is obtained with a similar formula:
a_3+n^max = min_{i ∈ {0,1,2}} (a_i + H(m_in) - 1 - ∑_{l=0}^{n-1} m_il a_3+l) / m_in.
In the general case, a valid solution ∀ n ∈ [0..K-4] must ensure:
a_3+n ≤ min_{i ∈ {0,1,2}} (a_i + H(m_in) - 1 - ∑_{l ≠ n} m_il a_3+l) / m_in.
For each vector 𝐚_F, the offset vector is computed as Δ𝐚 = M 𝐚_F, which yields a generalized homogeneous barycentric coordinate vector 𝐚 that achieves the target chromaticity 𝐜.
Figure <ref> illustrates that process.
A triangle that encloses 𝐜 is first selected; then the space of degrees of freedom 𝐚_F is sampled randomly to yield a family of spectra.
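A possible implementation of this sampling stage is sketched below; it assumes, as in the text, that the bases have been re-ordered so that the first three basis chromaticities enclose the target, and it is an illustration rather than the authors' implementation.

import numpy as np

def sample_barycentric_class(c, basis_chroma, n_samples=100, seed=0):
    """Sample generalized barycentric coordinate vectors a reproducing chromaticity c."""
    rng = np.random.default_rng(seed)
    K = len(basis_chroma)
    A = np.vstack([np.ones(K), basis_chroma.T])        # 3 x K system of the equation above
    T, F = A[:, :3], A[:, 3:]                          # enclosing triangle / free columns
    a_T = np.linalg.solve(T, np.array([1.0, c[0], c[1]]))
    M = np.linalg.solve(T, F)                          # M = T^{-1} F, 3 x (K-3)
    H = (M > 0).astype(float)                          # Heaviside of each entry
    samples = []
    for _ in range(n_samples):
        a_F = np.zeros(K - 3)
        for n in range(K - 3):                         # iterative upper bounds a_{3+n}^max
            rest = M[:, :n] @ a_F[:n]
            num, col = a_T + H[:, n] - 1.0 - rest, M[:, n]
            mask = np.abs(col) > 1e-12                 # rows with a vanishing entry give no bound
            upper = np.min(num[mask] / col[mask]) if mask.any() else 1.0
            a_F[n] = rng.uniform(0.0, max(upper, 0.0))
        samples.append(np.concatenate((a_T - M @ a_F, a_F)))   # offset Delta a = M a_F
    return np.array(samples)                           # each row sums to 1 and maps to c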
ยง.ยง Achieving luminance
Given a vector of generalized barycentric coordinates 𝐚, we now need to invert Equation <ref> to retrieve basis coefficients 𝐰.
Since any pair of basis coordinates (a_i, a_j) is related by an equation of the form a_i w_j |B_j| = a_j w_i |B_i|, the corresponding basis coefficients span a KD line.
If we pick an arbitrary non-zero barycentric coordinate, say a_0, then 𝐰 may be expressed as a function of w_0:
𝐰(w_0) = [[ 1 ; a_1 |B_0| / (a_0 |B_1|) ; ⋯ ; a_K-1 |B_0| / (a_0 |B_K-1|) ]] w_0 = L w_0,  w_0 ∈ (0, w_0^max],
where the upper bound w_0^max = min{ 1, a_0 |B_1| / (a_1 |B_0|), ⋯, a_0 |B_K-1| / (a_K-1 |B_0|) } is set to ensure that 0 ≤ 𝐰 ≤ 1.
The KD line of solutions may now be restricted to a single solution by the target luminance constraint, which we write 𝐰(w_0)^⊤ 𝐛_y = F_Y, with 𝐛_y = [B_0,Y, ⋯, B_K-1,Y]^⊤ the vector of Y coefficients of basis colors.
Using Equation <ref>, the value of w_0 that potentially achieves the target luminance F_Y is:
w_0^⋆ = F_Y / (L^⊤ 𝐛_y).
F_Y is effectively achieved if and only if w_0^⋆ ≤ w_0^max.
For that reason only a (possibly empty) subset of barycentric coordinates 𝐚 permits achieving the target luminance.
This is shown in Figure <ref>: only fully-opaque barycentric samples and their associated spectra achieve F_Y in practice.
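The sketch below, with the same assumed arrays as before (norm for the |B_k| and b_y for the B_k,Y), illustrates this inversion and the luminance test; it is a simplified rendition of the procedure, not the reference implementation.

import numpy as np

def coefficients_from_barycentric(a, norm, b_y, F_Y):
    """Recover basis coefficients w on the K-D line defined by a, targeting luminance F_Y."""
    nz = int(np.argmax(a > 0))                 # an arbitrary non-zero coordinate a_0
    L = a * norm[nz] / (a[nz] * norm)          # w(w_0) = L w_0, with L[nz] = 1
    w0_max = np.min(np.where(L > 1e-12, 1.0 / np.maximum(L, 1e-12), np.inf))  # keep w in [0, 1]
    w0_star = F_Y / float(np.dot(L, b_y))      # w_0 that would reach the target luminance
    if w0_star <= w0_max:
        return L * w0_star, True               # chromaticity and luminance both achieved
    return L * w0_max, False                   # chromaticity only; luminance falls short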
However, this subset may be enlarged.
Indeed, relying on the bounded property of PU (Equationย <ref>) remains conservative: in some instances, we may use basis coefficients greater than 1 and still obtain energy-conserving spectra.
This means that 𝐰 may be scaled in a post-process to increase the luminance of the reconstructed spectrum.
Assuming that F_Y is not achieved (i.e., 𝐰(w_0^max)^⊤ 𝐛_y < F_Y), we may thus obtain a closer solution in terms of luminance by using:
𝐲(𝐰) = 𝐰(w_0^max) / max( f^max, 𝐰(w_0^max)^⊤ 𝐛_y / F_Y ),
where f^max = max_λ f(λ), and 1/f^max represents the margin by which the spectrum f(λ) is allowed to be scaled.
Finally, it would be useful to know a priori whether there exists at least one vector 𝐰 of basis coefficients that achieves both the target chromaticity and luminance.
A conservative solution is to rely on the vector 𝐰̂ that maximizes luminance under the constraint given by Equation <ref>, then to check whether 𝐲(𝐰̂)^⊤ 𝐛_y ≥ F_Y.
The vector 𝐰̂ is found by solving the following linear programming problem:
𝐰̂ = argmax_𝐰 𝐰^⊤ 𝐛_y,  subject to  0 ≤ 𝐰 ≤ 1  and  A 𝐰 = 0,
where A is obtained by rewriting Equation <ref> in terms of 𝐰:
A = [T F] diag(|𝐁|) - [1, 𝐜^⊤]^⊤ |𝐁|^⊤,
where we have used 𝐚 = diag(|𝐁|) 𝐰 / (|𝐁|^⊤ 𝐰) and |𝐁| = [|B_0|, ⋯, |B_K-1|]^⊤.
An exact solution could be obtained by replacing 𝐰̂ by 𝐲(𝐰̂) in Equation <ref>, at the cost of a more expensive computation.
Once we have found 𝐰̂ (green spectrum in Figure <ref>), it is trivial to retrieve the corresponding barycentric coordinates 𝐚 and degrees of freedom 𝐚_F (green diamond in Figure <ref>).
We use 𝐰̂ by default during editing (see the supplemental video).
ยง BASIS DESIGN
Until now, we have relied on a small number of basis functions (K=5) for illustration purposes.
Increasing the number K of bases has the effect of increasing the size of the equivalence class.
As shown in Figureย <ref>,
this is due to the basis gamut, which encompasses a larger area of the chromaticity diagrams when K is increased.
This has two effects on the expressivity of a given basis.
First, any chromaticity in a given RGB gamut (we consider sRGB and Adobe Wide Gamut RGB in the following) can be achieved when it is encompassed by the basis gamut.
Increasing the number of bases extends the latter as shown in Figureย <ref> (top row).
Second, chromaticities on the gamut boundary map to a single pair of non-zero barycentric coordinates; hence a basis gamut larger than the chosen RGB gamut ensures to avoid these singular equivalence classes.
However, increasing the number K of bases cannot be done without limits, since plausible reflectance and transmittance spectra should be smooth.
In addition, a smaller number of bases might be desirable for memory considerations.
In this section, from the geometric interpretation of previous sections, we design a set of PU basis functions that finds a tradeoff between expressivity and smoothness constraints.
We keep a degree of 2 throughout, as there is no need to ensure C^2 (or higher) continuity, and low-degree splines protect us from over-fitting issues.
*Knots warping
Besides increasing K, we may also control the position of basis knots.
To this end, we use a two-parameter family of warping functions to alter the uniform distribution of knots along the U = [U_0, U_1] interval.
We use a warping function C_s,p : [0,1] → [0,1] introduced by <cit.>:
C_s,p(x) = x^c / p^(c-1) if x ∈ [0, p],  and  C_s,p(x) = 1 - (1-x)^c / (1-p)^(c-1) otherwise,
with c = 2/(1+s) - 1.
The two parameters (s,p) ∈ [0,1]^2 control the strength of warping and the position where most of the warping occurs.
A sequence of warped knots {κ_k} is then produced using κ_k = U_0 + C_s,p(u_k) (U_1 - U_0), where the u_k form a uniform sequence of values in the [0,1] range.
Even though a set of K B-spline basis functions of order 2 requires K+3 knots, we ignore the first and last two since the boundary knots have a multiplicity of 3.
Hence we obtain K-1 knots, with κ_0 = U_0 and κ_K-1 = U_1 as desired.
Figureย <ref> shows the effect of knots warping on K=7 basis functions and the corresponding basis gamut, demonstrating that knots warping helps achieve a wider gamut without having to increase the number K of basis functions.
The figure also shows that modifying the first and last two knots to be outside of the U interval has a negligible effect on the basis gamut, which is due to the small values of color matching functions around these boundaries (see Figureย <ref>(left)).
We choose to offset these boundary knots by 100nm outside of U, which produces more physically-plausible results when observed outside of the visible range since the reconstructed spectra then gently fade to zero outside of U.
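A sketch of this knot-warping step is given below; it relies on the reconstruction of C_s,p above, and the 100 nm boundary offset follows the choice stated in this paragraph.

import numpy as np

def warp(x, s, p):
    """Two-parameter warping of [0,1]: s controls strength, p where warping concentrates."""
    c = 2.0 / (1.0 + s) - 1.0
    return np.where(x <= p,
                    x**c / p**(c - 1.0),
                    1.0 - (1.0 - x)**c / (1.0 - p)**(c - 1.0))

def warped_knot_vector(K, s, p, U0=385.0, U1=700.0, offset=100.0):
    """Full degree-2 knot vector (K+3 knots) with warped breakpoints and offset boundaries."""
    u = np.linspace(0.0, 1.0, K - 1)                   # uniform sequence u_k
    kappa = U0 + warp(u, s, p) * (U1 - U0)             # warped knots kappa_k
    # Boundary knots (multiplicity 3) pushed 100 nm outside the visible range so
    # reconstructed spectra fade gently to zero outside U.
    return np.concatenate(([U0 - offset] * 2, kappa, [U1 + offset] * 2))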
*Expressivity-smoothness trade-offs
We need to devise metrics to quantify the degree of expressivity of a set of basis functions, as well as the smoothess of the spectra it is able to produce, depending on the positions of its knots and the number K of bases.
An expressive basis requires a wide gamut that encompasses the chosen RGB gamut as much as possible.
We thus rely on what we call the excess area 𝒜, which is the signed area between the basis and sRGB gamuts, normalized by the area between the horseshoe-shaped chromaticity gamut and the RGB gamut.
This area is computed by tesselating the region between the RGB and basis gamuts into quads (see Figureย <ref>(left)).
Basis smoothness is directly related to the smoothness of individual basis functions, which depends on both the number K of bases and their knots {ฮบ_k}.
We compute the smoothness of a basis set as 𝒮 = min_k FWHM_k, where FWHM_k is the full width at half maximum of the kth basis.
As shown in Figure <ref>, for K=7 bases and a sRGB spectrum, the excess area 𝒜 and smoothness 𝒮 criteria evolve differently as a function of (s,p), the parameters of the knots warping function.
How this pair of criteria is balanced is arbitrary.
In this paper, we usually first pick a number K of basis functions, and then brute-force find the warping parameters that maximize 𝒜 under the constraint that 𝒮 < 20nm.
We indicate such a (s,p) pair for K=7 in Figure <ref> by a black cross.
ยง APPLICATIONS
We now present application scenarios where we use our one-to-many mapping to find spectra with interesting visual appearance.
Unless otherwise specified, we use warped basis functions with (s,p) parameters determined as described in the previous section.
ยง.ยง Reproducing Vathochromism
Figureย <ref> demonstrates a reproduction of the Usambara effect using our approach.
We use K=11 warped basis functions and sample the equivalence class of spectra that achieves the target chromaticity 𝐜 = [0.38, 0.45]^⊤ and luminance F_Y = 0.46.
All such spectra are considered as transmittance spectra at a unit optical depth, which is related to the extinction coefficient σ_t of a medium by T_1(λ) = e^-σ_t(λ).
The Beer-Lambert-Bouguer law at increasing depths d is then given by T_d(λ) = T_1(λ)^d.
For each sample of the equivalence class, we then integrate the corresponding T_d over color matching functions and plot the resulting transmittance curve in the chromaticity diagram.
A pair of examples is shown in Figure <ref>(c), where we have picked two instances of the class that reproduce the Usambara effect, here with an orange color at large optical depths.
For rendering, we need to specify σ_a and σ_s, the absorption and scattering coefficients.
In Figure <ref>(b), we use σ_s(λ) = T_1(λ) to achieve the target color on single scattering, which yields σ_a(λ) = -log T_1(λ) - T_1(λ).
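The depth-dependent color check can be written compactly as below, reusing the assumed cmf routine; this is a screening sketch for picking vathochromic candidates from the sampled spectra, not the rendering code.

import numpy as np

def chromaticity_vs_depth(T1, lam, cmf, depths=(1, 2, 4, 8, 16)):
    """Chromaticity trajectory of a transmittance spectrum under Beer-Lambert: T_d = T_1^d."""
    S = cmf(lam)                                   # (3, N) color matching functions
    points = []
    for d in depths:
        XYZ = np.trapz(T1**d * S, lam, axis=-1)    # integrate T_1^d against the CMFs
        points.append(XYZ[:2] / XYZ.sum())         # chromaticity at optical depth d
    return np.array(points)                        # curve traced in the chromaticity diagram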
*Parameterizing the equivalence class
In the previous example, randomly sampling the equivalence class and then picking spectra that achieve the desired effects only provides indirect control over the achieved colors at optical depths d > 1.
For some applications, a more direct control might be desired.
Unfortunately, depending on the choice of basis, not all color appearance choices can be achieved.
We provide a more direct control by parametrizing the equivalence class for the specific case of vathochromic transmittance spectra, which we illustrate on a series of unit tests in Figureย <ref>.
The main idea is to pick an a priori set of representative spectra from the equivalence class, order them in chromaticity space, and interpolate them to navigate through a subset of relevant spectra.
We use spectra formed by all triangles that contain the target chromaticity 𝐜, which naturally tend to result in distinct color appearance since they only rely on three basis functions.
We then sample their transmittance curves at an arbitrary optical depth, and order the representative spectra in clockwise order around the equiluminant point E = (1/3, 1/3) according to chromaticity samples, as illustrated in Figure <ref>(left).
As demonstrated in the accompanying video, when the user specifies a hue, we interpolate among the two closest transmittance curves in the chromaticity diagram.
*Unit tests
The remainder of Figureย <ref> shows a test scene composed of a slab of homogeneous transparent and scattering medium, lit by two white point light sources: one behind and one in front[Such scenes are typically long to converge in a spectral path tracer. We will provide more converged results in a final version of the paper.].
As in Figureย <ref>, the absorption and scattering coefficients are determined from T_1(ฮป) for each of the seven representative spectra of the equivalence class to render test images.
The optical depth of light paths coming from behind is typically short, and exhibits the target chromaticity 𝐜 for all tests as expected.
Light paths that come from the light in front instead need to be scattered to reemerge toward the camera and exhibit target hues at greater optical depths.
*Vathochromic reflectance
Vathochromic effects may also occur with reflectance spectra, due to inter-reflections on shiny (typically metallic) materials.
Figureย <ref> illustrates this effect on a crumpled paper model.
We use K=9 warped basis functions.
In this case, the target chromaticity 𝐜 = [0.4, 0.43]^⊤ and luminance F_Y = 0.59 control the reflectance at normal incidence R_0 after a single scattering event.
We then use the parametrization shown in Figure <ref>(left) to span the equivalence class of reflectance spectra, using Schlick's reflectance model <cit.> to compute R_0(λ)^d at normal incidence and the corresponding reflectance curves at discrete orders d of inter-reflection.
This allows us to quickly find three different spectra that yield the same appearance in direct lighting, but exhibit the desired targeted hues in inter-reflections.
ยง.ยง Reproducing Metamerism
Our one-to-many sampler also permits the exploration of the space of metameric spectra.
Instead of directly using the PU to build a basis gamut in chromaticity space, we premultiply each element of the partition of unity with a target illuminant I(λ):
B_k^I(λ) = B_k(λ) I(λ).
This defines a different gamut in chromaticity space per illuminant:
𝐜_k^I = [B_k,X^I, B_k,Y^I]^⊤ / |B_k^I|.
Now, given a choice of illuminant, say D65, we sample the equivalence class that achieves a target chromaticity 𝐜^D65, yielding a set of vectors {𝐰^D65} of basis coefficients.
When these vectors are used with the basis functions premultiplied by another illuminant, say F2, they yield different chromaticities since 𝐜_k^F2 ≠ 𝐜_k^D65.
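The following sketch illustrates this premultiplication; illum_d65 and illum_f2 are assumed helpers returning illuminant spectra sampled at lam, and the coefficient vectors W are assumed to have been sampled against the D65-premultiplied basis so that they all reproduce the same D65 chromaticity.

import numpy as np

def premultiplied_basis(B, lam, illuminant):
    """B_k^I(lambda) = B_k(lambda) I(lambda) for a given illuminant."""
    return B * illuminant(lam)[None, :]

def metameric_palette(W, B, lam, cmf, illum_d65, illum_f2):
    """Chromaticities of the same coefficient vectors under D65 and under F2."""
    out = []
    for illum in (illum_d65, illum_f2):
        BI = premultiplied_basis(B, lam, illum)
        XYZ = W @ np.trapz(BI[:, None, :] * cmf(lam)[None, :, :], lam, axis=-1)  # (M, 3)
        out.append(XYZ[:, :2] / XYZ.sum(axis=1, keepdims=True))
    return out[0], out[1]   # identical under D65 by construction, spread out under F2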
*Unit tests
Metameric patches are shown in Figure <ref>, where the same achromatic color under D65 is shown to correspond to a variety of different chromaticities under F2, using K=7 non-warped basis functions.
Each sample of the D65 equivalence class thus yields an element of the palette achievable through metamerism.
The greenish trend in the color palette is due to the choice of illuminant F2.
*Hidden patterns and images
If we assign two different spectra from a metameric palette to two different regions of a surface, we obtain the result shown in Figure <ref>, where we use 𝐜 = [0.32, 0.25]^⊤ and F_Y = 0.8.
Here the spectrum controls the spectral diffuse albedo of a Lambertian material.
The pattern is thus hidden under D65 illumination, but revealed under F2.
Note that for rendering, we use the original basis functions B_k(λ), not the premultiplied ones.
A similar effect can be obtained with hidden images, as shown in Figureย <ref>.
Here we take a gray-level picture as input, and blend 8 spectra of increasing luminance from the metameric palette of Figureย <ref> to reproduce luminance gradients.
Compared to the tool of <cit.>, our approach has two advantages: 1) their tool finds a single optimal metameric spectrum while we obtain a whole metameric palette; 2) they do not impose smoothness constraints on reflectance spectra while ours are smooth by design.
ยง.ยง Performance
As shown in the supplemental video, and although the prototype is implemented as a mono-threaded Python script, our upsampling method runs in real-time for artistic design. We measured timings for an increasing number of basis elements K on an Intel i3-6100 CPU at 3.70GHz. We report that our method runs at 4.9ms for K=5, 13.1ms for K=7, 19.3ms for K=9, and 27ms for K=11.
Note that using the Metameric Blacks construction <cit.> requires tracking a number of constraints equal to twice the number of bins of a discretized spectrum. In the literature, 31 bins are commonly used. We measured that generating the convex hull for a binning of 21 bins already takes 6.3s on average per spectrum when using scipy's interface to the qhull library (with default parameters and double precision)[We do not count occurrences where the algorithm fails to find a solution.].
ยง DISCUSSION AND FUTURE WORK
We have introduced a novel method to upsample a color to an equivalence class of spectra through a well-defined one-to-many mapping.
It provides another reason to move to spectral rendering besides the production of more photorealistic rendering results: the exploration of new visual effects as well as the imitation of those found in nature (gem stones, oils, etc).
We have focused in particular on the generation of vathochromic effects, both in transmission and reflection, generalizing the intriguing Usambara effect.
We have also shown how our approach applies to metameric effects.
We now discuss its specificities.
*Differences with Metameric Blacks
The main difference with previous work in optics resides in the way reflectance spectra are represented.
Methods that rely on discretized spectra (e.g.,ย <cit.>) require a small number of spectral bins to be computationally tractable, as discussed in Sectionsย <ref> andย <ref> and result in unrealistic spectra (Figureย <ref>).
Another difference lies in the ease with which metameric sets may be explored.
In our approach, an artist can quickly pick an achievable chromaticity (in the basis gamut), for which a maximum achievable luminance is readily provided through 𝐰̂ (Equation <ref>).
In contrast, with the metameric blacks approach, when a target color results in an empty metameric set, users have to go through trial and error to find a color for which at least one spectrum exists.
An alternative is to rely on measured spectra, such as in the work of Finlayson and Morovicย <cit.>.
Similar to Schmitt <cit.>, they reconstruct metamers using barycentric coordinates in the space of spectra. That is, from a set of K measured spectra s_k(λ) they reconstruct r(λ) = ∑ w_k s_k(λ), where the w_k are positive weights with ∑ w_k = 1 (Equation (27) in their paper).
This imposes that r(λ) is in the convex hull of the s_k(λ). Our method only imposes validity constraints, w_k ∈ [0,1].
Therefore, we can always reconstruct perfect blacks (r(λ) = 0, ∀λ), perfect whites (r(λ) = 1, ∀λ), and achieve any target luminance Y.
On the other hand, their method trivially yields physically-realistic spectra, whereas ours is more adapted to artistic exploration.
*Limitations
An inherent limitation of spectral asset creation, already pointed out by MacAdam <cit.>, is that one needs to trade saturation for luminance.
Indeed, saturated spectra necessarily have narrow bands, and our approach is no different in this respect.
This limitation might explain the difficulty of creating visually-noticeable vathochromic effects when our spectra are used in microfacet models to control the color of the multiple scattering term (see Figure <ref>).
A direction of improvement for our method lies in the design of techniques to navigate through equivalence classes.
In particular, it would be useful to give an analytical description of bounds imposed by the target luminance F_Y (i.e., the boundary between opaque and transparent points in Figureย <ref>(middle)).
Last, even though the spectra generated by our approach are physically-plausible, they are not physically-realistic by design.
Real spectra obey physical rules of their own.
For instance, the real and imaginary parts of refractive indices are bound by the Kramers-Kronig relations.
It would thus be interesting to establish connections with physical models of spectra.
In this respect, having a large equivalence class from which to pick spectra closest to physically-realistic ones could be an advantage.
*Future work
Our method could be used to reproduce and study a number of interesting optical phenomena.
The Alexandrite effect, an instance of metamerism, is one famous example: with our approach, we could investigate whether other spectra could potentially create similar effects.
Interesting applications could be found in ecology, where the illuminant plays a crucial role in defining habitats.
For instance, we could study how families of spectra are affected by lighting at different depths under water or under a dense foliage.
We would also like to explore the extension of vathochromism to take into account fluorescence effects, which abound in nature.
Finally, we have only considered normal human color vision through the use of CIE sensitivity functions.
A captivating direction of future work would be to experiment with sensitivity functions adapted to color blindness, or even to animal vision.
|
http://arxiv.org/abs/2306.09138v1
|
20230615135046
|
Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases
|
[
"Riccardo Zese",
"Evelina Lamma",
"Fabrizio Riguzzi"
] |
cs.AI
|
[
"cs.AI",
"cs.LO"
] |
Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases
Riccardo Zese (ORCID 0000-0001-8352-6304)
Evelina Lamma (ORCID 0000-0003-2747-4292)
Fabrizio Riguzzi (ORCID 0000-0003-1654-9703)
Dip. di Scienze Chimiche, Farmaceutiche e Agrarie, Universitร di Ferrara, Via Luigi Borsari 46, 44121, Ferrara, Italy
[email protected]
Dip. di Ingegneria, Universitร di Ferrara, Via Saragat 1, 44122, Ferrara, Italy
[email protected]
Dip. di Matematica e Informatica, Universitร di Ferrara, Via Macchiavelli 30, 44122, Ferrara, Italy
[email protected]
The necessity to manage inconsistency in Description Logics Knowledge Basesย (KBs) has come to the fore with the increasing importance gained by the Semantic Web, where information comes from different sources that constantly change their content and may contain contradictory descriptions when considered either alone or together.
Classical reasoning algorithms do not handle inconsistent KBs, forcing the debugging of the KB in order to remove the inconsistency. In this paper, we exploit an existing probabilistic semantics called DISPONTE to overcome this problem and allow queries also in case of inconsistent KBs.
We implemented our approach in the reasoners TRILL and BUNDLE and empirically tested the validity of our proposal. Moreover, we formally compare the presented approach to that of the repair semantics, one of the most established semantics when considering DL reasoning tasks.
ยง INTRODUCTION
In the Semantic Web, one of the main goals is to create a Knowledge Base (KB) that is as comprehensive as possible, by connecting information published on the Web and exploiting Description Logic (DL) languages. This can be done by means of the linked data cloud, where KBs from different domains are linked together to create the data cloud.
One possible problem with this idea is that all the KBs that are connected have been developed by different authors considering different points of view. Therefore, it is extremely easy to find contradictions.
A classic example is given by the flying penguin problem, where a KB defines that all birds fly and that penguins are birds, but are not able to fly. This is clearly a contradiction and so, if the KB asserts that pingu is a penguin, the KB is inconsistent because pingu belongs to both the concept "fly" and to its complement. With standard reasoning techniques, when the KB is inconsistent, inference is trivial as anything is entailed by an inconsistent KB. For this reason, systems implementing such techniques do not allow the execution of queries when the KB is inconsistent or allow only the identification of axioms causing the inconsistency.
In non-monotonic reasoning this problem has been solved by considering a unique KB where a non-monotonic knowledge representation is adopted, often with negative literals for coping with exceptions or abnormalities such as the penguins in the example above.
Nonetheless, when different KBs are merged, it is easy to obtain an inconsistent KB.
Approaches to managing inconsistent pieces of information include the definition of a Four-Valued Logic, where classical implication is accompanied by two other types of implication of greater strengthย <cit.>; the definition of different types of negationsย <cit.>; or the use of the so-called repairs, consistent subsets of axioms of the KB built when the query is asked. Given the set of repairs of a KB, there are different semantics that define the conditions the query must fulfil to be trueย <cit.>, among them, the most used are the Braveย <cit.>, ARย <cit.>, and IARย <cit.> semantics.
Despite the number of works on managing inconsistent KBs, very few proposals consider the fact that information is usually uncertain in real world domains. To fill this gap, we exploit the DISPONTE semanticsย <cit.>, where the axioms of the KBs are associated to probability values, defining the degree of belief in the presence of the axiom in the KB.
Query answering requires identifying (non-probabilistic) subsets of axioms (named worlds) where the query is true, in order to compute, via marginalization, the probability of the query. Defining the probability of the presence of an axiom could be easier for an expert of the domain.
Or else, one can associate a probability value to axioms depending on how much they trust the source of the information, giving more confidence, in the example above, to information coming from ornithologists than that coming from, e.g., Wikipedia.
Moreover, the process of computing the probability of a query is somewhat similar to the construction of the repairs, as it needs to collect consistent subsets of the axioms of the KB that make the query true. However, the results are different because we are able to return a value telling how much we can trust the truth of the query, while, with repairs, one only knows whether the query is true in a cautious/brave semantics.
One interesting feature of DISPONTE is that adding the probability does not change the syntax of the underlying logic as it exploits annotations that are built-in in Semantic Web languages.
This avoids limitations on the expressivity of the languages that are used to define the KBs and compatibility problems with other KBs.
Moreover, differently fromย <cit.>,
no pre-processing step is needed, which may be expensive for large KBs.
We also present an extension, implemented in the reasoners BUNDLE and TRILL, able to cope with inconsistent KBs. BUNDLEย <cit.> and TRILLย <cit.> are two probabilistic reasoners that answer queries under DISPONTE, the first is implemented in Prolog, while the second in Java. In particular, BUNDLE encapsulates several reasoners, such as the non-probabilistic reasoners Pelletย <cit.> and Hermitย <cit.>, or the probabilistic reasoner TRILL, which can be used both inside BUNDLE or stand-alone. As we will show, the presented extension can be easily implemented in other reasoners to allow them to answer queries w.r.t. inconsistent KBs.
The contributions of the paper are twofold. First, they show how an existing probabilistic semantics can be used to reason w.r.t. inconsistent DL KBs. The paper discusses the changes that must be implemented to allow a reasoner applying the tableau algorithm to cope with this semantics and answer queries w.r.t. possibly inconsistent KBs. Moreover, despite the number of proposals to cope with inconsistent KBs, there are few implemented systems.
Second, the paper compares the presented approach to that of the repair semantics, one of the most established semantics when considering DL reasoning tasks. We show that our approach can be easily adapted to answer queries under different repair semantics.
The paper is organized as follows. Sectionย <ref> introduces background information about Description Logics, their extension to probability by means of DISPONTE, and recalls the most common Repair semantics.
Sectionย <ref> discusses the proposed probabilistic semantics and the extensions needed by a reasoner to cope with it. Sectionย <ref> illustrates the differences of our semantics with the repair semantics, considering the Brave, AR and IAR semantics. Sectionย <ref> discusses related work.
Sectionย <ref> shows the results of some tests of two prototype reasoners implementing the extension discussed in Sectionย <ref>. The implementation of two different prototypes is also intended to demonstrate the ease with which this semantics can be adopted. Finally, Sectionย <ref> concludes the paper.
ยง PRELIMINARIES
In this section we present the background necessary for the next sections. In the first sub-section, we will briefly describe the 𝒜ℒ𝒞 DL. Then we will move on to introduce the DISPONTE semantics, describing its syntax, its semantics and how to use it to calculate the probability of queries. Finally, we will provide the definitions of the repair semantics.
ยง.ยง Description Logics
DLs are a family of logic-based knowledge representation formalisms which are
of particular interest for representing ontologies in the Semantic Web. For a good
introduction to DLs we refer toย <cit.>.
The different DL languages represent the domain by means of individuals, concepts (sets of individuals of the domain), and roles (sets of pairs of individuals). They differ in how concepts and roles can be combined for defining complex concepts.
We briefly review the 𝒜ℒ𝒞 DL; however, the results presented in this paper can be applied to any DL.
Let N_I, N_R, and N_C be countably infinite sets of individual, role and concept names.
Concepts C are defined as C ::= A | ⊥ | ⊤ | (C ⊓ C) | (C ⊔ C) | ¬C | ∃R.C | ∀R.C, where A ∈ N_C and R ∈ N_R.
Given two concepts C and D, R ∈ N_R, and a, b ∈ N_I, a knowledge base (KB) consists of a finite set of general concept inclusion axioms (GCIs) C ⊑ D, called TBox, and a finite set of concept assertions a:C and role assertions (a, b):R, called ABox. Thus, given an ABox 𝒜 and a TBox 𝒯, a knowledge base is 𝒦 = (𝒜, 𝒯).
The semantics of DLs is formally defined using interpretations ℐ = (Δ^ℐ, ·^ℐ), where Δ^ℐ is a non-empty domain, and ·^ℐ is an interpretation function that maps each a ∈ N_I to an element of Δ^ℐ, each C ∈ N_C to a subset of Δ^ℐ, and each R ∈ N_R to a subset of Δ^ℐ × Δ^ℐ.
The mapping ·^ℐ is extended to complex concepts as follows (where R^ℐ(x) = {y | (x, y) ∈ R^ℐ}):
⊤^ℐ = Δ^ℐ,  ⊥^ℐ = ∅,  (¬C)^ℐ = Δ^ℐ ∖ C^ℐ,
(C_1 ⊔ C_2)^ℐ = C_1^ℐ ∪ C_2^ℐ,  (C_1 ⊓ C_2)^ℐ = C_1^ℐ ∩ C_2^ℐ,
(∃R.C)^ℐ = {x ∈ Δ^ℐ | R^ℐ(x) ∩ C^ℐ ≠ ∅},  (∀R.C)^ℐ = {x ∈ Δ^ℐ | R^ℐ(x) ⊆ C^ℐ}.
A query Q over a KB 𝒦 is an axiom for which we want to test the entailment from the KB, written as 𝒦 ⊨ Q. The entailment test may be reduced to checking the unsatisfiability of a concept in the KB, i.e., the emptiness of the concept, or the inconsistency of a KB. For example, the entailment of the axiom C ⊑ D may be tested by checking the unsatisfiability of the concept C ⊓ ¬D, while the entailment of the axiom a:C may be tested by checking the inconsistency of the KB with the addition of a:¬C.
ยง.ยง Probabilistic Description Logics
DISPONTE <cit.> is based on the distribution semantics <cit.> and allows the user to label some axioms with real values p ∈ [0,1] representing probabilities.
In DISPONTE, a probabilistic knowledge base is a set of certain axioms and probabilistic axioms.
A certain axiom takes the form of a regular DL axiom.
A probabilistic axiom takes the form p::E, where p is a probability and E a regular axiom. It means that we have a degree of belief p in the presence of E in the KB.
Given a query Q, DISPONTE computes its probability by constructing worlds, i.e., non-probabilistic KBs.
To build a world, we first need to define an atomic choice, which is a couple (E_i, k) where E_i is the ith probabilistic axiom and k is either 1 or 0 and indicates whether E_i belongs to the world or not. A set of atomic choices κ is consistent, and called a composite choice, when it does not contain two different atomic choices for the same E_i. If a composite choice contains an atomic choice for every probabilistic axiom of the KB, it is called a selection. A selection σ identifies a world w_σ s.t. w_σ = 𝒞 ∪ {E_i | (E_i, 1) ∈ σ}, where 𝒞 is the set of certain axioms.
Every world has a probability P(w_σ) = P(σ) = ∏_{(E_i, 1) ∈ σ} p_i × ∏_{(E_i, 0) ∈ σ} (1 - p_i),
where p_i is the probability associated with axiom E_i, because the presence of the axioms is considered pair-wise independent.
P(w_σ) is a probability distribution over worlds.
Given a query Q and the set 𝒲_𝒦 of all worlds, the probability of Q is the sum of the probabilities of the worlds in which the query is true <cit.>:
P(Q) = ∑_{w ∈ 𝒲_𝒦 : w ⊨ Q} P(w)
[Flying Penguins - 1]
Consider the following KB:
(1) 0.9 :: Penguin ⊑ Bird
(2) 0.9 :: Penguin ⊑ ¬Fly
(3) 0.6 :: pingu : Penguin
The first two axioms state that penguins are birds and that penguins do not fly, both with probability 0.9. The third states that pingu, an individual, is a penguin with probability 0.6.
This KB is consistent and has 8 possible worlds because there are 3 probabilistic axioms, and each of them may be contained in a world or not. Thus, there are 2^3 possible worlds, which are all the possible combinations given by the probabilistic axioms. Let us consider the query Q = pingu:Bird. This query is true in two worlds, those containing the first and third axioms. The probability is P(Q) = 0.9·0.9·0.6 + 0.9·0.1·0.6 = 0.54.
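For such a tiny KB, the example can be checked by brute force; the sketch below hardcodes which worlds entail the query instead of calling a DL reasoner, and is only meant to make the marginalization over worlds explicit.

from itertools import product

probs = [0.9, 0.9, 0.6]    # (1) Penguin subsumed by Bird, (2) Penguin subsumed by not-Fly, (3) pingu:Penguin

def entails_query(world):  # Q = pingu:Bird holds exactly when axioms (1) and (3) are present
    return world[0] and world[2]

p_q = 0.0
for world in product([True, False], repeat=len(probs)):
    p_w = 1.0
    for present, p in zip(world, probs):
        p_w *= p if present else (1.0 - p)
    if entails_query(world):
        p_q += p_w

print(p_q)                 # 0.54, matching the example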
However, computing the probability of queries by building all the worlds is infeasible because their number is exponential in the number of probabilistic axioms in the KB. To circumvent this problem, it is possible to resort to classical inference algorithms for collecting justifications for the query, which are usually fewer than the worlds in the case of large KBs, and compute the probability from them. The problem of finding justifications for a query has been investigated by various authors <cit.>.
A justification for a query Q is a consistent inclusion-minimal subset of logical axioms of a KB 𝒦 that entails Q. On the other hand, a justification for the inconsistency of 𝒦 is an inclusion-minimal subset of logical axioms of 𝒦 that is inconsistent. It can be seen as a justification for the query ⊤ ⊑ ⊥. all-just(Q, 𝒦) indicates the set of all justifications for the query Q in the KB 𝒦.
Sometimes, the term incoherent KB is used when the KB contains at least one unsatisfiable concept, i.e., a concept that cannot have individuals. Incoherence is a type of inconsistency occurring in the TBox of a KB. To simplify the notation, in this paper we will use the term inconsistency also to indicate incoherence. Note that, if the KB is consistent, there is no justification for the inconsistency. On the other hand, if the KB is inconsistent there will be at least one justification for the inconsistency.
A set of justifications Φ defines a set of worlds 𝒲_Φ = {w | ∃ j ∈ Φ, j ⊆ w}. Φ is called covering for Q if 𝒲_Φ is equal to the set of all the worlds in which Q succeeds, i.e., if for each w, w ⊨ Q ⟹ w ∈ 𝒲_Φ; or covering for the inconsistency of 𝒦 if it identifies all the inconsistent worlds.
[Flying Penguins - 2]
Let us consider the KB of Example <ref> and the query Q = pingu:Bird. This query has one justification, i.e., {(1),(3)}, which is also covering because it defines the two worlds where the query holds.
An effective way of computing the probability of a query from a covering set of justifications consists in compiling it into a Binary Decision Diagram (BDD).
A BDD is a rooted graph used to represent a function of Boolean variables, with one level for each variable. Each node of the graph has two children corresponding either to the 1 value or the 0 value of the variable associated with the node. Its leaves are either 0 or 1.
BDDs are used to represent the Disjunctive Normal Form (DNF) Boolean formula f_all-just(Q,𝒦)(𝐗) built from all-just(Q, 𝒦).
[Boolean formula of a justification]
Given a justification 𝒥, and a set 𝐗 = {X_i | p_i::E_i ∈ 𝒦} of independent Boolean random variables associated with the probabilistic axioms, with P(X_i=1) = p_i, where p_i is the probability of axiom E_i, the Boolean formula of the justification is f_𝒥(𝐗) = ⋀_{E_i ∈ 𝒥} X_i.
[Boolean formula of a set of justifications]
Given a set of justifications Φ, and a set 𝐗 = {X_i | p_i::E_i ∈ 𝒦} of independent Boolean random variables associated with the probabilistic axioms, with P(X_i=1) = p_i, where p_i is the probability of axiom E_i, the Boolean formula of the set of justifications is f_Φ(𝐗) = ⋁_{𝒥 ∈ Φ} f_𝒥(𝐗) = ⋁_{𝒥 ∈ Φ} ⋀_{E_i ∈ 𝒥} X_i.
From Definition <ref>, given a query Q and the set all-just(Q, 𝒦) of all the justifications for Q in the KB 𝒦, the formula is f_all-just(Q,𝒦)(𝐗) = ⋁_{𝒥 ∈ all-just(Q,𝒦)} ⋀_{E_i ∈ 𝒥} X_i.
The probability that f_all-just(Q,𝒦)(𝐗) takes value 1 gives the probability of Q <cit.>. The same also applies to inconsistency, where Incons is the query ⊤ ⊑ ⊥, so f_all-just(Incons,𝒦)(𝐗) is the DNF Boolean formula representing all-just(Incons, 𝒦). The probability that f_all-just(Incons,𝒦)(𝐗) takes value 1 gives the probability of 𝒦 being inconsistent. This can be seen as a measure of the inconsistency of the KB.
In principle, to compute the probability we could resort to different possible languages, such as Sentential Decision Diagrams (SDD) or Deterministic Decomposable Negation Normal Form (d-DNNF). We use BDDs because
packages for compiling formulas into BDDs are extremely optimized and can manage BDDs of very large size (see e.g.ย <cit.>) while performing sometimes better than SDD packagesย <cit.>.
Given the BDD, we can use function Probability described by Kimmig et al. <cit.> to compute the probability.
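The recursion behind that computation is simple to sketch: each internal BDD node splits on one Boolean variable, and the probability of the formula is obtained bottom-up. The node layout below (tuples and Boolean leaves) is an illustrative assumption, not the actual API of the cited packages.

from functools import lru_cache

@lru_cache(maxsize=None)
def probability(node):
    """Probability that the Boolean function encoded by the BDD node evaluates to 1."""
    if node is True:
        return 1.0
    if node is False:
        return 0.0
    p, high, low = node                       # p = P(X_i = 1) for the node's variable
    return p * probability(high) + (1.0 - p) * probability(low)

# BDD of X_1 and X_3 (the covering justification of the Flying Penguins query).
bdd = (0.9, (0.6, True, False), False)        # test X_1 first, then X_3
print(probability(bdd))                       # 0.54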
ยง.ยง Repair Semantics
In this section we briefly recall the main definitions and semantics of repairs.
[Repair]
A repair of a KB is an inclusion-maximal subset of the ABox that is consistent together with the TBox.
Given a query Q and a KB 𝒦 = (𝒜, 𝒯), where 𝒜 is the ABox and 𝒯 is the TBox, we can define the three main semantics for repairs as follows.
[Repair Semantics]
Brave. A query Q is true over a KB 𝒦 under the Brave semantics, written 𝒦 ⊨_Brave Q, if (𝒜', 𝒯) ⊨ Q for at least one repair 𝒜' of 𝒦 <cit.>.
AR. A query Q is true over a KB 𝒦 under the AR semantics, written 𝒦 ⊨_AR Q, if (𝒜', 𝒯) ⊨ Q for every repair 𝒜' of 𝒦 <cit.>.
IAR. A query Q is true over a KB 𝒦 under the IAR semantics, written 𝒦 ⊨_IAR Q, if (𝒜_∩, 𝒯) ⊨ Q, where 𝒜_∩ = ⋂_{𝒜' ∈ Rep(𝒦)} 𝒜' and Rep(𝒦) is the set of all the repairs for 𝒦 <cit.>.
[University employee positions]
Consider the following KB 𝒦 = (𝒜, 𝒯), where 𝒯 is:
(1) Professor ⊓ Tutor ⊑ Lecturer
(2) Person ⊓ Professor ⊑ PhD
(3) Professor ⊔ Tutor ⊑ UniversityEmployee
(4) Professor ⊑ ¬Tutor
and 𝒜 is:
(5) alice : Person
(6) alice : Professor
(7) alice : Tutor
The KB states that a professor who is also a tutor is a lecturer (1), that a person who is a professor is a PhD (2), and that professors and tutors are university employees (3). A professor cannot be a tutor (4). Finally, Alice is a person (5), a professor (6) and a tutor (7).
It is easy to see that the ABox of this KB is inconsistent w.r.t. the TBox.
This KB has two repairs: (I) 𝒜_I = 𝒜 ∖ {(6)} and (II) 𝒜_II = 𝒜 ∖ {(7)}.
The query Q_1 = alice:Lecturer is false under the three semantics because it is impossible to find a repair where Alice is both a professor and a tutor.
The query Q_2 = alice:PhD is true under the Brave semantics because it is true in 𝒜_II.
The query Q_3 = alice:UniversityEmployee is true under the AR semantics because it is true in every repair.
Finally, the query Q_4 = alice:Person is true under the IAR semantics because it is true in the intersection of 𝒜_I and 𝒜_II, which contains only axiom (5).
ยง QUERYING INCONSISTENT KNOWLEDGE BASES
We can use the definitions from Section <ref> to define the probability of a query Q given a (probabilistic) possibly inconsistent KB 𝒦, with Q an axiom. In Section <ref>, we will compare our proposal with the repair semantics <cit.>.
We use a probability value
to mark which axioms may be incorrect.
Under this semantics, given a KB 𝒦 and a query Q, the probability of Q is:
P_C(Q) = P(Q | Cons) = P(Q, Cons) / P(Cons)
Here, P(Cons) is the probability that the KB is consistent, i.e., the probability of the formula ¬f_all-just(Incons,𝒦), while P(Q, Cons) is the probability of the formula f_all-just(Q,𝒦) ∧ ¬f_all-just(Incons,𝒦).
This is equivalent to considering all the consistent worlds and checking whether the query holds in each of them.
The final probability is the probability of the query within the consistent worlds over the probability of the consistency of the KB.
[Flying Penguins - 3]
Consider the query Q = pingu:¬Fly and the following KB (slightly different from that of Ex. <ref>):
(1) 0.9 :: Bird ⊑ Fly    (2) Penguin ⊑ Bird
(3) 0.9 :: Penguin ⊑ ¬Fly    (4) pingu : Penguin
In this case we have four different worlds:
w_1 = {(1),(2),(3),(4)}    w_2 = {(1),(2),(4)}
w_3 = {(2),(3),(4)}    w_4 = {(2),(4)}
World w_1 is inconsistent, therefore it does not contribute to the probability of (Q, Cons). Among the other worlds, the query Q is true only in w_3, which has probability 0.9·0.1 = 0.09. The probability of the KB being consistent is
P(w_2) + P(w_3) + P(w_4) = P(1)·(1 - P(3)) + (1 - P(1))·P(3) + (1 - P(1))·(1 - P(3)) = 0.9·0.1 + 0.1·0.9 + 0.1·0.1 = 0.19
So, P_C(Q) = 0.09/0.19 = 0.474.
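As before, the example can be verified by enumerating worlds; in the sketch below, consistency and entailment are hardcoded for this four-axiom KB rather than computed by a reasoner.

from itertools import product

P1, P3 = 0.9, 0.9                         # probabilities of axioms (1) and (3); (2) and (4) are certain

p_cons = p_q_and_cons = 0.0
for has1, has3 in product([True, False], repeat=2):
    p_w = (P1 if has1 else 1 - P1) * (P3 if has3 else 1 - P3)
    inconsistent = has1 and has3          # (1),(2),(3),(4) together are contradictory
    entails_q = has3                      # (2),(3),(4) entail pingu:not-Fly
    if not inconsistent:
        p_cons += p_w
        if entails_q:
            p_q_and_cons += p_w

print(p_q_and_cons / p_cons)              # 0.09 / 0.19, approximately 0.474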
An interesting result can be seen if axiom (3) is non-probabilistic. In this case, the KB has the two worlds {(1),(2),(3),(4)} and {(2),(3),(4)}, having probability 0.9 and 0.1, respectively. The first world is inconsistent while Q holds in the second. The probability of consistency is P(Cons) = 0.1 because there is a single world that is consistent. In this world the query is true, so the probability P(Q, Cons) = 0.1. As a result, we obtain that P_C(Q) = 1. On the other hand, the query pingu:Fly takes probability 0. The same results can be achieved with every value of probability of axiom (1) that is strictly lower than 1. This example shows how a correct design of the knowledge is important. Indeed, by associating a probability value to axiom (1), which is not always true in the domain, axioms that are certain acquire, in a way, more importance. The information that penguins do not fly is certain because there are no species of penguins that have the ability to fly, thus, irrespectively of the probability of axiom (1), the query Q is certainly true.
This process leaves unchanged the probability of queries w.r.t. a consistent KB. The proof of this statement is trivial because, if the KB is consistent, all the worlds are so, and therefore the computation of the query is equivalent to the former definition of the DISPONTE semantics.
A second result that is less obvious is that, given a query Q and an inconsistent KB 𝒦, if Q does not depend on the axioms causing the inconsistency, its probability has the same value as the one computed w.r.t. any 𝒦' ⊆ 𝒦, where 𝒦' is an inclusion-maximal subset of axioms from 𝒦 that is consistent. If we remove the probability from the probabilistic axioms of 𝒦', it is equivalent to a repair for 𝒦, therefore Q is true in every repair for 𝒦. Intuitively, Q and Cons are independent because Q is true in every 𝒦'. The axioms contained in the justifications for Q do not appear in the justifications for the inconsistency, and so neither in the formula for Cons, and vice-versa. The random variables associated with Q and Cons are independent, therefore the computation of the conditional probability P_C(Q) gives
P_C(Q) = P(Q | Cons) = P(Q)·P(Cons) / P(Cons) = P(Q)
ยง.ยง Reasoning Algorithm
One of the most used approaches to compute justifications is the tableau algorithm <cit.>. Sebastiani and Vescovi <cit.> defined an approach for finding justifications in the ℰℒ DL that builds a Horn propositional formula of polynomial size and applies Boolean Constraint Propagation. Arif et al. <cit.> used implicit hitting set dualization by exploiting a SAT solver. Baader and colleagues <cit.> presented different approaches that create a Boolean formula, called the pinpointing formula, which represents the set of all the justifications for a query w.r.t. 𝒮ℋ KBs. This approach is implemented, for example, in TRILL^P and TORNADO <cit.>, two sub-systems of TRILL. These two sub-systems are not considered in this paper because the first needs a further processing of the results to apply our approach, while the second returns results in a form not suitable for our purposes.
We now describe the tableau approach that is used in the reasoners we extended, TRILL <cit.> and BUNDLE <cit.>. A tableau is a graph where each node represents an individual a and is labelled with the set of concepts ℒ(a) to which a belongs. Each edge ⟨a,b⟩ in the graph is labelled with the set of roles ℒ(⟨a,b⟩) to which the couple (a,b) belongs.
The algorithm proves an axiom by contradiction by repeatedly applying a set of consistency preserving tableau expansion rules until a clash (i.e., a contradiction) is detected or a clash-free graph is found to which no more rules are applicable.
A clash is a couple (C, a) where C and ¬C are both present in the label of the node a, i.e., {C, ¬C} ⊆ ℒ(a).
The expansion rules modify the labels of the nodes and edges of the tableau and update a tracing function τ, which associates a set of justifications to each concept or role appearing in a label of the tableau. For example, the value of the tracing function for the label C of node a when axiom a:C is in the KB is set to {{a:C}}, i.e., a set of justifications containing one single justification that, in turn, contains the axiom a:C.
Given a query, once the tableau is fully expanded, to build the justification for the query, the labels that cause clashes are collected. Then, the justifications of these labels are joined to form the justification for the query. We refer the interested reader to <cit.> for a detailed overview.
The expansion rules are divided into deterministic and non-deterministic. As stated above, the former, when applied to a tableau, produce a single new tableau, while the latter produce a set of tableaux.
When the tableau algorithm adds the negation of the query to the tableau, the value of its tracing function is set to {∅}, i.e., a set containing the empty justification, because that information does not come from the KB. So, if a clash is detected, it is not possible to know what causes it: an inconsistency, or the query. Thus reasoners usually perform a consistency check before expanding the tableau, preventing its execution if the KB is inconsistent.
This is due to the fact that, when the axioms involved in the justifications of a query also cause an inconsistency, the justifications for the query may be subsets of those for the inconsistency. In Example <ref>, given the query Q = tweety:Fly, the single justification for the query is the set of axioms {(4),(2),(1)}, while that for the inconsistency is {(4),(2),(1),(3)}. In this case, when extracting the justifications from the values of the tracing function of the labels that create the clashes, the latter is not collected because it is a superset of the former.
A simple way to solve this problem is to change the tracing function so that it adds a placeholder to the justifications for the negation of the query: the justifications that contain this placeholder are kept separated from those without it, which are justifications caused by an inconsistency.
Basically, the tracing function for the negation of the query is initialized as {{Q_p}}, where Q_p is a fake axiom that does not appear in the KB. This axiom represents a flag indicating that the label has been created because of the query.
Then, the standard expansion rules can be applied to expand the tableau in the usual way, because Q_p acts like an axiom. Therefore, the tableau algorithm remains unchanged. At the end, if there are clashes, the justifications for the query will contain Q_p, those due to the inconsistency will not.
An important aspect of this implementation is that all the results about completeness and correctness of the tableau (see <cit.>) still apply, since the tableau algorithm is not modified. This also allows the application of this approach to every DL for which tableau expansion rules have been defined.
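Once the tableau is fully expanded, the remaining bookkeeping is a simple set operation. The following Python fragment is a schematic sketch, not actual TRILL code (TRILL is written in Prolog); it assumes that each justification extracted from a clash is a set of axiom identifiers and that the fake axiom is represented by the string "Q_p":

    # Illustrative sketch: separating the justifications collected from the clashes
    # of a fully expanded tableau, assuming the tracing function of the negated
    # query was initialised with the fake axiom Q_p as described above.
    Q_P = "Q_p"

    def split_justifications(clash_justifications):
        """Return (justifications for the query, justifications for the inconsistency)."""
        query_just, incons_just = set(), set()
        for j in clash_justifications:
            if Q_P in j:
                query_just.add(frozenset(j - {Q_P}))   # drop the placeholder
            else:
                incons_just.add(frozenset(j))
        return query_just, incons_just

    # Example <ref> with axiom (3): one clash justification originates from the
    # query, the other one only from the KB.
    clashes = [{"Q_p", "(4)", "(2)", "(1)"}, {"(4)", "(2)", "(1)", "(3)"}]
    print(split_justifications(clashes))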
[Flying Penguins - 4]
Consider the KB of Example <ref> where the probability values have been removed from the axioms.

(1) Bird ⊑ Fly        (2) Penguin ⊑ Bird
(3) Penguin ⊑ ¬Fly    (4) tweety : Penguin
Let us remove axiom (3). The classical tableau algorithm creates a tableau containing a single node, corresponding to tweety, labelled with the concept Penguin, having as value of τ the set of justifications {{(4)}}.
Given the query Q = tweety:Fly, the tableau is updated by adding the label ¬Fly to the node of tweety, with the value of τ equal to {∅}.
Expanding this tableau by following the axioms of the KB, the label Fly will be added to the node for tweety, with the value of τ corresponding to {{(4),(2),(1)}}. In this tableau there is a clash because the node for tweety contains both the labels Fly and ¬Fly. A justification can be found by joining the values of the tracing function of the two labels, i.e., ∅ ∪ {(4),(2),(1)} = {(4),(2),(1)}.
If the KB also contains axiom (3), during the expansion of the tableau the value of τ for the label ¬Fly should be updated by adding the justification {(4),(3)}. However, this justification cannot be added because the initial value of the tracing function contains the empty set, which is a subset of any other set, so the tableau cannot correctly discriminate between justifications due to the query and those due to the inconsistency.
If we consider this example with the new tracing function, the value of τ for the label of the (negated) query ¬Fly is initialized as {{Q_p}}. Therefore, the justification for Q will be {Q_p,(4),(2),(1)}, while that for the inconsistency will be {(4),(2),(1),(3)}. In this case, during the expansion of the tableau, the value of τ for the label ¬Fly can be updated by adding the justification {(4),(3)}, because the justification due to the inconsistency is no longer a superset of {Q_p}, which is due to the query, and the reasoner can easily discriminate between the two.
Another possible way to solve this problem is to directly add the negation of the query to the KB by using a name known by the reasoner. For example, if the query is the axiom a:C, it is sufficient to add the axioms a:C_Q_p and C_Q_p ⊑ ¬C, where C_Q_p is a fresh concept not contained in the KB. Now, it is possible to run a reasoner that can return the justifications for the inconsistency of a KB and split all the justifications into two sets, one containing the justifications with axioms involving the concept C_Q_p and one containing those whose axioms do not involve the fresh concept. The first set will contain the justifications for the query (in our case the axioms a:C_Q_p and C_Q_p ⊑ ¬C must be removed from the justifications in order to collect justifications containing only axioms from the original KB), while the second will contain the justifications for the inconsistency. In this way, we do not need to modify the reasoner, nor the tracing function.
The whole reasoning flow can be divided into 4 different steps, described below, and shown in Figure <ref>.
Once all the justifications are collected, the BDD BDDQ for the query Q is built from all-just(Q,𝒦), while that for the consistency of the KB, BDDC, is generated by negating the BDD for all-just(Incons,𝒦). The next step consists of joining BDDQ and BDDC to create a BDD BDDQC = BDDQ ∧ BDDC, from which the probability P(Q,Cons) can be computed. The probability P(Cons) can be computed directly from BDDC. Finally, the probability P_C(Q) is computed from the two BDDs as P(Q,Cons)/P(Cons).
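As a sketch of this last step, the following Python code computes the same three quantities by brute-force enumeration of the worlds instead of traversing BDDs, which is what the actual systems do; it assumes that justifications are given as sets of axiom identifiers and that prob maps every axiom appearing in some justification to its probability (certain axioms can simply be given probability 1.0):

    from itertools import product

    def dnf(justifications):
        """Boolean function of a set of justifications: true in a world iff the
        world includes at least one justification (a DNF over axiom variables)."""
        return lambda world: any(j <= world for j in justifications)

    def probability(f, prob):
        """P(f) under DISPONTE, enumerating worlds instead of traversing a BDD."""
        total, axioms = 0.0, list(prob)
        for bits in product([False, True], repeat=len(axioms)):
            world = {a for a, b in zip(axioms, bits) if b}
            p = 1.0
            for a in axioms:
                p *= prob[a] if a in world else 1.0 - prob[a]
            if f(world):
                total += p
        return total

    def conditional_probability(just_q, just_incons, prob):
        f_q = dnf(just_q)                   # plays the role of BDDQ
        f_incons = dnf(just_incons)
        def f_cons(world):                  # BDDC: negation of the inconsistency formula
            return not f_incons(world)
        def f_q_cons(world):                # BDDQC = BDDQ AND BDDC
            return f_q(world) and f_cons(world)
        return probability(f_q_cons, prob) / probability(f_cons, prob)

    # Running example: P_C(Q) = 0.09/0.19 = 0.4736...
    prob = {"(1)": 0.9, "(2)": 1.0, "(3)": 0.9, "(4)": 1.0}
    print(conditional_probability({frozenset({"(3)", "(4)"})},
                                  {frozenset({"(1)", "(2)", "(3)", "(4)"})}, prob))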
Regarding complexity, there are two problems to consider. The first is finding justifications, whose complexity has been deeply studied and depends on the logic used <cit.>. In particular, Corollary 15 in <cit.> shows that
finding all the justifications cannot be solved in output polynomial time for DL-Lite_bool TBoxes unless P = NP.
Since DL-Lite_bool is a sublogic of the more expressive DLs considered in this paper, this result also holds for them and for all their extensions.
Despite these results, it has been shown that all justifications can be found over many real world ontologies within a few seconds <cit.>.
The second problem is building the BDD and computing the probability, that can be seen as the problem of computing the probability of a sum-of-products <cit.>. While the problem is #P-hard, algorithms based on BDDs were able to solve problems with hundreds of thousands of variables (see e.g. the works on inference on probabilistic logic programs
<cit.>). Methods for weighted model counting <cit.> can be used as well to solve the sum-of-products problem.
The class #P <cit.> describes counting problems associated with decision problems in NP. More formally, #P is the class of function problems of the form โcompute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time.
A prototypical #P problem is the one of computing the number of satisfying assignments of a CNF Boolean formula.
#P problems have been shown to be very hard.
First, a #P problem must be at least as hard as the corresponding NP problem. Second, Toda <cit.> showed that a polynomial-time machine with a #P oracle (P^#P) can solve all problems in PH, the polynomial hierarchy.
Note that the extensions introduced in this paper do not change the complexity of the two problems, and the original algorithms (those without the extensions) proposed for solving these two problems were shown to be able to work on inputs of real world size[E.g., the NCI ontology (<https://ncit.nci.nih.gov/ncitbrowser/>) with 3,382,017 axioms and 𝒮ℋ expressiveness, or FMA (<http://si.washington.edu/projects/fma>) with 88,252 axioms in the TBox and RBox and 237,382 individuals and 𝒜ℒ𝒞𝒪ℐ𝒩(D) expressiveness.] <cit.>.
§ COMPARISON WITH THE REPAIR SEMANTICS
In this section we compare our proposal with the repair semantics <cit.>, where the authors consider KBs where the TBox is consistent while the ABox may be inconsistent w.r.t. the TBox.
For the sake of comparison, we construct the worlds by including all the TBox axioms and some axioms from the ABox.
As already discussed, a world is a subset of the axioms of the original KB, i.e., it can be seen as a smaller KB or a sort of repair. We call a world consistent if the corresponding KB is consistent, and inconsistent otherwise.
Since repairs correspond to all inclusion-maximal consistent subsets of ABox axioms, they can be viewed as `possible worlds' <cit.>. A repair is an inclusion-maximal subset of the ABox that is TBox-consistent, therefore, with a small abuse of notation, we can consider a repair as a KB having the entire TBox and the set of axioms from the ABox selected by the repair. Therefore, given Rep(𝒦), the set of all repairs, and 𝒲_𝒦, the set of all worlds, we have Rep(𝒦) ⊆ 𝒲_𝒦. Conversely, a consistent world w defines a set of repairs Rep_w(𝒦) = {ℛ | ℛ is a repair, w ⊆ ℛ}.
The next Lemma follows from these definitions.
Given a KB 𝒦 and a world w consistent w.r.t. 𝒦, the set Rep_w(𝒦) is not empty.
Each world w contains all the axioms from the TBox and some axioms from the ABox. If a world is consistent then we can add ABox axioms to it until it becomes equivalent to a repair, i.e., until it contains all the assertions contained in the repair. So, a consistent world represents one or more repairs.
By refutation, suppose there exists a consistent world w such that Rep_w(𝒦) is empty. This means that there exists a set of axioms 𝒮 ⊆ w that is not contained in any repair. By definition of repairs, if 𝒮 ⊈ ℛ for every repair ℛ, then 𝒮 is TBox-inconsistent. So, w cannot be consistent.
It is also important to note that a repair can be obtained by removing from the ABox a hitting set of the justifications for the inconsistency.
Given these definitions, we can state the following theorems.
Given a possibly inconsistent KB 𝒦, a query Q, and the Boolean formula f_all-just(Incons,𝒦), Q is true under the Brave semantics, i.e., 𝒦 ⊨_Brave Q, iff there exists at least one justification for Q w.r.t. 𝒦 such that the corresponding Boolean formula ψ joined with ¬f_all-just(Incons,𝒦) is satisfiable.
From Section <ref>, a justification identifies a set of worlds; the smallest one is the world containing exactly the axioms in the justification. The set of justifications all-just(Incons,𝒦) is represented by the formula f_all-just(Incons,𝒦). Given a satisfying truth assignment of the variables of f_all-just(Incons,𝒦), it is possible to build a corresponding set of axioms consisting of all those axioms E_i whose corresponding variable X_i is given value true in the assignment. These axioms include a justification for the inconsistency of the KB. Therefore, they define a set of worlds such that each world is inconsistent.
Analogously, from a satisfying truth assignment of the variables of the formula ¬f_all-just(Incons,𝒦) it is possible to create sets of axioms that represent the consistent worlds. Therefore, if there is at least one justification for Q w.r.t. 𝒦, with ψ its representation as a Boolean formula, from a satisfying truth assignment of the variables of ψ ∧ ¬f_all-just(Incons,𝒦) it is possible to create at least one consistent world where Q is true. From Lemma <ref> there exists at least one repair ℛ such that (𝒯,ℛ) ⊨ Q. So, 𝒦 ⊨_Brave Q. Basically, this consists in finding all the consistent worlds and checking whether Q holds in at least one of them.
If the formula ψ ∧ ¬f_all-just(Incons,𝒦) is not satisfiable for any ψ, it is not possible to find a consistent world where the query Q is true. Hence, Q is false in every consistent world and thus, from Lemma <ref>, it is false in every repair. So, 𝒦 ⊭_Brave Q.
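Operationally, once all-just(Q,𝒦) and all-just(Incons,𝒦) are available, the check suggested by this theorem is straightforward: ψ ∧ ¬f_all-just(Incons,𝒦) is satisfiable exactly when the smallest world of the justification, the one containing only its axioms, is consistent. The following sketch (assuming justifications are sets of axiom identifiers) exploits this observation:

    def brave_entailed(just_q, just_incons):
        """K |=_Brave Q: some justification for Q extends to a consistent world.
        psi AND (NOT f_all-just(Incons,K)) is satisfiable exactly when the world
        containing only the axioms of the justification includes no justification
        for the inconsistency."""
        return any(all(not (i <= j) for i in just_incons) for j in just_q)

    # Example <ref>: tweety:Fly is bravely entailed, since its justification
    # {(4),(2),(1)} does not include the only inconsistency justification.
    print(brave_entailed({frozenset({"(4)", "(2)", "(1)"})},
                         {frozenset({"(4)", "(2)", "(1)", "(3)"})}))   # True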
To compare our semantics with the AR semantics, we first need to give three more definitions <cit.>.
[Conflict <cit.>]
A conflict of 𝒦 = (𝒯,𝒜) is an inclusion-minimal subset of 𝒜 that is inconsistent together with 𝒯. The set of conflicts of 𝒦 is denoted confl(𝒦).
Each conflict corresponds to a justification for the inconsistency that contains the assertions of the conflict and some axioms from 𝒯, i.e., given a justification j ∈ all-just(Incons,𝒦), if we remove the terminological axioms from the justification we obtain the conflict {E_i | E_i ∈ j ∩ 𝒜}.
[Set of conflicts of a set of assertions]
Given 𝒦 = (𝒯,𝒜), the set of conflicts of a set of assertions 𝒮 ⊆ 𝒜, denoted confl(𝒮,𝒦), is
confl(𝒮,𝒦) = {ℬ | ℬ ∈ confl(𝒦), ℬ ∩ 𝒮 ≠ ∅}
[Cause <cit.>]
A cause for a Boolean query Q in a KB 𝒦 = (𝒯,𝒜) is an inclusion-minimal subset 𝒞 ⊆ 𝒜 consistent with 𝒯 such that (𝒯,𝒞) ⊨ Q. We use causes(Q,𝒦) to refer to the set of causes for Q in 𝒦.
Given a query Q, each cause corresponds to a justification for Q that contains the assertions of the cause and some axioms from 𝒯, i.e., given a justification j ∈ all-just(Q,𝒦), if we remove the terminological axioms from the justification we obtain the cause cause(j) = {E_i | E_i ∈ j ∩ 𝒜}. Thus, from the set all-just(Q,𝒦) we can define the set of causes of a set of justifications.
[Set of causes of the set of justifications]
Given 𝒦 = (𝒯,𝒜), a query Q and its set of justifications all-just(Q,𝒦), the set of causes of the set of justifications cause(all-just(Q,𝒦)) is defined as
cause(all-just(Q,𝒦)) = {cause(j) | j ∈ all-just(Q,𝒦)}
This theorem derives from Theorem 4.11 and Remark 4.12 of <cit.>. Given a possibly inconsistent KB 𝒦, a query Q, and the Boolean formulae f_all-just(Incons,𝒦) and f̄_all-just(Q,𝒦), Q is not true under the AR semantics, i.e., 𝒦 ⊭_AR Q, iff f̄_all-just(Q,𝒦) ∧ ¬f_all-just(Incons,𝒦) is satisfiable, where the formula f̄_all-just(Q,𝒦) is defined as
f̄_all-just(Q,𝒦) = f̄_all-just(Q,𝒦)^1 ∧ f̄_all-just(Q,𝒦)^2
where, with X_𝒞,ℬ new Boolean variables representing the different ways of contradicting a cause 𝒞,
f̄_all-just(Q,𝒦)^1 = ⋀_{𝒞 ∈ causes(Q,𝒦)} ⋁_{ℬ ∈ confl(𝒞,𝒦)} X_𝒞,ℬ
and, with vars(f) the set of variables appearing in f,
f̄_all-just(Q,𝒦)^2 = ⋀_{X_𝒞,ℬ ∈ vars(f̄_all-just(Q,𝒦)^1)} ⋀_{β ∈ ℬ∖𝒞} (¬X_𝒞,ℬ ∨ X_β)
The proof follows the same steps as in <cit.>: ¬f_all-just(Incons,𝒦) represents the set of consistent worlds, while f̄_all-just(Q,𝒦) represents the set of worlds where Q is not true. The latter is built in two steps: the first builds the formula f̄_all-just(Q,𝒦)^1, which states that every cause is contradicted, and the second builds the formula f̄_all-just(Q,𝒦)^2, which ensures that, when a cause is contradicted via a conflict, every axiom of the conflict not belonging to the cause is present. If the formula f̄_all-just(Q,𝒦) ∧ ¬f_all-just(Incons,𝒦) has an assignment of the Boolean variables that makes it satisfiable, then there is a consistent world where the query is not true, and so, from Lemma <ref>, there is at least one repair where Q is not true.
If the formula f̄_all-just(Q,𝒦) ∧ ¬f_all-just(Incons,𝒦) is not satisfiable, then it is not possible to find any consistent world where the query Q is false. Hence, Q is true in every consistent world and thus, from Lemma <ref>, it is true in every repair. So, 𝒦 ⊨_AR Q.
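The SAT encoding above decides AR entailment without materialising the repairs. For intuition, and as a cross-check on small examples, the following brute-force sketch follows the definitions directly: it enumerates the repairs from the conflicts and tests whether some cause of Q survives in every one of them; conflicts and causes are assumed to be given as sets of ABox assertion identifiers:

    from itertools import chain, combinations

    def repairs(abox, conflicts):
        """Inclusion-maximal conflict-free subsets of the ABox (brute force,
        exponential in |abox|, so only usable on tiny examples)."""
        candidates = [set(s) for s in chain.from_iterable(
            combinations(sorted(abox), r) for r in range(len(abox) + 1))]
        conflict_free = [s for s in candidates
                         if not any(c <= s for c in conflicts)]
        return [s for s in conflict_free
                if not any(s < t for t in conflict_free)]

    def ar_entailed(abox, conflicts, causes):
        """K |=_AR Q: some cause of Q is contained in every repair."""
        return all(any(c <= r for c in causes)
                   for r in repairs(abox, conflicts))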
Given a possibly inconsistent KB 𝒦, a query Q, and the set 𝒮 of all the ABox axioms that appear in at least one justification for the inconsistency, Q is true under the IAR semantics, i.e., 𝒦 ⊨_IAR Q, iff there exists at least one justification for Q w.r.t. 𝒦 such that none of its axioms belongs to 𝒮.
By definition, the justifications for the inconsistency of a KB tell which combinations of axioms cause the inconsistency. Speaking in terms of repairs, given the set of justifications for the inconsistency, we can collect all the axioms that are considered as possibly causing the inconsistency; for example, as in <cit.>, these could be the ABox axioms. Given the set 𝒮 containing all these axioms, every repair is obtained from the initial set of axioms of the KB by removing some of the axioms in 𝒮 in order to make the repair consistent. This means that the intersection of all the repairs can be found as ⋂_{ℛ ∈ Rep(𝒦)} ℛ = 𝒦 ∖ 𝒮. Therefore, for Q to be true in the intersection of the repairs, it is necessary that there is at least one justification that does not contain any axiom from 𝒮.
Suppose that every justification for Q contains at least one axiom from 𝒮. Pick a justification j and an axiom from it that is in 𝒮: this axiom cannot be in the intersection of the repairs, so j is no longer a justification for Q in the intersection of the repairs. Since this is true for all justifications for Q, Q has no justification in the intersection of the repairs, so 𝒦 ⊭_IAR Q.
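In terms of the justifications computed by the extended reasoners, the IAR check therefore reduces to a simple set operation, sketched below under the same assumption that justifications are sets of axiom identifiers:

    def iar_entailed(just_q, just_incons, abox):
        """K |=_IAR Q: some justification for Q avoids every ABox assertion that
        occurs in a justification for the inconsistency (the set S of the theorem)."""
        tainted = set().union(*just_incons) & set(abox)
        return any(not (j & tainted) for j in just_q)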
It is worth noting that our approach is more general than the repair semantics. Indeed, even if we remove the assumption of considering only the ABox axioms as possible causes of inconsistency, Theorems <ref>, <ref>, and <ref> still hold. Suppose that we consider every axiom of the KB as a possible cause of inconsistency: if an axiom does not appear in any justification for the inconsistency, then it appears in every repair. On the other hand, if an axiom is contained in a justification for the inconsistency, then there will be at least one repair that does not contain it. Since in our proofs we consider only the axioms in the justifications, they hold irrespectively of the types of axioms considered.
Moreover, all these results can be checked by means of BDDs. For example, given the set of justifications for the inconsistency, we can find the Boolean formula f_all-just(Incons,𝒦) representing it and compile the formula into a BDD BDDI. To obtain ¬f_all-just(Incons,𝒦), it is sufficient to negate BDDI. Similarly, to see whether a formula is unsatisfiable, once it is compiled into a BDD it is sufficient to check whether the BDD corresponds to the 0 leaf.
Therefore, our approach is more general in the sense that it does not impose limitations on the KB because it allows both ABox and TBox to be inconsistent. Moreover, it features some desirable characteristics, such as: it can handle every DL language equipped with a reasoner able to return justifications; there are two different tools already available to cope with this semantics; it directly works on DL KBs, without resorting to DBMS or converting to different languages; it can return justifications together with the probability of the query, making the results more informative.
Finally, to the best of our knowledge, this approach is the first implementation of a general DL reasoner that can be used to answer queries under the AR semantics <cit.>. Other approaches can answer queries under the AR semantics in the database setting, restricting the possible languages that can be used, and so the expressivity of the DL used to model the KB.
§ RELATED WORK
There are various lines of work on the topic of reasoning in case of inconsistency.
For example, the standard DL semantics can be extended by the definition of a Four-Valued Logic, where classical implication is used together with two other types of implication of greater strengthย <cit.>.
Their definition is given in terms of projections of concepts, i.e., every concept in a KB is associated with two sets of individuals containing those known to belong to the concept and those known to belong to its complement. These two sets can overlap. Each implication can be translated into classical DL sub-class implications in a pre-processing step of the KB.
Another possible semantics introduces different types of negations, as inย <cit.>, where two types of negation are considered to handle information known to be false and information that is not known to be true. However, these approaches force the developers of the KBs to distinguish the different versions of implications or negations, which may be not intuitive for those who are not expert of logics. Changing the standard syntax and semantics can also bring compatibility issues with other KBs and the impossibility of using standard reasoners.
Another approach considers the repairs of an inconsistent KB. As already discussed, the repairs are parts (inclusion-maximal subsets) of the assertional axioms of the (inconsistent) KB that are consistent with the terminological axioms. These represent the possible ways of repairing an inconsistent KB by preserving as much information as possible (in the sense of set inclusion) to obtain a consistent KB. There could be many different repairs, depending on how many assertional axioms cause the inconsistency. There are several ways to build repairs, e.g., Baader et al. <cit.> look for optimal repairs where the least number of consequences is removed w.r.t. ℰℒ KBs, and several semantics based on the repairs that make the inference process tolerant to inconsistency. A query is true w.r.t. the inconsistent KB if, for example, it is true in every repair of the KB (AR semantics <cit.>), in the intersection of the repairs (IAR semantics <cit.>), or in at least one repair (Brave semantics <cit.>).
A comprehensive introduction to repairs can be found in <cit.>.
One of the most prominent ways to answer queries under the AR semantics reduces the problem to SAT <cit.>. A possibility of avoiding the use of a SAT solver and building the repairs is to iteratively select a subset of the (inconsistent) KB until the subset entails the query <cit.>. This approach is comparable to the Brave semantics. Ludwig and Peñaloza <cit.> proposed to precompile and save all the repairs (possibly exponentially many) to answer subsumption queries w.r.t. TBoxes in ℰℒ, thus without considering individuals in the ABox. Answering queries under Brave, AR, IAR and other semantics is efficient if all the repairs are precompiled. They also proposed a second, and more efficient, way: labelling every subsumption axiom with the set of repairs that contain it. This requires polynomial space without effectively building the repairs, which can be read using a directed acyclic graph.
Some approaches propose the use of priority levels or weights <cit.> to select repairs with higher priority/weight first or use weights to stratify the KBs and build sub-ontologies by keeping as many axioms with higher weights as possible <cit.>. To facilitate the parameter assignment task, one can exploit modularization-based approaches (such asย <cit.>) to find local modules and assign the parameters considering only axioms in individual modules. Another way of facilitating the work of knowledge engineers, especially in very large KBs, could be to consider the reliability of data source (e.g., associate a confidence level to each data source and use this value as a weight for the information extracted from it) or to consider DISPONTE and use EDGEย <cit.> or LEAPย <cit.>, two tools for automatically learning parameters of a DISPONTE KB by exploiting data contained in it.
Finally, other approaches translate the KB into logic programming clausesย <cit.>, performing inference through argumentation or abduction. Argumentation is exploited also in the work of Bouzeghoub et al.ย <cit.>. They present an approach based on possibilistic DLย <cit.>, where axioms of a KB are associated with a real value representing its confidence degree. A Possibilistic DL KB defines a possibility distribution over worlds and assign a necessity measure to axioms. This necessity measure is used to check the entailment of the axiom w.r.t. the KB. The management of inconsistency is done by means of argumentation, by constructing an argumentation tree to find arguments and rebuttals in order to find justifications for the query. This approach shares many ideas with our approach. Unfortunately, the probabilistic semantics cannot be directly compared because the numerical values do not have the same meaning. Moreover, an implementation of their work is not available, so it is not possible to compare the two approaches empirically.
Despite the number of approaches presented above, there is a lack of proposals that exploit probabilistic information even though it is pervasive in real world domains.
A preliminary idea was presented by <cit.> where the authors consider probabilistic Datalog± ontologies and repairs that can be built by removing also terminological axioms from the original KB, i.e., the inconsistency may also come from TBox axioms. They consider the possible world semantics, which corresponds to DISPONTE, and provide complexity results but not a system for performing such type of inference.
Other approaches consider completely different logics, e.g., inย <cit.> the authors propose a probabilistic logic based on the probabilistic extensions of the Belnap-Dunn logicย <cit.> combined with a bilattice logic, or an extension of Lukasiewicz's logicย <cit.>, defining a two-layer modal logical framework to account for belief in agent systems working on possibly inconsistent probabilistic information. However, these logics are different from DLs, thus it is not possible to make a meaningful comparison and they cannot be applied to the KBs connected in the open linked data cloud. Potyka and Thimm defined reasoning on linear probabilistic KBs <cit.>, where the uncertain knowledge in the KB is represented as linear probabilistic constraints, covering different logical formalisms such as Nilsson's logicย <cit.> or Lukasiewicz's logicย <cit.>. In this approach, a so called generalized model is used to allow probabilistic entailment on inconsistent KBs. A generalized model is defined as a probability distribution that minimally violates the KB. So, the probability values associated with the constraints must be carefully chosen in order not to violate the probability distribution. Moreover, since the violation is computed on the constraints, a second KB must be added that contains a consistent set of integrity constraints. Differently, DISPONTE considers each axiom as independent so the assignment of a probability value can be done independently for each axiom. Koch and Olteanuย <cit.> consider probabilistic databases and apply a conditioning operation that removes possible worlds not fulfilling given conditions, where conditions are a kind of constraints. This is similar to building repairs. Then, the query is asked w.r.t. the reduced database, and a confidence computation will return Bayesian conditional probabilities w.r.t. the original database. Both steps are NP-hard, while our approach is #P-hard, as we discussed in the Sectionย <ref>.
Tammet et al.ย <cit.> consider FOL, a semantics based on degrees of belief, and present an algorithm with various steps: they first calculate the decreasing confidence by means of a modified resolution search collecting different answers and justifications. Next the justifications are combined by means of the cumulation operation. Finally, the algorithm collects negative evidence for all the answers obtained so far, separately for each individual answer. This search is also split into resolution and cumulation. The search may not terminate, so a time limit is imposed, and the answer may not be complete.
Both proposals share similarities with our approach, the first in the use of the probabilistic semantics, the latter in the search of two sets of justifications.
From a different point of view, the use of probability to measure the inconsistency of a KB is related to the plethora of inconsistency measures defined in literatureย <cit.>. An inconsistency measureย <cit.> assesses the severity of inconsistency, it gives the amount of inconsistency in a knowledge base, possibly with respect to the size of the KB. Usually, measures that fall in the first case are called absolute, differently from the latter, called relative, that consider, e.g., the number of assertions in the ABox, or the number of axioms in the whole KB. These measures can help in handling inconsistency. However, justifications are needed to debug the KB.
In the last decades, many different measures have been proposed. For example, Hunter and Konieczny define a measure that assigns 1 to a KB that is inconsistent and 0 otherwise, and a measure that counts the number of minimal inconsistent subsets of the KB <cit.>. This has been extended to take into account also the number of contradictory set of axioms <cit.>. Another approach is to measure the ratio of the number of atoms in the minimal inconsistent subsets over the total number of atomsย <cit.>. The d-hit inconsistency measure <cit.> indicates the size of the smallest set that has a non-empty intersection with every minimal inconsistent subset. The Hitting Set inconsistency measure is based on the concept of a hitting set H and calculates the smallest size that H might have, where H is a set of classical interpretations such that each formula is true in at least one interpretation. The maximal sets of subsets of the KB that can be made true by an interpretation are exactly the maximal consistent sets.
Some of these measures consider Priestโs three valued logic LPย <cit.>, in which, besides the classical truth values โtrueโ T and โfalseโ F, a third truth value B denoting inconsistency is considered. Thus, the minimum number of atoms that need to be assigned B to get at least a model of KB in Priest's logic can be used as an inconsistency measureย <cit.>, possibly divided by the number of atomsย <cit.>. Others consider the KB as a set of consistent logic formulae (inconsistent formulae can be split into consistent sub-formulae). Each consistent formula has at least one model, which is a world, which can be represented by a point in a Euclidean space. Using these points, it is possible to define measures based on the distance between the points in the Euclidean spaceย <cit.>.
In a broad sense, computing the probability of the inconsistency of a KB is equivalent to computing an inconsistency measure that depends on the degree of belief in the axioms of the KB. From this point of view, the resulting measure may be considered as something in between of an absolute and a relative measure, because it does not consider the size of the KB, but the probabilities specified in it.
De Bona et al.ย <cit.> provide a classification of inconsistency using a bipartite graph relating logical formulae (built on axioms)
and minimal inconsistent subsets of the KB. This allows one to compute different measures based on the count of formulae or subsets of the axioms of the KB from the bipartite graph. However, a direct comparison among all these measures is not possible in the majority of cases, because each has its
rationale and grows differently as the KB increases.
§ EXPERIMENTS
To test the feasibility of our approach, we extended our reasoners TRILL <cit.> and BUNDLE <cit.> as described in the previous sections. TRILL is written in Prolog and can handle DL KBs, while BUNDLE is written in Java and uses an underlying non-probabilistic OWL reasoner to collect the set of justifications w.r.t. OWL 2 (essentially 𝒮ℛ𝒪ℐ𝒬(D)) KBs. In particular, BUNDLE embeds Pellet <cit.>, Hermit <cit.>, FaCT++ <cit.> and JFaCT[<http://jfact.sourceforge.net/>] as OWL reasoners, and three justification generators, namely GlassBox (only for Pellet), BlackBox and OWL Explanation. Moreover, BUNDLE also encapsulates TRILL.
In this paper we refer to the extensions of the reasoners TRILL and BUNDLE as TRILLInc and BUNDLEInc, respectively. In their original version, both reasoners, in case of an inconsistent KB, can be used only to collect the justifications for the inconsistency of the KB, while their extensions apply the notions explained in this paper.
As regards TRILL and TRILLInc, on the almost 8000 lines of code of TRILL, we needed to add only 144 lines of code and to modify another hundred or so lines to implement the computation of P_C(Q). To implement the computation of the repair semantics we needed to add approximately 170 lines of code. TRILLInc implements the extension of the tracing function.
The code of TRILLInc, together with the KBs used in this section, is available on GitHub[<https://github.com/rzese/trill_inc>].
As regards BUNDLE and BUNDLEInc, we adapted the code so that all the internal reasoners can be used to solve queries w.r.t. possibly inconsistent KBs. The code of BUNDLEInc implements the addition of the negation of the query to the KB by means of the fresh concept C_Q_p. It is available on Bitbucket[<https://bitbucket.org/machinelearningunife/bundle/src/bundle_inc/>].
We considered KBs similar to those presented in Test 3 of <cit.>, where the authors built 5 KBs of increasing size containing the following axioms for i varying from 1 to n, with n∈{2,4,6,8,10}:
B_{i-1} ⊑ P_i ⊓ Q_i        P_i ⊑ B_i        Q_i ⊑ B_i
where B_i, Q_i, P_i are simple concepts. However, in principle, they can be concept expressions of any expressivity handled by the reasoner.
These KBs present a number of justifications for the query Q = x:B_n that grows exponentially with n. This KB was chosen to force the creation of an exponentially growing number of justifications, preferring a large number of justifications over large KBs. The most expensive operation in a reasoner based on the tableau is the application of expansion rules, since it needs the management of choice points, backtracking and, possibly, the creation of new tableaux, depending on the implementation of the reasoner. These KBs stress the entire inference process, from justification finding to the management of the BDDs. Moreover, it is easy to decide where to introduce the inconsistency.
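To see where the exponential growth comes from, note that a minimal justification for x:B_n must contain, for every level i, the axiom B_{i-1} ⊑ P_i ⊓ Q_i and exactly one of P_i ⊑ B_i or Q_i ⊑ B_i, giving 2^n justifications. The following Python sketch enumerates them for checking purposes; axioms are written as plain strings only for counting, which is of course not how the test KBs were encoded:

    from itertools import product

    def justifications(n):
        """Minimal justifications for x:B_n: one per choice of branch at each level."""
        for branches in product("PQ", repeat=n):
            just = ["x : B_0"]
            for i, b in enumerate(branches, start=1):
                just.append(f"B_{i-1} subClassOf (P_{i} and Q_{i})")
                just.append(f"{b}_{i} subClassOf B_{i}")
            yield frozenset(just)

    print(len(set(justifications(10))))   # 1024 = 2^10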
In this test, we created a KB for each nโ{3,4,5,6,7,8,9,10} to which we added the class assertion x:B_0.
Every axiom of the KB has been made probabilistic by annotating it with a random value of probability.
Then, for each KB, we created a second version in which we added a disjoint-classes axiom asserting that classes B_j and B_k are disjoint, with j,k set as explained below. This, combined with the class assertion axiom, makes the KBs inconsistent.
We built a KB for each value of n in order to see how the running time changes as n increases.
We run the query Q=x:B_n 10 times w.r.t. each KB in order to compute the average running time for answering the query. We compared the running time of the original version of the reasoner to solve Q w.r.t. the KBs without the disjoint-classes axiom, with the running time taken by our version of the reasoner w.r.t.:
* KBs without the disjoint-classes axiom, so consistent, to see how much overhead we add to the whole process and so, how much the introduced extension affects the reasoning in case of consistent KBs;
* KBs with the disjoint-classes axiom considering the classes B_0 and B_1 (j=0,k=1), thus inconsistent, where there are only two justifications of the inconsistency consisting both of three axioms, to see how the running time changes in the best case, i.e., when collecting justifications for the inconsistency is trivial;
* KBs with the disjoint-classes axiom considering the classes B_n and B_n-1 (j=n,k=n-1), thus inconsistent, whose justifications are the same as those of the query, to see how the running time changes in the worst case. In the worst case, i.e., n=10, the KB contains 30 axioms but there are 2^n+2^n=1024+1024 justifications for the query Q, a situation difficult to achieve even with large KBs.
Tableย <ref> shows, for each KB, the ratio between the running time of TRILL and that of TRILLInc in the settings above. Average ratios between the running time of TRILL and TRILLInc are also shown, computed by considering all the KBs and the KBs of sizes from 5 to 10, i.e., the KBs with a significant number of justifications and so KBs where the extension could affect more the running time. The average running time in seconds with their Relative Standard Deviation in percentage (%RSD) are shown for both TRILL and TRILLInc.
All the tests have been performed on a Linux machine equipped with an Intel Core i7-8565U CPU @ 1.80GHz and 16 GiB of RAM.
As one can see, TRILLInc adds an overhead which spans between less than 1% (Ratio 1.00, i=2, setting 2) and 220% (Ratio 3.21, i=6, setting 3). For small KBs, the overhead on the reasoning time is mitigated by an initialization phase of the reasoner, which affects all the executions. With the increase of the size of the KBs, this phase becomes more and more negligible. However, in the worst case, setting 3, the extension to the reasoner implemented in TRILLInc increases the running time by a factor of 2.43 on average, which rises to an average of 2.88 considering only the larger KBs (size 5 to 10). Moreover, from setting 1 we can see that, in the case of consistent KBs, the overhead is around 15% of the running time, which is acceptable given that we also compute the repair semantics. So, the presented extension does not significantly affect the running time w.r.t. consistent KBs or inconsistent KBs with few, small justifications for the inconsistency.
There is also a fourth case, when the query is x:B_1 while B_n and B_n-1 are disjoint. In this case, when comparing the performance of TRILLInc with that of TRILL (w.r.t. the KBs without the disjoint-classes axiom) the increase is similar to those of setting 2. However, one should note that, in case of an inconsistent KB, classical reasoners such as TRILL do not perform inference. In this case, the only query one can pose is whether the KB is inconsistent, then repair it using the collected justifications if possible, and ask the original query w.r.t. the debugged KB. Therefore, considering all these steps and that the proposed extension combines the search for justifications for queries with that for justifications of inconsistency instead of doing them separately, the ratio may decrease significantly and possibly become smaller than 1.
In order to empirically compare our approach with the repair semantics, we ran BUNDLE using its default settings, i.e., Pellet with the GlassBox justification generator, against CQApri <cit.>, a system implemented for querying DL-Lite KBs under AR, IAR and Brave semantics. CQApri is implemented in Java and exploits a relational database to store the assertions of the KB, a query rewriting engine for the DL-Lite language and a SAT solver to answer queries under the AR semantics.
For the comparison, we considered a KB used by the authors of CQApri to test their system[Available at <https://www.lri.fr/ย bourgaux/CQApri>]. They used a simplified version of the Lehigh University Benchmark (LUBM) ontologyย <cit.>, which differs from the original version for the removal of the axioms that cannot be modelled by DL-Lite, and generated different ABoxes of increasing size containing inconsistencies w.r.t. the simplified LUBM TBox.
In particular, we considered one of these versions, from which we created an OWL KB to use with BUNDLE containing all the assertions that CQApri stores in the database. The resulting KB models one university and contains 108,864 axioms in the ABox (28,023 class assertions, 47,640 object property assertions and 33,201 data property assertions) for a total of 127,320 axioms in the KB. From this KB, we randomly created 200 Boolean queries of the form a_i:C_i and (a_i,b_i):R_i by sampling individuals a_i and b_i, classes C_i and object properties R_i. Each of these 200 queries is created to have at least one justification. We also created 100 queries in the same way but having zero justifications, in order to test the performance of BUNDLE even in case the query is not entailed. The latter set of queries cannot be given in input to CQApri, because it exits with an error.
Table <ref> shows the results in terms of averaged running time for BUNDLE and CQApri to solve the 200 queries with justifications, while Table <ref> contains more details about the results of BUNDLE when solving queries with and without justifications. The average time in milliseconds for computing the repair semantics on the set of 200 queries is 0.05 ± 0.2 in case of no justifications and 0.08 ± 0.5 in case of 1 justification for the query. The high standard deviation in the first case is due to the fact that the computation is almost instantaneous, i.e., many queries present a repair computation time of 1 or less than 1 millisecond, while in some cases this time reaches 6 milliseconds. But this difference could also depend on the load of the test machine. Finally, the time for computing the repair semantics is 0.13 ± 0.42 in case of more than 1 justification. The high standard deviation in this case is due to the high variability in the number of justifications.
As can be seen, BUNDLE has an average running time of more than 218 seconds, while CQApri can solve the same queries in little less than 12 seconds on average.
However, it is important to bear in mind the main differences between the two systems:
* CQApri is tailored to DL-Lite, so it imposes a stronger limitation on the expressivity than BUNDLE, which considers OWL 2 (essentially 𝒮ℛ𝒪ℐ𝒬(D)). This means that CQApri cannot be used to run the queries of Example <ref> because complex concepts such as intersections can be used only as superclasses.
* CQApri makes use of a DBMS to store and access the assertions of the ABox, while BUNDLE stores internally the entire KB. Thus, CQApri can scale better than BUNDLE in terms of size of KB.
* CQApri answers queries under Brave, AR, and IAR semantics, while BUNDLE adds the resolution under DISPONTE and returns the set of justifications for both the inconsistency and the query, allowing a full analysis of the query and the collection of information for debugging the KB at the same time. CQApri can return the repairs, but in this comparison we did not ask for that, because we think it is unfair to ask more information from a system in a comparison only regarding query answering. We would like to highlight that also for queries with 79 justifications, the check for the repair semantics in BUNDLE is almost instantaneous (with an average time to compute the answer under the repair semantics, averaged over all the 200 queries, of 0.0001 ± 0.0003 seconds).
* CQApri can also solve conjunctive queries, while BUNDLE only Boolean queries.
We considered BUNDLE in the comparison in order to show that every DL reasoner available can be used to exploit DISPONTE to answer queries w.r.t. possibly inconsistent KBs. However, all the considerations done about BUNDLE are also valid for TRILL.
§ CONCLUSIONS
We have presented a simple but effective approach to cope with inconsistent KBs, which does not require changing the syntax of the logic and exploits the probabilistic semantics DISPONTE.
Inference algorithms that use the tableau and are able to build the set of all the justifications of a query can be easily adapted to the new task. This allows the application of this approach to every DL language equipped with a suitable set of tableau expansion rules. We implemented the proposed extension in TRILL, developing TRILLInc, and BUNDLE, developing BUNDLEInc, and tested them w.r.t. two different KBs. For the future, we plan to study the generalization to FOL of the presented extensions of the semantics.
§ ACKNOWLEDGMENT
This work was partly supported by the "National Group of Computing Science (GNCS-INDAM)".
[AMM15]DBLP:conf/sat/ArifMM15
M.ย Fareed Arif, Carlos Mencรญa, and Joรฃo Marques-Silva.
Efficient MUS enumeration of horn formulae with applications to
axiom pinpointing.
In Marijn Heule and Seanย A. Weaver, editors, SAT 2015, volume
9340 of LNCS, pages 324โ342. Springer, 2015.
https://doi.org/10.1007/978-3-319-24318-4_24
doi:10.1007/978-3-319-24318-4_24.
[BB16]DBLP:conf/rweb/BienvenuB16
Meghyn Bienvenu and Camille Bourgaux.
Inconsistency-tolerant querying of description logic knowledge bases.
In Jeffย Z. Pan, Diego Calvanese, Thomas Eiter, Ian Horrocks, Michael
Kifer, Fangzhen Lin, and Yuting Zhao, editors, RW 2016, volume 9885 of
LNCS, pages 156โ202. Springer, 2016.
https://doi.org/10.1007/978-3-319-49493-7_5
doi:10.1007/978-3-319-49493-7_5.
[BBG14]DBLP:conf/aaai/BienvenuBG14
Meghyn Bienvenu, Camille Bourgaux, and Franรงois Goasdouรฉ.
Querying inconsistent description logic knowledge bases under
preferred repair semantics.
In Carlaย E. Brodley and Peter Stone, editors, AAAI-2014, pages
996โ1002. AAAI Press, 2014.
URL:
<http://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8231>.
[BBG19]DBLP:journals/jair/BienvenuBG19
Meghyn Bienvenu, Camille Bourgaux, and Franรงois Goasdouรฉ.
Computing and Explaining Query Answers over Inconsistent DL-Lite
Knowledge Bases.
Journal of Artificial Intelligence Research, 64:563โ644, 2019.
https://doi.org/10.1613/jair.1.11395
doi:10.1613/jair.1.11395.
[Bel19]belnap2019computer
Nuelย D Belnap.
How a computer should think.
In New Essays on Belnap-Dunn Logic, pages 35โ53. Springer,
2019.
[BFMN20]DBLP:conf/dali/BilkovaFMN20
Marta Bรญlkovรก, Sabine Frittella, Ondrej Majer, and Sajad Nazari.
Belief based on inconsistent information.
In Manuelย A. Martins and Igor Sedlรกr, editors, DaLรญ
2020, volume 12569 of LNCS, pages 68โ86. Springer, 2020.
https://doi.org/10.1007/978-3-030-65840-3_5
doi:10.1007/978-3-030-65840-3_5.
[BG20]DBLP:journals/ai/BesnardG20
Philippe Besnard and John Grant.
Relative inconsistency measures.
Artificial Intelligence, 280:103231, 2020.
https://doi.org/10.1016/j.artint.2019.103231
doi:10.1016/j.artint.2019.103231.
[BGHK19]DBLP:journals/jair/BonaGHK19
Glauberย De Bona, John Grant, Anthony Hunter, and Sรฉbastien Konieczny.
Classifying inconsistency measures using graphs.
Journal of Artificial Intelligence Research, 66:937โ987, 2019.
https://doi.org/10.1613/jair.1.11852
doi:10.1613/jair.1.11852.
[BHS08]Badeer:2008:DL:52211
Franz Baader, Ian Horrocks, and Ulrike Sattler.
Description Logics, chapterย 3, pages 135โ179.
Elsevier, Amsterdam, 2008.
[BJMR17]DBLP:conf/webi/BouzeghoubJMR17
Amel Bouzeghoub, Saรฏd Jabbour, Yue Ma, and Badran Raddaoui.
Handling conflicts in uncertain ontologies using deductive
argumentation.
In Amitย P. Sheth, Axel Ngonga, Yin Wang, Elizabeth Chang, Dominik
Slezak, Bogdan Franczyk, Rainer Alt, Xiaohui Tao, and Rainer Unland, editors,
Proceedings of the International Conference on Web Intelligence,
Leipzig, Germany, August 23-26, 2017, pages 65โ72. ACM Press, 2017.
https://doi.org/10.1145/3106426.3106454
doi:10.1145/3106426.3106454.
[BKKN21]baader2021computing
Franz Baader, Patrick Koopmann, Francesco Kriegel, and Adrian Nuradiansyah.
Computing optimal repairs of quantified ABoxes wrt static EL
TBoxes.
In CADE 2021, 2021.
[BP10a]DBLP:journals/jar/BaaderP10
F.ย Baader and R.ย Peรฑaloza.
Automata-based axiom pinpointing.
Journal of Automated Reasoning, 45(2):91โ129, 2010.
[BP10b]DBLP:journals/logcom/BaaderP10
F.ย Baader and R.ย Peรฑaloza.
Axiom pinpointing in general tableaux.
Journal of Logic and Computation, 20(1):5โ34, 2010.
[BR13]DBLP:conf/ijcai/BienvenuR13
Meghyn Bienvenu and Riccardo Rosati.
Tractable approximations of consistent query answering for robust
ontology-based data access.
In Francesca Rossi, editor, IJCAI 2013, pages 775โ781. AAAI
Press/IJCAI, 2013.
URL:
<http://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/paper/view/6904>.
[CD08]DBLP:journals/ai/ChaviraD08
Mark Chavira and Adnan Darwiche.
On probabilistic inference by weighted model counting.
Artificial Intelligence, 172(6-7):772โ799, 2008.
[CLP16]DBLP:conf/ecai/CeylanLP16
ฤฐsmailย ฤฐlkan Ceylan, Thomas Lukasiewicz, and Rafael Peรฑaloza.
Complexity results for probabilistic
datalog^.
In Galย A. Kaminka, Maria Fox, Paolo Bouquet, Eyke Hรผllermeier,
Virginia Dignum, Frank Dignum, and Frank van Harmelen, editors, ECAI
2016, volume 285 of Frontiers in Artificial Intelligence and
Applications, pages 1414โ1422. IOS Press, 2016.
https://doi.org/10.3233/978-1-61499-672-9-1414
doi:10.3233/978-1-61499-672-9-1414.
[CRZ+18]CotRigZesBelLam-IC-2018
Giuseppe Cota, Fabrizio Riguzzi, Riccardo Zese, Elena Bellodi, and Evelina
Lamma.
A modular inference system for probabilistic description logics.
In Davide Ciucci, Gabriella Pasi, and Barbara Vantaggi, editors, SUM 2018, volume 11142 of LNCS, pages 78โ92, Heidelberg, Germany,
2018. Springer.
URL: <http://mcs.unife.it/ย friguzzi/Papers/CotRigZes-SUM18.pdf>,
https://doi.org/10.1007/978-3-030-00461-3_6
doi:10.1007/978-3-030-00461-3_6.
[CZB+15a]CotZes15-AIIADC-IW
Giuseppe Cota, Riccardo Zese, Elena Bellodi, Evelina Lamma, and Fabrizio
Riguzzi.
Learning probabilistic ontologies with distributed parameter
learning.
In Elena Bellodi and Alessio Bonfietti, editors, Doctoral
Consortium (DC) co-located with the 14th Conference of the Italian
Association for Artificial Intelligence (AI*IA 2015), volume 1485 of CEUR Workshop Proceedings, pages 7โ12, Aachen, Germany, 2015.
[CZB+15b]CotZesBel15-ILP-IC
Giuseppe Cota, Riccardo Zese, Elena Bellodi, Fabrizio Riguzzi, and Evelina
Lamma.
Distributed parameter learning for probabilistic ontologies.
In Katsumi Inoue, Hayato Ohwada, and Akihiro Yamamoto, editors, 25th International Conference on Inductive Logic Programming, 2015.
[DKT07]DBLP:conf/ijcai/RaedtKT07
Luc De Raedt, Angelika Kimmig, and Hannu Toivonen.
ProbLog: A probabilistic Prolog and its application in link
discovery.
In Manuelaย M. Veloso, editor, 20th International Joint
Conference on Artificial Intelligence (IJCAI 2007), volumeย 7, pages
2462โ2467. AAAI Press, 2007.
[DWS15]DBLP:conf/aaai/DuWS15
Jianfeng Du, Kewen Wang, and Yi-Dong Shen.
Towards tractable and practical abox abduction over inconsistent
description logic ontologies.
In Blai Bonet and Sven Koenig, editors, AAAI 2015, pages
1489โ1495. AAAI Press, 2015.
URL:
<http://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9389>.
[FGMT19]DBLP:conf/aike/FiorentinoGMT19
Nicola Fiorentino, Sergio Greco, Cristian Molinaro, and Irina Trubitsyna.
ACQUA: approximate consistent query answering over inconsistent
knowledge bases.
In 2nd IEEE International Conference on Artificial
Intelligence and Knowledge Engineering, AIKE 2019, Sardinia, Italy, June
3-5, 2019, pages 107โ110. IEEE Press, 2019.
https://doi.org/10.1109/AIKE.2019.00027
doi:10.1109/AIKE.2019.00027.
[FH10]fang2010reasoning
Jun Fang and Zhisheng Huang.
Reasoning with inconsistent ontologies.
Tsinghua Science and Technology, 15(6):687โ691, 2010.
[GCS10]DBLP:journals/aai/GomezCS10
Sergioย Alejandro Gรณmez, Carlosย Ivรกn Chesรฑevar, and
Guillermoย Ricardo Simari.
Reasoning with inconsistent ontologies through argumentation.
Appl. Artif. Intell., 24(1&2):102โ148, 2010.
https://doi.org/10.1080/08839510903448692
doi:10.1080/08839510903448692.
[GH11]DBLP:conf/ijcai/GrantH11
John Grant and Anthony Hunter.
Measuring the good and the bad in inconsistent information.
In Toby Walsh, editor, 22nd International Joint Conference on
Artificial Intelligence (IJCAI 2011), pages 2632โ2637. AAAI Press/IJCAI,
2011.
URL: <https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-438>.
[GH13]DBLP:conf/ecsqaru/GrantH13
John Grant and Anthony Hunter.
Distance-based measures of inconsistency.
In Lindaย C. vanย der Gaag, editor, ECSQARU 2013, volume 7958
of LNCS, pages 230โ241. Springer, 2013.
https://doi.org/10.1007/978-3-642-39091-3_20
doi:10.1007/978-3-642-39091-3_20.
[GPH05]DBLP:journals/ws/GuoPH05
Yuanbo Guo, Zhengxiang Pan, and Jeff Heflin.
LUBM: A benchmark for OWL knowledge base systems.
Journal of Web Semantics, 3(2-3):158โ182, 2005.
https://doi.org/10.1016/j.websem.2005.06.005
doi:10.1016/j.websem.2005.06.005.
[Gra78]DBLP:journals/ndjfl/Grant78
John Grant.
Classifications for inconsistent theories.
Notre Dame J. Formal Log., 19(3):435โ444, 1978.
https://doi.org/10.1305/ndjfl/1093888404
doi:10.1305/ndjfl/1093888404.
[HK08]DBLP:conf/kr/HunterK08
Anthony Hunter and Sรฉbastien Konieczny.
Measuring inconsistency through minimal inconsistent sets.
In Gerhard Brewka and Jรฉrรดme Lang, editors, KR-2008, pages 358โ366. AAAI Press, 2008.
URL: <http://www.aaai.org/Library/KR/2008/kr08-035.php>.
[HKS06]horrocks2006even
Ian Horrocks, Oliver Kutz, and Ulrike Sattler.
The even more irresistible SROIQ.
In Patrick Doherty, John Mylopoulos, and Christopherย A. Welty,
editors, KR-2006, pages 57โ67. AAAI Press, 2006.
>
|
http://arxiv.org/abs/2306.02272v2
|
20230604063313
|
OWQ: Lessons learned from activation outliers for weight quantization in large language models
|
[
"Changhun Lee",
"Jungyu Jin",
"Taesu Kim",
"Hyungjun Kim",
"Eunhyeok Park"
] |
cs.CL
|
[
"cs.CL"
] |
Large language models (LLMs) with hundreds of billions of parameters show impressive results across various language tasks using simple prompt tuning and few-shot examples, without the need for task-specific fine-tuning. However, their enormous size requires multiple server-grade GPUs even for inference, creating a significant cost barrier. To address this limitation, we introduce a novel post-training quantization method for weights with minimal quality degradation. While activation outliers are known to be problematic in activation quantization, our theoretical analysis suggests that we can identify factors contributing to weight quantization errors by considering activation outliers. We propose an innovative PTQ scheme called outlier-aware weight quantization (OWQ), which identifies vulnerable weights and allocates high-precision to them. Our extensive experiments demonstrate that the 3.01-bit models produced by OWQ exhibit comparable quality to the 4-bit models generated by OPTQ.
ยง INTRODUCTION
Large language models (LLMs) <cit.> demonstrate impressive generation performance on a wide range of complex language tasks, relying solely on prompt tuning and few-shot examples without the need for task-specific fine-tuning. This suggests a potential increase in LLM applications in the future. However, the memory and computational demands of LLMs present significant challenges, not just for training but for inference as well. For instance, the GPT3-175B model requires about 350GB of space for storing model parameters. More than 5 A100 GPUs, each with 80GB of memory, are required for inference, and building such a system costs hundreds of thousands of dollars. This high cost hampers the widespread adoption of LLMs.
Weight quantization is an attractive optimization method for large language models (LLMs) as it can significantly reduce system requirements.
By storing parameters with low precision through quantization, storage space can be considerably saved. This also leads to various performance benefits, including increased ALU utilization and reduced communication costs by allocating more operations to a single GPU <cit.>. Furthermore, quantization can address memory bottlenecks that cause performance degradation, particularly in single-batch inference scenarios with low data reuse. Performance improvement is also expected through low-precision acceleration. Advanced studies <cit.> have shown that matrix multiplication with 3-bit weight and fp16 activation exhibits remarkable performance improvements on a single GPU compared to matrix multiplication with fp16 weights and activation using 5 GPUs. Thus, weight quantization of LLMs is crucial as it can offer practical performance enhancements.
However, weight quantization of LLMs presents a trade-off, as it can degrade the output quality of the model. Minimizing this quality loss is vital for widespread adoption, but LLM quantization is challenging: the quantization process must be fast given the large size of LLMs, and task-specific fine-tuning must be avoided to preserve zero-shot generative performance. Recently, OPTQ <cit.> (also known as GPTQ <cit.>) introduced layer-wise post-training quantization (PTQ) based on the optimal brain compression (OBC) algorithm <cit.>. This approach allows the largest OPT <cit.> model to be quantized in approximately 4 hours, with minimal impact on accuracy, even at 3-bit precision. The model's capacity can be compressed to less than a quarter of the size of the fp16 format, making it accessible even with a single A100 GPU. This work demonstrates the potential of extreme weight quantization for LLMs.
In this paper, we introduce a new weight quantization technique called Outlier-aware Weight Quantization (OWQ), an advancement over OPTQ that significantly enhances the quality of the quantized model. Previous studies <cit.> have identified outliers in certain activation feature dimensions. These outliers, due to their wide dynamic range, complicate activation quantization. While our work keeps activations in fp16, our analysis indicates that activation outliers play a key role in amplifying quantization-induced errors in the weights. OWQ uses a mixed-precision quantization scheme, which applies higher precision to the weights made vulnerable to quantization error by activation outliers. Our extensive analysis shows that the 3.01-bit OWQ model achieves quality comparable to the 4-bit OPTQ model. To the best of our knowledge, we are the first to incorporate the existence of activation outliers into extreme weight quantization. Our key contributions can be summarized as follows:
* We introduce a novel analysis that reveals how activation outliers can amplify the error in weight quantization, providing valuable insights for improving the quality of quantized weights.
* Building upon this analysis, we propose a new weight quantization algorithm called OWQ. OWQ not only preserves the benefits of low-precision but also significantly reduces the quantization error.
* Through extensive analysis, we demonstrate that OWQ achieves comparable performance with 3.01-bit to that of OPTQ with 4-bit across diverse LLM tasks without incurring notable performance overhead.
ยง BACKGROUND AND RELATED WORKS
ยง.ยง Quantization and LLMs
Quantization is a widely used optimization technique aimed at exploiting the benefits of low precision while maintaining the quality of the target network. While the primary advantage is the reduction in storage space, it is worth noting that quantization also provides substantial performance improvement through low-precision acceleration. The main drawback of quantization is the degradation of output quality. Mitigating quality degradation is highly important, and early studies focused on quantization-aware training (QAT) <cit.>, which tried to restore quality through additional training. However, as the understanding of quantization grew and various techniques emerged, post-training quantization (PTQ) <cit.> has been actively studied, enabling quality preservation without training.
Due to the LLM's necessity of significant storage space and computational resources, it is crucial to apply optimization via quantization. Numerous studies have been conducted, but the distinctive characteristics of LLMs steer the research in a unique direction. In general, QAT has been favored for extremely low-bit precision to minimize quantization error. However, it is less favorable for LLM quantization because of the high cost of the training environment. Moreover, QAT might not be the optimal choice to preserve the task-agnostic generation performance of LLMs.
Instead, PTQ has emerged as an important topic for LLM quantization. This field has two distinct approaches: one aims to quantize both activations and weights to int8ย <cit.>, considering both capacity reduction and performance enhancement. In contrast, the second approach focuses solely on weight quantization to sub-4-bit precisionย <cit.>. In this paper, we align our work with the latter approach. While concentrating on weight quantization, we devise a novel quantization scheme, drawing significant inspiration from int8-related research on activation quantization.
ยง.ยง Int8 Quantization for Activation and Weight
Int8 multiplication can provide up to 2x performance improvements and more than 5x energy consumption reduction compared to fp16 baselines <cit.>. Numerous studiesย <cit.> aim to quantize both activation and weight to int8 for matrix multiplication operations in LLMs. However, those studies identify a unique challenge of LLMs for activation quantization. LLMs exhibit a few outliers in intermediate activations, with values significantly larger than other activations, and these outliers are concentrated in specific feature dimensions. Preserving the values of these outliers is known to be crucial for maintaining accuracy after activation quantization.
In this study, while activation remains at full-precision and only weight quantization is applied, we figure out that the presence of activation outliers still impacts the sensitivity of weight quantization. We also demonstrate that considering activation outliers is essential for accurate weight quantization.
ยง.ยง OPTQ: Weight Quantization for LLMs
OPTQย <cit.> represents the cutting-edge research in the field of weight quantization for LLMs. It is based on Optimal Brain Compression (OBC)ย <cit.>, which employs element-wise quantization (pruning) and compensation, using a Hessian-based metric of layer-wise quantization errors (<ref>). This approach differs from previous studies, which applied quantization through gradient descent based on the straight-through estimatorย <cit.> or a rounding-to-nearest mechanismย <cit.>.
w_q = argmin_{w_q} (quant(w_q) - w_q)^2 / [H_F^{-1}]_{qq}, δ_F = - (w_q - quant(w_q)) / [H_F^{-1}]_{qq} · (H_F^{-1})_{:,q}
OPTQ has optimized OBC to apply quantization in parallel for each element of the input dimension, facilitating rapid quantization and demonstrating remarkable performance even at 3-bit. However, if the model size decreases or the problem's complexity increases, the accuracy still falls compared to the fp16 baselines. In this paper, we suggest selectively applying high-precision to weights that are vulnerable to quantization caused by activation outliers, while applying OPTQ to the remaining weights. These enhancements can significantly reduce the quantization error while preserving the quantization performance of OPTQ.
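To make the column-by-column procedure above concrete, the following is a minimal NumPy sketch of an OBC/OPTQ-style pass over a single linear layer: columns are quantized left to right, and the resulting error is compensated on the not-yet-quantized columns through the inverse Hessian, which is downdated after each elimination. This is an illustrative simplification (no lazy batching, no Cholesky reformulation); all function and variable names are ours.

    import numpy as np

    def quantize_rtn(w, scale, zero, maxq):
        # round-to-nearest onto the per-row asymmetric grid
        q = np.clip(np.round(w / scale) + zero, 0, maxq)
        return scale * (q - zero)

    def optq_like_quantize(W, X, bits=3, damp=0.01):
        # W: (C_out, C_in) weights, X: (C_in, N) calibration activations
        C_out, C_in = W.shape
        maxq = 2 ** bits - 1
        H = 2.0 * X @ X.T                                   # layer-wise Hessian
        H += damp * np.mean(np.diag(H)) * np.eye(C_in)      # dampening for stability
        Hinv = np.linalg.inv(H)
        W = W.copy()
        wmin = W.min(axis=1, keepdims=True)
        wmax = W.max(axis=1, keepdims=True)
        scale = (wmax - wmin) / maxq
        zero = np.round(-wmin / scale)
        Q = np.zeros_like(W)
        for q in range(C_in):
            w_col = W[:, q]
            Q[:, q] = quantize_rtn(w_col, scale[:, 0], zero[:, 0], maxq)
            err = (w_col - Q[:, q]) / Hinv[q, q]
            W[:, q + 1:] -= np.outer(err, Hinv[q, q + 1:])  # compensate remaining columns
            Hinv -= np.outer(Hinv[:, q], Hinv[q, :]) / Hinv[q, q]  # OBS downdate
        return Q

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 256))
    W = rng.normal(size=(128, 64))
    Q = optq_like_quantize(W, X, bits=3)
    print("output error:", np.linalg.norm(W @ X - Q @ X))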
ยง PROBLEM DEFINITION AND MOTIVATION
In this section, prior to introducing our idea, we first aim to clearly define the problem and explain how our findings can address it. This paper aims to apply the layer-wise linear quantization for the weights of LLMs with minimal quality degradation. When an input feature X ∈ ℝ^(C_in × N) is given, where C_in represents the number of input channels and N is the sequence length of the input, the full-precision weight matrix W ∈ ℝ^(C_out × C_in) for C_out output features is iteratively updated to minimize the difference between the output activations before and after quantization. The objective function to find the quantized weight Ŵ that minimizes the squared error is defined as follows:
min_Ŵ E = min_Ŵ || W X - Ŵ X ||_2^2 s.t. C(Ŵ) < C_t,
where C(·) represents the compression ratio and C_t is the target compression ratio. Solving the optimization problem represented by <ref> is known to be a challenging task, as it falls under the category of NP-hard problems <cit.>. In our work, we leverage the OPTQ algorithm for weight quantization. However, it is worth noting that other linear quantization algorithms can also be applied to address this problem.
The layer-wise quantization process is applied sequentially from the model input to the output, ensuring the comprehensive quantization of all weights in the model.
ยง.ยง Layer-wise Linear Quantization and Hessian of Weights
In this subsection, we derive the relationship between the sensitivity of the weights and the activation outliers in terms of quantization.
Similar to the approach used in OBC, we reorganize the squared error term in <ref> as the sum of squared errors for each output channel in the weight matrix, and the equation is modified as Σ_i=1^C_out ||W_i,: X - Ŵ_i,: X||^2_2. Through this decomposition, we can clearly observe that the overall error is separated into individual errors for each output channel.
With the modified equation, our focus shifts to two key aspects. Firstly, it is important to note that there is no Hessian interaction between output channels. Specifically, the individual Hessians with respect to the layer-wise quantization error, denoted as H_i ∈ ℝ^(C_in × C_in), have an identical value as:
H_i = ∂^2 E / ∂ W_i,:^2 = 2XX^T.
Secondly, as observed in previous studies <cit.>, the individual error term can be approximated using Taylor expansion. By setting ΔW_i,: = W_i,: - Ŵ_i,:, the error for the i-th output channel can be expressed as follows:
E_i = ||W_i,: X - Ŵ_i,: X||_2^2 ≈ ΔW_i,: H_i ΔW_i,:^T.
For the detailed proof, please refer to the supplementary materials. The use of the Hessian as a metric to measure sensitivity in quantization has been widely adopted in various quantization approachesย <cit.>. As these studies have pointed out, particularly in the context of layer-wise quantization, the output error can be directly related to the Hessian of the weights.
Keeping these two observations in mind, we can derive a surprising insight by acknowledging the presence of activation outliers in LLMs. Previous studies <cit.> have reported that certain feature dimensions of LLM activation contain outliers with significantly larger values than others. These activation outliers cause some elements of H_i to exhibit exceptionally high values, as illustrated in <Ref>. This abnormal surge in Hessian values increases the channels' sensitivity to quantization.
As indicated in <ref>, even when the same degree of weight perturbation is present, the ensuing change in output can be considerably larger due to some large elements of H_i. Therefore, if an activation outlier exists, a perturbation in the connected weight will result in a larger output error.
We refer to the weights that are susceptible to quantization, specifically those associated with the activation outliers in a specific input channel, as a weak column (shown in <Ref>).
Hence, if we quantize all weights to the same bit-width during the weight quantization process, the quantization error at the weak columns corresponding to the activation outliers can lead to substantial perturbation on the output, resulting in a significant quantization error. To minimize this quality degradation, it is crucial to apply special handling for those weak columns.
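As a quick numerical illustration of this argument (a toy example of ours, not taken from the paper), one can form H = 2XX^T from synthetic calibration activations containing a single outlier channel: the corresponding diagonal entry of the Hessian is orders of magnitude larger, and an identically sized weight perturbation produces a far larger second-order output error when it falls on that column.

    import numpy as np

    rng = np.random.default_rng(0)
    C_in, N = 64, 512
    X = rng.normal(size=(C_in, N))
    X[7] *= 50.0                                   # channel 7 carries activation outliers

    H = 2.0 * X @ X.T                              # H_i = 2 X X^T, shared by all output rows
    diag = np.diag(H)
    print("outlier channel diag:", diag[7], " median diag:", np.median(diag))

    # identical perturbation applied to an ordinary column vs. the weak column
    delta = np.zeros(C_in); delta[3] = 0.05
    delta_weak = np.zeros(C_in); delta_weak[7] = 0.05
    err = lambda d: float(d @ H @ d)               # second-order error  dW H dW^T
    print("error, ordinary column:", err(delta))
    print("error, weak column   :", err(delta_weak))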
ยง OWQ: OUTLIER-AWARE WEIGHT QUANTIZATION
To address this problem, we propose a novel idea called Outlier-aware Weight Quantization (OWQ). The algorithm is designed on top of OPTQ, but it includes additional pipelines that selectively handle weak columns with high-precision, minimizing quality degradation.
In the previous section, we highlighted the relationship between the Hessian matrix and sensitivity caused by activation outliers in ย <ref>. We also demonstrated that the final error is influenced by the quadratic terms of perturbations with the Hessian matrix in ย <ref>. In addition, a previous study <cit.> has suggested using the product of the trace of the Hessian matrix and the Frobenius norm of the perturbation to approximate the sensitivity of quantization. Building on these insights, we define the sensitivity of j-th column as follows:
sensitivity_j = λ_j || ΔW_:,j ||_2^2,
where λ_j is the j-th diagonal element of the Hessian matrix.
In this work, we extend the studyย <cit.> that allocates layer-wise bit-width based on each layer's sensitivity: the sensitivity is measured at a more granular level, focusing on the columns of the weight matrix.
By analyzing the sensitivity of individual columns, we can effectively identify the weak columns that are vulnerable to quantization and require higher precision.
When the goal is to select a specific number (k) of weak columns, the proposed metric is utilized to choose the top-k columns based on their highest sensitivity values.
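A minimal sketch of this selection step as we read it: estimate ΔW with a provisional round-to-nearest quantization, score each input column by λ_j ||ΔW_:,j||_2^2 using the Hessian diagonal, and keep the top-k columns in fp16. The per-row min-max grid and all names are our own simplifying choices.

    import numpy as np

    def select_weak_columns(W, X, bits=3, k=4):
        # W: (C_out, C_in), X: (C_in, N) calibration activations
        maxq = 2 ** bits - 1
        lam = np.diag(2.0 * X @ X.T)                 # lambda_j: Hessian diagonal
        wmin = W.min(axis=1, keepdims=True)
        wmax = W.max(axis=1, keepdims=True)
        scale = (wmax - wmin) / maxq
        zero = np.round(-wmin / scale)
        Wq = scale * (np.clip(np.round(W / scale) + zero, 0, maxq) - zero)
        dW = W - Wq                                  # provisional quantization error
        sensitivity = lam * np.sum(dW ** 2, axis=0)  # one score per input column
        weak = np.sort(np.argsort(sensitivity)[-k:])
        return weak, sensitivity

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 512)); X[7] *= 50.0; X[21] *= 30.0
    W = rng.normal(size=(128, 64))
    weak, _ = select_weak_columns(W, X, bits=3, k=4)
    print("selected weak columns:", weak)           # expected to include 7 and 21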
Moreover, as depicted in <Ref>, it's important to note that our approach is distinctly different from existing outlier-aware quantization studiesย <cit.>. In those studies, outlier weights are excluded from quantization based solely on their magnitude, with the motivation to minimize the error of weights before and after the quantization.
However, our method focuses not only on minimizing the quantization error of the weights but also that of the output activations. Therefore, we utilize a Hessian-based metric related to the output of the matrix multiplication, and the weights are selected based on their sensitivity rather than their magnitude, as shown in the figure.
After identifying the weak columns, we store those vectors with high precision.
In practice, we store a complete low-precision matrix with zero-filled weak columns. Additionally, we store the weak columns as fp16 (16-bit floating-point) and use an extra single 16-bit integer per column, which represents the column index of the weak columns. If we denote the matrix multiplication operation by ×, the output of the matrix multiplication can be calculated as the sum of (zero-filled quantized weight matrix × fp16 activation matrix) and (fp16 weak columns × corresponding fp16 activation channels).
The additional storage overhead is solely caused by the weak columns.
This overhead is negligible (≈ 0.3%, demonstrated in <ref>), while the accuracy is significantly improved. In addition, the remaining weights are quantized using OPTQ, but we introduce a key advancement to minimize the quantization error even further, as explained in the following subsection.
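Functionally, the storage and compute split described above can be mocked up as follows (plain NumPy, not the custom kernel): the dense low-precision matrix keeps zeros at the weak columns, and the layer output is the sum of the low-precision product and a small fp16 product over the selected activation channels. The toy 3-bit quantizer is only a stand-in.

    import numpy as np

    def owq_pack(W, weak_idx, quantize_fn):
        # split W into a quantized matrix (weak columns zero-filled) + fp16 weak columns
        W_low = W.copy()
        W_low[:, weak_idx] = 0.0
        W_low = quantize_fn(W_low)
        W_weak = W[:, weak_idx].astype(np.float16)
        return W_low, W_weak, np.asarray(weak_idx, dtype=np.int16)

    def owq_matmul(W_low, W_weak, weak_idx, X):
        # output = (low-precision W) x X + (fp16 weak columns) x (their activation channels)
        return W_low @ X + W_weak.astype(np.float32) @ X[weak_idx]

    def rtn3(W):                                    # crude symmetric 3-bit stand-in
        s = np.abs(W).max() / 3.0
        return np.clip(np.round(W / s), -3, 3) * s

    rng = np.random.default_rng(0)
    W = rng.normal(size=(128, 64)); X = rng.normal(size=(64, 256))
    W_low, W_weak, idx = owq_pack(W, [7, 21], rtn3)
    Y = owq_matmul(W_low, W_weak, idx, X)
    print("max |Y - W@X|:", np.abs(Y - W @ X).max())   # error stems only from the low-precision part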
ยง.ยง Quantization Hyperparameter Tuning
Following the selection of weak columns, the remaining weights are quantized using the OPTQ method. Since OPTQ also employs sequential column-wise quantization, the exclusion of weak columns can be seamlessly integrated into the OPTQ framework.
However, we have made a modification to the existing OPTQ, which applied min-max quantization. In our experiments, we tuned the quantization hyperparameters, such as the step size and zero point, so that the quantization range is narrower than the min-max range of the weights. Previous studies reported that truncation leads to a significant reduction in quantization error by balancing truncation and rounding errors <cit.>, but performance degradation was observed in OPTQ, hence it uses a naive min-max range. In our experiments, however, after removing the weak columns, the introduction of truncation significantly helped in reducing the quantization error and producing reliable output. According to our empirical observations, the weak columns in the key and query layers of transformer blocks have exceptionally large values. Truncating these values leads to a very large error, resulting in a significant loss of accuracy. In our method, this error does not occur, as the weak columns are maintained in full precision.
In our experiments, we employ a simple quantization method based on rounding to the nearest with truncation to adjust the quantization parameters. The optimal values of the parameters are searched greedily to minimize the difference between the weights before and after quantization. After searching for the best quantization parameters, we apply OPTQ on top of the searched values.
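A sketch of the hyperparameter search as described: sweep a grid of clipping ratios that shrink the min-max range and keep, per row, the (step size, zero point) pair that minimizes the weight reconstruction error, before the OPTQ pass is applied on top. The grid values and the per-row granularity are illustrative assumptions.

    import numpy as np

    def search_quant_params(W, bits=3, ratios=np.linspace(1.0, 0.7, 16)):
        # greedy per-row search over truncation ratios of the min-max range
        maxq = 2 ** bits - 1
        wmin = W.min(axis=1, keepdims=True)
        wmax = W.max(axis=1, keepdims=True)
        best_err = np.full(W.shape[0], np.inf)
        best_scale = np.zeros((W.shape[0], 1))
        best_zero = np.zeros((W.shape[0], 1))
        for r in ratios:
            lo, hi = wmin * r, wmax * r
            scale = (hi - lo) / maxq
            zero = np.round(-lo / scale)
            Wq = scale * (np.clip(np.round(W / scale) + zero, 0, maxq) - zero)
            err = np.sum((W - Wq) ** 2, axis=1)      # || W - quant(W) ||^2 per row
            better = err < best_err
            best_err[better] = err[better]
            best_scale[better] = scale[better]
            best_zero[better] = zero[better]
        return best_scale, best_zero

    rng = np.random.default_rng(0)
    W = rng.normal(size=(128, 64)); W[0, 0] = 12.0   # a lone large weight favors truncation
    scale, zero = search_quant_params(W, bits=3)
    print("row 0 step size:", float(scale[0]))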
ยง EXPERIMENTS
ยง.ยง Experimental Setup
To validate the outstanding performance of our proposed method, we present quantization results for large-scale LLM families such as OPT <cit.>, LLaMA <cit.>, and BLOOM <cit.> (Section C.1). Our primary baseline is OPTQ, so we adopt its experimental settings. For instance, our calibration dataset consists of 128 random 2048-token segments from the C4 dataset <cit.>. All experiments were conducted on a single NVIDIA A100 GPU with 80GB of main memory. Like OPTQ, our method quantizes the target model without re-training. The reported results are obtained in a zero-shot environment. To measure the zero-shot performance, we utilize an open-source evaluation repository, EleutherAI/lm-evaluation-harness <cit.>.
Please note that we report the numbers with an error margin based on 5 experiments with different seeds.
Compared to the baselines of 3-bit and 4-bit OPTQ results, we present two variants, with an extra 0.01 bit and 0.1 bit overhead, respectively. The additional storage area of extra bit is evenly distributed across the linear layers within the transformer block. For instance, in the OPT model which has six linear layers (key, query, value, out, fc1, and fc2), the weak columns of the key layer contribute an average of 0.00167 bit in the 3.01-bit configuration (0.125% columns of the weight matrix). This extra bit covers the all overhead of the mixed-precision representation. If we quantize OPT-175B model with an average of 3.01 bits, it will require approximately 220 MB of additional storage compared to the 3-bit OPT-175B OPTQ model, which utilizes around 65.6 GB of storage space. All experiments were conducted using the PyTorch 2.0 <cit.> framework with HuggingFace integrationย <cit.>, and the source code is available at
.
ยง.ยง Results of Perplexity Measure
The accuracy of the proposed model is assessed through the evaluation on multiple language tasks, including WikiText-2 <cit.>, Penn Treebank (PTB) <cit.>, and C4. Perplexity-based tasks are particularly sensitive to model quantizationย <cit.>, with perplexity numbers serving as indicators of the generative performance of the quantized model. The results for WikiText-2 can be found in <Ref> and <Ref>, while the results for PTB and C4 are provided in the supplementary material.
The results on the tables clearly demonstrate that OWQ consistently delivers substantial quality improvements across the LLM families, irrespective of the model size. The 3.01-bit model effectively mitigates the quality degradation observed in the 3-bit OPTQ model, while the 3.1-bit model achieves comparable performance to the 4-bit OPTQ model. Furthermore, OWQ 4.01-bit yields noteworthy improvements, highlighting the significance of treating weak columns. These results underscore the importance and effectiveness of our approach in preserving model quality after quantization.
An interesting finding is the significant improvement in model quality for models with less than 13 billion parameters when applying OWQ. Although previous studies have highlighted the presence of activation outliers in models with more than 6.7 billion parameters, even smaller models with moderately large channels can still benefit from mixed precision quantization. This suggests that the concept of weak columns remains valid and effective in enhancing the quality of LLMs, regardless of the model size.
ยง.ยง Results of Various Zero-shot Tasks
We conducted additional experiments on diverse zero-shot language tasks. The results are available at <Ref> and <Ref>. Our approach shows significant performance improvements across all tested tasks. Even our models using 3.01 bits clearly outperform the 4-bit RTN, and exhibit results on par with the 4-bit of OPTQ models. Our method's strength lies in its universal applicability, consistently boosting the performance of generative models with minimal storage overhead.
The Pareto-front plot in Figure <ref> clearly illustrates the advantages of OWQ compared to OPTQ.
ยง.ยง Acceleration on Real Device
To show the benefits of low-precision acceleration, we created a customized CUDA kernel for OWQ and measured its latency overhead on an A100 GPU. Notably, we set weak columns in the low-precision matrix to zero, resulting in the low-precision process having the same overhead as OPTQ's customized kernel. Additionally, we integrated a high-precision operation for weak columns. The activation input channels for these weak columns are selected on-the-fly, and a dense high-precision GeMV kernel is used. In the 3.1-bit model, the mixed process computation increases latency by only 3.79 % compared to the 3-bit acceleration of the OPTQ kernel on OPT-175B model. Considering the benefits gained, the computational overhead is negligible.
ยง.ยง.ยง Effect of the Extra Bit Size
To analyze the impact of the number of weak columns, we measured the perplexity of the OPT-6.7B model while varying the extra bit ratio (<Ref>). While our main focus was on reporting performance using 0.01 and 0.1 extra bits, we observed significant performance improvements even with fewer weak columns. With just 3.005 bits, the model already surpasses the performance of the OPTQ 3-bit model. However, as the bit width increases, the performance gains gradually diminish, reaching a saturation point. It is important to note that activation outliers are limited to only a few dimensions, resulting in a small number of corresponding weak columns. Consequently, even a small number of weak columns can lead to noticeable performance improvements.
ยง.ยง Quantization time cost
The speed of the quantization algorithm is of utmost importance for LLM quantization. OWQ, compared to OPTQ, introduces extra operations for selecting weak columns and tuning quantization hyperparameters. However, by sharing certain expensive items like Hessian with OPTQ, the specific overhead of OWQ can be minimized. <Ref> presents the quantization time over various LLMs. OWQ can still successfully quantize a 66B-scale model in under 3 hours, showcasing its practicality for real-world applications.
ยง.ยง Layer-wise Quantization Sensitivity
In our experiments, we uniformly allocated extra storage space to linear layers, but observed varying sensitivity of weak columns across layers. In the OPT-6.7B model, we assessed layer-by-layer sensitivity by applying OWQ to select layers within a fixed capacity budget. Our results in <Ref> indicate the best accuracy is achieved when weak columns are removed simultaneously from both key and query weights. Performance suffers significantly if either layer isn't addressed, especially for queries alone. OWQ's hyperparameter tuning stage is more sensitive to weak columns in key and query layers with higher magnitudes than other layers. Improvements could be made by allocating space at different rates according to each layer's sensitivity. However, the rate tuning is prohibitively expensive, making implementing this approach challenging. Developing a method to tune layer-wise weak column numbers is a potential future research direction.
ยง DISCUSSIONS
ยง.ยง Fine-grained Quantization
Applying linear quantization at fine-grained granularity significantly reduces quantization error while introducing some storage overhead for quantization hyperparameters. OPTQ utilizes this approach by dividing row vectors into groups (e.g., group size of 128 or 1024) and applying linear quantization independently with different configurations. This expansion can be applied orthogonally to OWQ, so we can combine row-wise fine-grained quantization with OWQ to assess any additional improvements. Results in <Ref> show that the room for improvement from fine-grained quantization is negligible, as OWQ already substantially enhances the 3-bit model's quality. Moreover, compared to grouped OPTQ with 128 group size, 3.01-bit OWQ's storage overhead is only 8% of grouped OPTQ overhead while achieving better perplexity and zero-shot accuracy. Thus, OWQ is a superior solution to the grouping technique.
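For reference, a minimal sketch of the grouping baseline discussed here: each row is split into groups of g consecutive weights and every group gets its own min-max grid, at the cost of storing one set of quantization parameters per group. This is our own bare-bones round-to-nearest version, not the grouped OPTQ implementation.

    import numpy as np

    def groupwise_rtn(W, bits=3, group=128):
        # independent min-max grid per (row, group) block
        maxq = 2 ** bits - 1
        Wq = np.empty_like(W)
        for s in range(0, W.shape[1], group):
            blk = W[:, s:s + group]
            lo = blk.min(axis=1, keepdims=True)
            hi = blk.max(axis=1, keepdims=True)
            scale = np.maximum(hi - lo, 1e-8) / maxq
            zero = np.round(-lo / scale)
            Wq[:, s:s + group] = scale * (np.clip(np.round(blk / scale) + zero, 0, maxq) - zero)
        return Wq

    rng = np.random.default_rng(0)
    W = rng.normal(size=(128, 512))
    Wq = groupwise_rtn(W, bits=3, group=128)
    print("mean |W - Wq|:", np.abs(W - Wq).mean())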
ยง.ยง Comparison with Act-Ordering of OPTQ
While not discussed in OPTQ papers, the "act-order" option was recently added to their official GitHub <cit.>. This option, which quantizes based on activation magnitude, differs from the previous OPTQ approach that applied quantization to columns sequentially. Generally, "act-order" produces superior results. Although no theoretical interpretation exists for this method, our study suggests that the benefits of "act-order" arise from sensitivity-aware quantization. This means that applying sequential quantization beginning with sensitive columns improves performance, even within the OPTQ method. However, act-order alone cannot sufficiently mitigate the quality degradation caused by weak columns within a low-precision domain. OWQ, by addressing the weak columns with high-precision, consistently outperforms OPTQ + act order, regardless of act-order option.
ยง CONCLUSION
The presence of activation outliers has been identified as a significant challenge in LLM activation quantization. We found that even in weight quantization, activation outliers can increase the sensitivity of certain weight columns, leading to significant quality degradation in a low-precision domain. To overcome this, we introduced a novel quantization scheme, OWQ. Compared to existing 3-bit quantization methods, OWQ achieves substantial quality improvements with only negligible storage and computation overhead, effectively preserving the benefits of low-precision acceleration. We believe that our theoretical insights will expand the understanding of weight quantization, inspiring future research and promoting the widespread adoption of LLMs.
plain
ยง A MAIN PROOFS
ยง.ยง 1 Proof of Eq. (5) in the Main Manuscript
Our goal is to find the quantized weight Ŵ that minimizes the objective function:
min_Ŵ E = min_Ŵ || W X - Ŵ X ||_2^2 s.t. C(Ŵ) < C_t .
we can reorganize the squared error term in <ref> as the sum of squared errors corresponding to each output channel in the weight matrix:
E = Σ_i=1^C_out ||W_i,: X - Ŵ_i,: X||^2_2 = Σ_i=1^C_out E_i .
From this decomposition, we can see that the overall error can be separated into the sum of the errors from each output channel. Therefore, our goal of minimizing overall error E can be thought of as minimizing the error of each output channel E_i.
The error that occurs when quantizing a network can be approximated by a Taylor series expansion for the model weights as follows:
E_i = (∂E_i/∂W_i,:) ΔW_i,:^T + 1/2 ΔW_i,: H_i ΔW_i,:^T ,
where ΔW_i,: = W_i,: - Ŵ_i,: is the perturbation of the i-th row of the weight and H_i = ∂^2 E_i / ∂W_i,:^2 is the Hessian matrix containing the second order derivatives for all weights in the weight row W_i,:. Since E_i is a quadratic function, all terms of third order and above vanish. For a network trained to a local minimum of the error, the first-order (gradient) term can be ignored:
E_i ≈ ΔW_i,: H_i ΔW_i,:^T .
Therefore, the quantization error for each output channel can be approximated as <ref>.
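A quick numerical sanity check of this step (our own toy example): because the layer-wise objective is quadratic in the weights, the expansion is exact, and ||W_i,: X - Ŵ_i,: X||_2^2 coincides with (1/2) ΔW_i,: H_i ΔW_i,:^T for H_i = 2XX^T; the constant factor is irrelevant when ranking column sensitivities.

    import numpy as np

    rng = np.random.default_rng(0)
    C_in, N = 32, 200
    X = rng.normal(size=(C_in, N))
    w = rng.normal(size=C_in)              # one output row W_i,:
    w_hat = np.round(w * 4) / 4            # a crude stand-in for quantization
    dw = w - w_hat

    H = 2.0 * X @ X.T
    E_exact = np.sum((w @ X - w_hat @ X) ** 2)
    E_taylor = 0.5 * dw @ H @ dw           # exact here, since E_i is quadratic
    print(E_exact, E_taylor)               # the two numbers coincide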
ยง B EXPERIMENT DETAILS
The implementation of OWQ is based on OPTQ (GPTQ) official GitHubย <cit.>.
ยง.ยง 1 Evaluation Settings
We measured language modeling perplexity on datasets from WikiText-2ย <cit.>, PTBย <cit.>, and C4ย <cit.>. The validation set is concatenated using two newlines as separators in WikiText-2 and a space as a separator in the PTB and C4 and then the concatenated data is tokenized using the default HuggingFaceย <cit.> tokenizer for each model.
ยง.ยง 2 Score Issues on LLaMA Model Family
In Section C, the PTB perplexity of the LLaMAย <cit.> model is poor. This is a problem of a full-precision model, not due to quantization. Another issue is that the accuracy we obtained in our zero-shot experiments using EleutherAI/lm-evaluation-harnessย <cit.> is slightly lower than that published in the LLaMA paper. We expect the accuracy to be different due to differences in the experimental environment or special tokens. Since all experiments were conducted in the same environment, we can compare the results regardless of these issues.
ยง.ยง 3 Effective Bit-Width
In Section 4 of the main manuscript, we described that we store a complete low-precision matrix with zero-filled weak columns and additional fp16 weak columns (latency-favored method). Another possible option is storing the reduced size low-precision matrix and fp16 weak columns to keep the total number of weight elements the same and to avoid storing unnecessary zeros corresponding to weak columns (storage-favored method). The latter method has trade-offs: it can save more storage, but it has more overhead in the actual operation.
We calculated the effective bit-width using the storage-favored method for the accuracy results in the main manuscript and the supplementary materials. However, the memory overhead is similar for both methods as there are few weak columns.
If we add the storage overhead of the zero-filled low-precision matrix to our 3.01-bit case, it becomes 3.012-bit, which is negligible overhead.
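The bookkeeping behind the two conventions can be written as a small helper (ours, following the description above): the storage-favored count drops the zero-filled slots of the weak columns from the low-precision matrix, while the latency-favored count keeps them; each weak column additionally costs C_out fp16 values plus one 16-bit index.

    def effective_bits(c_out, c_in, k, bits=3, fp=16, idx_bits=16, latency_favored=False):
        # average bits per weight of one c_out x c_in layer with k fp16 weak columns
        low_cols = c_in if latency_favored else c_in - k
        total = bits * c_out * low_cols + fp * c_out * k + idx_bits * k
        return total / (c_out * c_in)

    # e.g. a 4096 x 4096 layer with 8 weak columns
    print(effective_bits(4096, 4096, 8))                       # storage-favored
    print(effective_bits(4096, 4096, 8, latency_favored=True))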
ยง C ADDITIONAL RESULTS
In this section, the results with * mark in the tables used 3.05-bit. Otherwise, there are few or no weak columns in the budget of 3.01-bit due to the small model dimension. Similar to the OPTQ, the calibration data were sampled from the C4 training set, so measuring perplexity on C4 is not a fully zero-shot situation.
ยง.ยง 1 BLOOM Model Results
We additionally checked the effectiveness of the proposed OWQ on BLOOMย <cit.> model families. <Ref>, <Ref>, and <Ref> show perplexity results on BLOOM models. For the 176B case, we used a single seed.
ยง.ยง 2 Additional Perplexity Results
Tables in this section (<Ref>,<ref>,<ref>, and <ref>) show additional language generation task results.
ยง.ยง 3 Additional Zero-shot Results
Tables in this section (<Ref>, <ref>, <ref>, <ref>, and <ref>) show additional results for several zero-shot benchmarks: PIQAย <cit.>, ARC-easy and ARC-challangeย <cit.>, OpenBookQAย <cit.>.
ยง.ยง 4 Language Generation Results
We analyzed the linguistic abilities of the quantized model through generated outputs by prompting the model. We experimented with full-precision (FP16) LLaMA 7B, 30B model, 3-bit quantized 30B model with OPTQ, and 3.01-bit quantized 30B model using OWQ. <ref> shows the input prompts and the corresponding outputs for each model. The n / 10 notation at the upper-right of each output box indicates the number of correct answers out of 10 total attempts. Each output example in <ref> has been randomly selected among 10 attempts. In our experiments, we used some prompt examples from the Chain-of-Thought Promptingย <cit.>. <ref> shows that the 30B OWQ 3.01-bit model has comparable linguistic abilities to the full-precision 30B model, while having a similar storage capacity to the full-precision 7B model.
ยง.ยง 5 True Sequential and Activation Order
As mentioned in Section 6 of the main manuscript, the "act-order" option and "true-sequential" option were recently added to the official Github of OPTQย <cit.>.
The "true-sequential" option allows sequential quantization even within a transformer block, while the "act-order" determines the quantization order based on activation magnitude.
These options are beneficial for OPTQ in general,
especially for LLaMA models (<Ref>, <Ref>). However, there is no significant performance improvement for OWQ with these options.
|
http://arxiv.org/abs/2306.06879v1
|
20230612054556
|
Arrhenius law for interacting diffusive systems
|
[
"Vishwajeet Kumar",
"Arnab Pal",
"Ohad Shpielberg"
] |
cond-mat.stat-mech
|
[
"cond-mat.stat-mech",
"cond-mat.soft",
"physics.bio-ph",
"physics.chem-ph"
] |
The Institute of Mathematical Sciences, CIT Campus, Taramani, Chennai 600113, India & Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094 ([email protected])
The Institute of Mathematical Sciences, CIT Campus, Taramani, Chennai 600113, India & Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094 ([email protected])
Department of Mathematics and Physics, University of Haifa at Oranim, Kiryat Tivon 3600600, Israel
Haifa Research Center for Theoretical Physics and Astrophysics, University of Haifa, Abba Khoushy Ave 199, Haifa 3498838, Israel
Finding the mean time it takes for a particle to escape from a meta-stable state due to thermal fluctuations is a fundamental problem in physics, chemistry and biology. For weak thermal noise, the mean escape time is captured by the Arrhenius law (AL). Despite its ubiquity in nature and wide applicability in practical engineering, the problem is typically limited to single particle physics. Finding a generalized form of the AL for interacting particles has eluded solution for a century. Here, we tackle this outstanding problem and generalize the AL to a class of interacting diffusive systems within the framework of the macroscopic fluctuation theory. The generalized AL is shown to conform a non-trivial yet elegant form that depends crucially on the particle density and inter-particle interactions. We demonstrate our results for the paradigmatic exclusion and inclusion processes to underpin the key effects of repulsive and attractive interactions. Intriguingly, we show how to manipulate the mean escape time using not only temperature, but also the particle density.
Arrhenius law for interacting diffusive systems
Ohad Shpielberg
July 31, 2023
===============================================
Introduction.
The celebrated Arrhenius law (AL) is a cornerstone in physics, chemistry and biology, capturing the activation time of a system from a meta-stable state. Thermally induced activation processes are ubiquitous in nature, e.g. in chemical reactions, protein folding, gene expressions to name but a few. Although each variant has its unique and intriguing features, the universality of AL with regard to the activation barrier and the surrounding temperature nonetheless is remarkable.
Often, the AL can be manifested within Kramers' reaction rate theory <cit.>. Usually there, one is interested in the time taken by an overdamped particle to escape from a trapping potential U(x) while coupled to a thermal bath at temperature T <cit.>. For the particle to escape the trap, it needs a fluctuation to grant it an excess energy ΔU, the energy difference between the bottom of the potential and the escape point at the top. The AL states that for weak thermal fluctuations D_0 = k_B T ≪ ΔU, the inverse of the mean escape time is
Φ = τ_0^-1 e^-ΔU/D_0.
τ_0 provides a microscopic time scale, which may be assessed through arguments by Eyring <cit.>. Importantly, τ_0 is non-universal as it depends on the shape of U(x). The wide applicability of the AL can be attributed to the universal exponential decay e^-ΔU/D_0. It suggests that by varying the temperature, the experimentally accessible mean escape time allows one to infer the activation energy ΔU, independent of the non-universal prefactor τ_0. To understand this better, imagine a diffusive process on a multidimensional, rugged energy landscape that can imitate chemical reactions in a network or protein conformational dynamics from the unfolded to the natively folded state via misfolded states. The presence of a variety of local minima surrounded by energy barriers ΔU ≫ k_B T renders a natural separation of timescales in these systems: fast fluctuations in the well followed by slow/rare fluctuations between the wells. In other words, the
enzyme fluctuates many times within a well (typical trajectories) before leaving it (rare trajectories).
This is a key assumption behind Eq.ย (<ref>) in generic activation processes <cit.>.
Despite many years of study, discoveries are still being made around the AL and exciting applications continue to be found such as activation in the presence of viscoelastic medium <cit.> or escape dynamics of active particles <cit.>, temperature dependent activation energies <cit.>, multiple meta-stable states <cit.> and experiments with colloids <cit.>. Yet, one frontier that remains surprisingly less explored is the validity of AL in many-body systems. Indeed, even for two particles with short-range interactions, finding the activation time seems to be a formidable challenge. See <cit.> which sketches out the challenges for formulating a theory for interacting systems. In addition to the existing slow and fast time scales, the `nature' of interaction also sets another time-scale in the problem. Consequently, the universal exponential decay in AL may no longer hold true due to the complex interplay of different agents and their interactions <cit.>. This is illustrated in the breakdown of the AL for active particles <cit.>, in the case of infinite range interactions <cit.> as well as in low temperature glassy dynamics <cit.>.
In this letter, our aim is to delve deeper into the AL for interacting systems. However, before doing that, it is insightful to derive the AL for M non-interacting particles. Notably, the AL can be computed from the large time survival probability, which can be derived by mapping the problem to an effective Schrödinger equation with absorbing boundaries; see the textbooks <cit.> for this standard procedure. Technically, the survival probability 𝒮(t) ∼ exp[-t Φ] <cit.> [assuming t ≫ L^2/D_0, the diffusion time; for the short time limit, see <cit.>], where Φ = D_0 M Λ and Λ is the ground state energy of the Fokker-Planck Hamiltonian Ĥ_FP = -D_0 ∂_xx + D_0/4 (∂_x U/D_0)^2 - 1/2 ∂_xx U <cit.>. Evidently, the AL universality still prevails for many non-interacting particles. What happens to this universality when we incorporate many-body interactions? Here, we try to address this important question.
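This statement is easy to verify numerically. The sketch below (our own discretization, assuming a linear potential on the rescaled interval [0,1], a reflecting wall at x=0 implemented as a Robin condition, and an absorbing wall at x=1) computes the ground state energy Λ of the rescaled Fokker-Planck Hamiltonian by finite differences and prints the per-particle escape rate D_0 Λ for a few noise strengths; -D_0 ln(rate) approaches ΔU as D_0 shrinks, up to slowly varying prefactor corrections.

    import numpy as np

    def escape_rate_single_particle(dU, D0, n=800):
        # ground state of  -d_xx + (dU/D0)^2/4  on [0,1] with F'(0) = -(dU/D0)/2 * F(0)
        # (reflecting wall, ghost-point treatment) and F(1) = 0 (absorbing wall)
        a = dU / D0
        h = 1.0 / n
        V = a * a / 4.0
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 2.0 / h**2 + V
            if i > 0:
                A[i, i - 1] = -1.0 / h**2
            if i < n - 1:
                A[i, i + 1] = -1.0 / h**2
        A[0, 0] = (2.0 - h * a) / h**2 + V
        A[0, 1] = -2.0 / h**2
        lam = np.min(np.linalg.eigvals(A).real)   # ground-state energy (rescaled)
        return D0 * lam                           # per-particle escape rate

    dU = 1.0
    for D0 in (0.20, 0.15, 0.10):
        rate = escape_rate_single_particle(dU, D0)
        print(f"D0={D0:.2f}  rate={rate:.3e}  -D0*ln(rate)={-D0 * np.log(rate):.3f}")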
Consider an extended 1D system of interacting diffusing particles in a monotonically increasing potential U(x) with initial mean density ρ_0 = M/L, where M is the number of particles and L is the domain size. We show, in this letter, that the mean time it takes for a particle to escape from the potential is given by the following generalized AL
Φ ∼ e^-ΔU g(ρ_0) / D_0 ,
where g depends both on the
density ρ_0 and on the inter-particle interactions. By minimizing the particle configuration energy, one can show g = 1 - U_top/ΔU, where U_top is the highest energy a particle can attain in the minimum energy configuration. The salient point here is that the exponential form of the AL is still preserved. However, the generalized AL crucially depends on the exact nature of the inter-particle interactions as well as the density.
To illustrate the generalized AL, consider the paradigm of simple symmetric exclusion process (SSEP) where the inter-particle interaction is a hard-core exclusion <cit.>. Minimizing the particle configuration energy implies that they are tightly packed around the potential minimum, resulting in g<1 [see Fig.ย <ref>(a), details later]. Therefore, the repulsive interaction imposes a shorter mean escape time by (<ref>). Eq.ย (<ref>) is the central result of this letter and we will prove it for a class of diffusive interacting systems, within the framework of the macroscopic fluctuation theory (MFT) <cit.>.
Macroscopic Fluctuation Theory.
Over the last two decades, the Macroscopic Fluctuation Theory (MFT) has been instrumental to understand nonequilibrium fluctuations in diffusive systems <cit.>. Quite recently, the MFT was also used to capture the survival probability of interacting diffusive particles from a domain
<cit.>. Here, we extend this formalism in the presence of a potential which naturally allows us to compute the survival probability and study the generalized AL.
To set the stage, let us consider a 1D system of size L. The system is occupied with interacting diffusive particles that satisfy the continuity equation ∂_s ρ = -∂_x j with the density and current density ρ(x,s) and j(x,s) respectively. Here, x ∈ [0,1] and s ∈ [0,t] are diffusively rescaled <cit.>.
The fundamental formula of the MFT asserts that the path probability is
Prob[{ρ, j}] ∼ exp( - L/(4D_0) ∫_0^1 dx ∫_0^t ds (j - J(ρ))^2/σ(ρ) )
J(ρ) = -D(ρ) ∂_x ρ - σ(ρ) ∂_x Ũ,
where we have defined Ũ = U/D_0 as the rescaled potential. D(ρ) and σ(ρ) are the density dependent diffusivity and mobility which encapsulate the diffusive dynamics. In Eq. (<ref>), the continuity equation is implicitly assumed.
Moreover, we assume that the particles are constrained between one reflecting and one absorbing wall at x=0 and x=1. Thus, 𝒮(t), the survival probability of the particles to stay inside the region up to time t, can be written as a conditional sum over all the paths Prob[{ρ, j}] that satisfy the following boundary conditions
J(ρ)|_x=0 = ρ|_x=1 = 0,
and the mass conservation in the system L ∫ dx ρ(x,t) = M.
Within the MFT, one would expect 𝒮(t) ∼ e^-t Φ, where computation of Φ reduces to a minimization problem of finding an optimal fluctuation {ρ, j} that satisfies the above mentioned constraints. Note that the optimal fluctuation governs Φ due to the large L saddle dominated probability in (<ref>), however the minimization problem still remains hard. To address that, we use the additivity principle (AP), that was introduced in
<cit.> and proved as a useful tool in evaluating large deviations <cit.> (also see <cit.>). The AP posits that the optimal fluctuation density is time-independent, reducing the complexity of finding the optimal fluctuations.
For a 1D system, the AP assumption ฯ(x,t) = ฯ(x) implies j(x,t)=const due to the continuity equation. However, const=0 since the current vanishes at the boundaries. Finding ฮฆ then reduces to the following minimization problem
ฮฆ = D_0 L/4min_ฯ(x) โซ dx โ(ฯ,โ_x ฯ ), ย โ = J(ฯ)^2 /ฯ - 4ฮ (ฯ- ฯ_0),
where ฯ(x) is subjected to (<ref>) and ฮ is Lagrange multiplier ensuring the mass conservation ฯ_0 = โซ dx ฯ(x).
The minimization problem in (<ref>) then boils down to solving an Euler-Lagrange (EL) equation with the Lagrangian โ. Using the transformation
F(ฯ) = โซ^ฯ _0 dz D(z)/2โ(ฯ(z)) (see <cit.>), the resulting EL reads
-โ_xx F + 1/8 (โ_x ลจ)^2 ฯ' - 1/2โ(ฯ)โ_xxลจ = ฮโ(ฯ)/D,
where ฯ = ฯ(ฯ[F]) and ฯ' =ฮดฯ / ฮด F. The resulting boundary conditions inherited from (<ref>) are J_F |_x=0= ฯ[F]|_x=1 =0 with rescaled current J_F = โ_x F+ 1/2ฯ^1/2โ_x ลจ. Thus, the survival probability can be estimated via (<ref>) as
Φ = D_0 L ∫ dx J_F^2,
where F is the solution of (<ref>).
Before discussing interactions, it is instructive to re-derive the results for non-interacting particles. Indeed, here D=1, σ = ρ, so that ρ = F^2 and (<ref>) recovers Ĥ_FP F = Λ F, which is essentially the eigenvalue problem for the non-interacting system. Skipping details, one can show that Φ = D_0 L ρ_0 Λ <cit.>, which implies that Λ is the ground-state energy of Ĥ_FP, as found earlier.
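For completeness, and as a consistency check, setting D=1, σ = ρ = F^2 and σ' = 2F in the EL equation above yields the familiar Hermitian form of the Fokker-Planck generator (a short sketch, quoted only to make the eigenvalue problem explicit):
\[
\hat{H}_{\rm FP} = -\partial_{xx} + \tfrac{1}{4}\big(\partial_x \tilde{U}\big)^2 - \tfrac{1}{2}\,\partial_{xx}\tilde{U}\,,
\qquad \hat{H}_{\rm FP}\, F = \Lambda\, F\,.
\]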
Furthermore, assuming short-range interactions, in the dilute limit ρ_0 → 0 the interactions become negligible. Indeed, in this limit F → 0, so Eq.ย (<ref>) can be linearized in F, reducing to the non-interacting case; for dilute systems the AL is thus recovered. In what follows, in order to justify the generalized AL in Eq.ย (<ref>),
we study two classes of interacting systems: the SSEP for repulsive interactions and the symmetric inclusion process (SIP) for attractive interactions.
Repulsive interaction.—The SSEP is a lattice gas model where each lattice site x has an occupancy n_x ∈ {0,1}.
In the SSEP, a particle is allowed to hop to a
nearest neighbour with a fixed rate, provided
the target site is empty. Despite its simplicity, the SSEP has been widely studied in the context of nonequilibrium statistical mechanics and has been used as a paradigmatic model for understanding the behavior of interacting particle systems in many physical and biological settings, e.g., traffic flow, protein synthesis, and gene regulation <cit.>. The SSEP typically demonstrates genuine nonequilibrium behavior, e.g., a non-product steady-state measure and long-range nonequilibrium correlations <cit.>. Furthermore, the SSEP is amenable to exact solutions: microscopically via the Bethe ansatz and macroscopically through the MFT <cit.>.
For the SSEP, D=1, σ=ρ(1-ρ) <cit.>. The transformation to F leads to ρ = sin^2 F. We restrict F to the range [0, π/2] to ensure a positive σ^{1/2} = (1/2) sin 2F. While σ, σ' in (<ref>) become explicit, a solution for an arbitrary potential is challenging. Fortunately, an analytical solution can be obtained for the linear potential Ũ(x) = ΔŨ x. In that case (<ref>) is simply an autonomous equation. With the transformation y = cos 2F, and assuming that the density as well as y are monotonous functions, (<ref>) can be reduced to a first-order differential equation <cit.>
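As a quick check of the quoted transformation (with D=1 and σ = ρ(1-ρ)):
\[
F(\rho)=\int_0^{\rho}\frac{dz}{2\sqrt{z(1-z)}}=\arcsin\sqrt{\rho}
\;\Rightarrow\; \rho=\sin^2 F,\qquad
\sqrt{\sigma}=\sqrt{\rho(1-\rho)}=\sin F\cos F=\tfrac{1}{2}\sin 2F\,.
\]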
dy/dx = (1/2) ΔŨ √(1-y^2) √(1-y^2 + 8λ(y - y_0)),
y_0 = y|_x=0, y_1 = y|_x=1 = 1,
where λ = Λ/ΔŨ^2. Note that -1 ≤ y_0 ≤ 1. Here y_0 → 1 (-1) implies that the density at the reflecting boundary approaches zero (unity).
Eqs. (<ref>) and (<ref>) constitute an implicit solution for the survival probability of the SSEP, denoted by Φ_SSEP. However, it is worth stressing that one need not explicitly solve for y (and thus F) in order to obtain Φ_SSEP. Skipping details from <cit.>, we obtain
Φ_SSEP = (C_0/2)( y_0 - 1 + C_0 - C_2 + 4λ(C_1 - y_0 C_0) ),
where C_k = ∫_{y_0}^1 dy y^k / [√(1-y^2) √(1-y^2 + 8λ(y-y_0))],
for k=0,1,2. Fortunately, the C_k integrals are analytical and involve elliptic functions. Furthermore, Φ_SSEP is given in terms of (λ, y_0), which are implicit functions of ΔŨ and ρ_0.
To make the relations explicit, we notice that direct integration of (<ref>) leads to C_0 = ΔŨ/2. Also, recalling that ρ_0 = ∫ dx sin^2 F implies that 2ρ_0 = 1 - C_1/C_0.
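Both relations follow from the change of variables x → y dictated by Eq.ย (<ref>); sketching the steps,
\[
\int_0^1 dx=\frac{2}{\Delta\tilde U}\int_{y_0}^{1}\frac{dy}{\sqrt{1-y^2}\sqrt{1-y^2+8\lambda(y-y_0)}}=\frac{2C_0}{\Delta\tilde U}=1
\;\Rightarrow\; C_0=\frac{\Delta\tilde U}{2}\,,
\]
\[
\rho_0=\int_0^1 dx\,\sin^2F=\int_0^1 dx\,\frac{1-y}{2}=\frac{1}{2}\Big(1-\frac{C_1}{C_0}\Big)\,,
\]
using y = cos 2F and ∫_0^1 y dx = (2/ΔŨ) C_1 = C_1/C_0. The same manipulation applied to the SIP below (with y = cosh 2F) gives C_1 - C_0 = ΔŨ ρ_0.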
The regime of large ΔŨ is captured by 0 < 1+y_0 ≪ λ ≪ 1. In these limits, to leading order, λ = 2 e^{-ΔŨ(1-ρ_0)} and (1+y_0) = e^{-ΔŨ ρ_0}. Finally, we have
Φ_SSEP = D_0 L A e^{-ΔŨ(1-ρ_0)},
where A is a polynomial function of ρ_0 and ΔŨ <cit.>.
At this point, we connect the above with our announced result (<ref>). Eq.ย (<ref>) implies g = 1 - ρ_0 for the linear potential. In the exclusion process, the system's energy is minimized when the particles are packed as close to the potential minimum as possible. Consequently, the minimum energy occurs when the system is fully occupied between x=0 and x=ρ_0, and therefore U_top = U(x=ρ_0) (see Fig.ย <ref>).
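Explicitly, for the linear potential Ũ(x) = ΔŨ x the tightly packed configuration occupies the interval [0, ρ_0], so that
\[
U_{\rm top}=U(x=\rho_0)=\rho_0\,\Delta U
\;\Rightarrow\;
\frac{\Delta U-U_{\rm top}}{\Delta U}=1-\rho_0\,,
\]
consistent with g = 1 - ρ_0, provided Eq.ย (<ref>) identifies the effective barrier with ΔU - U_top; since Eq.ย (<ref>) appears earlier in the letter, this identification is quoted here as an assumption.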
Although solving Eq.ย (<ref>) is hard for generic monotonous potentials, the same physical argument can be used to infer the generalized AL. Indeed, the minimum energy, attained by tightly packing the exclusion particles in a monotonous potential, leads to U_top = U(x = ρ_0). To demonstrate this behavior, we employ a standard shooting-method algorithm to solve the eigen-problem (<ref>) for various potentials (see <cit.> for more details). The numerical results are in excellent agreement with Eq.ย (<ref>), thus validating the generalized AL for arbitrary monotonous potentials.
Attractive interaction.—We now turn our attention to the Symmetric Inclusion Process (SIP), where the interaction among the particles is attractive. In the SIP, the jump rate of a particle from site x with occupancy n_x to a neighboring site y with occupancy n_y is n_x(1+n_y). Thus, multiple particles can occupy a given site, and the dynamics makes particle clustering favorable
<cit.>.
For the SIP, one finds D=1, σ = ρ(1+ρ). The F-transformation implies ρ = sinh^2 F. Notice that, unlike for the SSEP, F is unbounded for the SIP. Similarly to the SSEP analysis, one can explicitly write the Euler-Lagrange equation (<ref>), but an explicit solution is again challenging for an arbitrary potential. However, some analytical progress can be made for the linear potential, which we discuss in the following.
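The analogue of the SSEP check holds here as well (with D=1 and σ = ρ(1+ρ)):
\[
F(\rho)=\int_0^{\rho}\frac{dz}{2\sqrt{z(1+z)}}=\operatorname{arcsinh}\sqrt{\rho}
\;\Rightarrow\; \rho=\sinh^2 F,\qquad
\sqrt{\sigma}=\sqrt{\rho(1+\rho)}=\sinh F\cosh F=\tfrac{1}{2}\sinh 2F\,.
\]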
In this case, Eq.ย (<ref>) becomes an autonomous equation, just as for the SSEP. Using the transformation y = cosh 2F and assuming the monotonicity of F leads us to <cit.>
dy/dx = -(1/2) ΔŨ √(y^2-1) √(y^2 - 1 + 8λ(y_0 - y)),
where we note that 1 ≤ y ≤ y_0 and y_0 = y|_x=0 may be unbounded. Similarly to the SSEP, one can express the survival probability Φ_SIP for the SIP as <cit.>
Φ_SIP = (C_0/2)( 1 - y_0 + C_2 - C_0 + 4λ(y_0 C_0 - C_1) ).
Here, we have re-defined the C_k integrals for the SIP
C_k = ∫_1^{y_0} y^k dy / [√(y^2-1) √(y^2-1+8λ(y_0-y))],
which are analytic. Integrating (<ref>) over the whole range implies C_0 = ΔŨ/2. Then, using ρ_0 = ∫ dx sinh^2 F, we find C_1 - C_0 = ΔŨ ρ_0.
The large-ΔŨ regime occurs for λ ≪ μ = y_0 λ ≪ 1. Then, following <cit.>, we find λ = e^{-ΔŨ(1+ρ_0)} and μ = 2 e^{-ΔŨ} to leading order. This results in
Φ_SIP = D_0 L A e^{-ΔŨ},
where A is once again a polynomial function of ρ_0 and ΔŨ <cit.>.
In the non-interacting limit ρ_0 → 0, μ and λ become comparable and we thus recover the AL Φ_SIP ∼ e^{-ΔŨ} <cit.>.
Eq.ย (<ref>) remarkably coincides with the single-particle AL. This stems from the underlying physics of the SIP, where the attractive interactions imply that the minimum-energy configuration is attained when all the particles bunch at the potential minimum. Comparing Φ_SIP with Eq.ย (<ref>) results in g=1. In fact, this holds for any monotonous potential. To demonstrate this fact, we employ our shooting algorithm to solve (<ref>) for different potentials. The numerical results, illustrated in Fig.ย <ref>, validate the AL for the SIP regardless of the particle density ρ_0. We should stress that the simplicity of Eq.ย (<ref>) can be deceptive, since there is no simple or trivial way to derive it.
Discussion.—Understanding escape times in a thermal activation process while accounting for many-body interactions is at the heart of this letter.
To this end, we have studied the survival probability of interacting particles with diffusive dynamics in a potential trap. In the limit of weak thermal fluctuations, ΔU ≫ D_0, we show, using
the seminal macroscopic fluctuation theory, that the Arrhenius law can be generalized (Eq.ย (<ref>)), with ΔU g being an interaction-dependent effective activation energy. We demonstrate our result for two distinct interaction classes. Particles that repel each other show a decrease in g, leading to a facilitated escape from the potential. On the other hand, for purely attractive particles, barring condensation, we arrive, quite surprisingly and non-trivially, at an effective single-particle problem. It should be stressed that, despite its technical simplicity and intuitively appealing
physical interpretation, any attempt to generalize the insight gained from the single-particle AL to many particles with arbitrary interactions remains exorbitantly challenging. Furthermore, the generalized AL holds quite a few surprises in store.
Naively, one would expect the generalized AL to be more restrictive from an experimental standpoint. However, Eq.ย (<ref>) suggests the opposite, with two immediate interesting directions. First, consider, e.g., single-file colloids in a potential trap, represented by exclusion particles. By controlling the particle density in the trap, one can infer from Φ the exact shape of the potential <cit.>. Second, experimental control of the potential affecting the colloids, as in <cit.>, together with Eq.ย (<ref>), allows one to infer the particle density in the system.
Beyond these applications, this work opens new avenues of research. First, it will be important to extend Eq.ย (<ref>) to arbitrary potential landscapes, possibly also to higher dimensions <cit.>. Additionally, it is interesting to consider interactions that cannot be classified as purely attractive or repulsive, e.g., the Katz-Lebowitz-Spohn models <cit.>. A yet unexplored path is to consider the AL for multi-species particles, where D, σ become matrices <cit.>. Exploring how inter-species interactions can speed up or slow down the activation time would be illuminating and potentially useful both in theory and in experiments involving transport of particles or ions through channels.
Acknowledgements.—
A.P. gratefully acknowledges research support from the Department of Science and Technology, India, SERB Start-up Research Grant Number SRG/2022/000080 and the Department of Atomic Energy, India. O.S. thanks Tridib Sadhu and Baruch Meerson for fruitful discussions. O.S. also acknowledges the support of the Erwin Schrödinger International Institute for Mathematics and Physics during the thematic program DPS22, where the discussions took place.
| http://arxiv.org/abs/2306.03058v2 | 20230605172933 | Shoal: Improving DAG-BFT Latency And Robustness | [ "Alexander Spiegelman", "Balaji Arun", "Rati Gelashvili", "Zekun Li" ] | cs.DC | [ "cs.DC" ] |
The Narwhal system is a state-of-the-art Byzantine fault-tolerant scalable architecture that involves constructing a directed acyclic graph (DAG) of messages among a set of validators in a Blockchain network.
Bullshark is a zero-overhead consensus protocol on top of Narwhal's DAG that can order over 100k transactions per second.
Unfortunately, the high throughput of Bullshark comes at a latency cost due to the DAG construction, which increases latency compared to the state-of-the-art leader-based BFT consensus protocols.
We introduce Shoal, a protocol-agnostic framework for enhancing Narwhal-based consensus.
By incorporating leader reputation and pipelining support for the first time, Shoal significantly reduces latency.
Moreover, the combination of properties of the DAG construction and the leader reputation mechanism enables the elimination of timeouts in all but extremely uncommon scenarios in practice, a property we name "prevalent responsiveness" (it strictly subsumes the established and often desired "optimistic responsiveness" property for BFT protocols).
We integrated Shoal, instantiated with Bullshark, the fastest existing Narwhal-based consensus protocol, in an open-source blockchain project, and provide experimental evaluations demonstrating up to 40% latency reduction in failure-free executions and up to 80% reduction in executions with failures, compared against the vanilla Bullshark implementation.
<ccs2012>
<concept>
<concept_id>10002978.10003006.10003013</concept_id>
<concept_desc>Security and privacyย Distributed systems security</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Security and privacy~Distributed systems security
Shoal: Improving DAG-BFT Latency And Robustness
================================================
ยง INTRODUCTION
Byzantine fault tolerant (BFT) systems, including consensus protocolsย <cit.> and state machine replicationย <cit.>, have been a topic of research for over four decades as a means of constructing reliable distributed systems.
Recently, the advent of Blockchains has underscored the significance of high performance.
While Bitcoin handles approximately 10 transactions per second (TPS), the proof-of-stake committee-based blockchainsย <cit.> are now engaged in a race to deliver a scalable BFT system with the utmost throughput and minimal latency.
Historically, the prevailing belief has been that reducing communication complexity was the key to unlocking high performance, leading to the pursuit of protocols with linear communication. However, this did not result in drastic enough improvements in the throughput, falling significantly short of the current blockchain network targets. For example, the state-of-the-art Hotstuffย <cit.> protocol in this line of work only achieves a throughput of 3500 TPSย <cit.>.
A recent breakthrough, however, stemmed from the realization that data dissemination is the primary bottleneck for leader-based protocols, and it can benefit from parallelizationย <cit.>. The Narwhal systemย <cit.> separated data dissemination from the core consensus logic and proposed an architecture where all validators simultaneously disseminate data, while the consensus component orders a smaller amount of metadata. A notable advantage of this architecture is that not only it delivers impressive throughput on a single machine, but also naturally supports scaling out each blockchain validator by adding more machines. The Narwhal paperย <cit.> evaluated the system in a geo-replicated environment with 50 validators and reported a throughput of 160,000 TPS with one machine per validator, which further increased to 600,000 TPS with 10 machines per validator.
These numbers are more in line with the ambitions of modern blockchain systems. Consequently, Narwhal has garnered significant traction within the community, resulting in its deployment in Suiย <cit.> and ongoing development in Aptosย <cit.> and Celoย <cit.>.
Developing a production-ready reliable distributed system is challenging, and integrating intricate consensus protocols only adds to the difficulty.
Narwhal addresses this issue by abstracting away networking from the consensus protocol. It constructs a non-equivocating round-based directed acyclic graph (DAG), a concept initially introduced by Alephย <cit.>. In this design, each validator contributes one vertex per round, and each vertex links to n-f vertices in the preceding round.
Each vertex is disseminated via an efficient reliable broadcast implementation, ensuring that malicious validators cannot distribute different vertices to different validators within the same round.
With networking abstraction separated from the details of consensus, the DAG can be constructed without contending with complex mechanisms like view-change or view-synchronization.
During periods of network asynchrony, each validator may observe a slightly different portion of the DAG at any given time. However, the structure facilitates a simpler ordering mechanism compared to monolithic BFT protocols. In DAG-based consensus protocols, vertices represent proposals, edges represent votes, and the concept of quorum intersection guarantees that validators can consistently order all DAG vertices. This provides efficient consensus because ordering is done via local computation only, without any additional communication cost.
Narwhal-based consensus protocols
As discussed, the idea shared by Narwhal-based consensus protocols is to interpret the DAG structure as the consensus logicย <cit.>, but they differ in the networking assumptions and the number of rounds required for vertex ordering.
However, all three protocols share a common structure.
Prior to the protocol initiation, there is an a-priori mapping from specific rounds to leaders shared among all validators.
In the asynchronous protocols (DAG-Rider and Tusk), this mapping to the sequence of leaders is hidden behind threshold cryptography and revealed throughout the protocol.
We use the term anchor to refer to the vertex associated with the round leader in each relevant round.
The DAG local ordering process by each validator is divided into two phases. First, each validator determines which anchors to order (the rest are skipped). Then, the validators sequentially traverse the ordered anchors, deterministically ordering all DAG vertices contained within the causal histories of the respective anchors. The primary considerations that affect the protocol latency are as follows
* Bad leaders. When a validator is malicious or not fast enough, its vertex may not be included in the DAG.
In the case of leaders, the absence of anchors affects the ordering latency of all vertices in previous rounds that are not already ordered.
These vertices can only be ordered as a part of a causal history of a future anchor, directly impacting their latency.
* Sparse anchors. In Narwhal-based consensus protocols, not every round includes an anchor. Consequently, vertices located farther from the next anchor must wait for additional rounds before they can be ordered.
Shoal framework
This paper presents Shoal: a framework addressing the aforementioned challenges by incorporating leader reputation and pipelining mechanisms into all Narwhal-based consensus protocols.
So far, all available open-source implementations of Narwhal and Bullshark, including Meta's[https://github.com/facebookresearch/narwhal/blob/main/consensus/src/lib.rs] and the production deployment on Sui[https://github.com/MystenLabs/sui/blob/main/narwhal/consensus/src/bullshark], lack these features, while our evaluations demonstrate that they can provide significant performance improvements.
Leader reputation is an often overlooked concept in theoretical research, yet it holds crucial importance for practical performance.
In practice, Byzantine failures are rare due to robust protection and economic incentives for validators to adhere to the protocol. (Moreover, Narwhal-based DAG constructions, which provide non-equivocation, significantly reduce the range of potential Byzantine behavior).
Thus, the most common failure scenarios in Blockchain (esp. in Narwhal-based) systems involve validators who struggle to keep up, which can occur due to temporary crashes, slower hardware, or geographical distance. If unresponsive validators repeatedly become leaders, progress is inevitably impeded and degrades system performance. The leader reputation schemes select leaders based on the history of their recent activity, as introduced in Diemย <cit.> and later formalized inย <cit.>.
In the context of Narwhal-based consensus, pipelining means having an anchor in every round, which would result in improved latency for non-anchor vertices.
The main challenge
While the ability to order the DAG locally, without extra communication, contributes to the scalability of Narwhal-based consensus, it poses a significant challenge to supporting leader reputation and pipelining.
The leader reputation problem is simpler to solve for monolithic BFT consensus protocols. While the validators may disagree on the history that determines the next leader's identity, the worst that can happen is a temporary loss of liveness until view synchronization, i.e. the quorum of validators can eventually recover by agreeing on a fall-back leader.
This exact method was utilized inย <cit.>, electing the fall-back leaders by a simple round-robin.
In contrast, when all communication is done upfront for building the DAG, the safety of a consensus protocol relies on a key property of the local computation that all validators will decide to order the same set of anchors.
This must hold despite the local views of the DAG possibly differing among the validators across multiple rounds.
Hence, selecting the round leaders dynamically based on reputation (as opposed to the a-priori mapping) seems impossible due to a circular dependency: we need to agree on mapping to solve consensus, but we need consensus to agree on a new mapping.
For pipelining, even if all validators agree on the mapping, they also must agree on whether to order or skip each anchor.
Our attempts to solve the problem by delving into the inner workings of the protocol and exploring complex quorum intersection ordering rules have not been fruitful.
Intuitively, this is because consensus requires a voting round after each anchor proposal and the next anchor should link to the decisions (votes) on the previous one.
Our solution.
In Shoal, we lean into the power of performing computations on the DAG, in particular the ability to preserve and re-interpret information from previous rounds. For leader reputation, this allows bootstrapping the seemingly circular dependency on consensus, while for pipelining, it allows combining multiple instances of the protocol in a suitable manner.
In fact, Shoal runs multiple instances of the protocol one after the other, where the trick is to agree on the switching point based on the following observation:
For any Narwhal-based consensus protocol, since all validators agree on which anchors to order vs skip, they in particular agree on the first ordered anchor.
With this observation in mind,
each validator can start locally interpreting its view of the DAG by running an instance of its favorite protocol until it determines the first ordered anchor.
Since validators agree on this anchor, they can all deterministically start a new protocol instance in the following round.
Note that this too, happens locally, from a validator's perspective, as a part of re-interpreting the DAG.
As a result, Shoal ensures the following
* Leader reputation: validators select new anchors for future rounds based on the information available in the causal history of the ordered anchors.
* Pipelining: Shoal allocates an anchor in the first round of the new instance. That way, if the first anchor in every instance is ordered, we get an anchor in every round, providing the pipelining effect.
Our system and prevalent responsiveness
We implemented Shoal in the open-source codebase of one of the live blockchain networks and instantiated it with the partially synchronous version of Bullshark[Shoal of bull sharks.].
In this setting, we also discovered a way to eliminate timeouts in all except extremely rare scenarios, a property we refer to as prevalent responsiveness.
The Shoal design with prevalent responsiveness demonstrates further performance improvements in our evaluations.
Added motivation to avoid timeouts in as many situations as possible comes from a purely practical point of view, as (1) when timeouts are common, the duration affects the system performance, but in a way that is non-trivial to configure in an optimal way as it is highly environmentally (network) dependent; and (2) timeout handling is known to add significant complexity to the implementation logic for managing potential state space of validators.
Monolithic leader-based BFT protocols use timeouts to trigger protocol progress every time a leader is faulty or slow, while optimistic responsiveness property, popularized by the HotStuffย <cit.> protocol, effectively eliminates timeout implications in ideal scenarios when the network is synchronous and there are no failures.
However, when failures do occur, all validators must still wait until the timeout expires before transitioning to the next leader.
Utilizing the inherent properties of the DAG construction and the leader reputation mechanism, we ensure that Shoal makes progress at network speed under a much larger set of scenarios than optimistically responsive protocols would, which makes Shoal with partially synchronous Bullshark prevalently responsive. In Shoal, validators do wait for timeouts when a few consecutive leaders crash and the corresponding anchors are not ordered.
While the FLP <cit.> impossibility result dictates that there has to be a scenario that requires a timeout, the Shoal design aligns this FLP scenario with one that is extremely improbable in practice (multiple, e.g., 10, consecutive skipped anchors). Conceptually, this is similar to how randomized protocols align FLP scenarios with events of probability 0, solving asynchronous consensus with probability 1 <cit.>.
All available Bullshark implementations use timeouts to ensure honest validators wait for slow anchors even if 2f+1 other vertices were already delivered.
By eliminating timeouts, Shoal immediately reduces latency when a leader is faulty, as the corresponding anchors will never be delivered and it is best to advance to the next round as fast as possible.
If the leader is not crashed but just slower, validators may skip anchors that they could have ordered had they waited a little longer.
This, however, is where the leader reputation mechanism of Shoal shines, filtering out slow validators that constantly delay new rounds and allowing the DAG to proceed at network speed while ordering most anchors.
Our experimental evaluation demonstrates up to 40% reduction in latency against vanilla Bullshark protocol implementation when there are no failures in the system, and up to 80% reduction in latency when there are failures. We provide experiments specifically designed to give insights into the impact of the improvements separately, i.e. pipelining, leader reputation and eliminating the timeouts (prevalent responsiveness).
In summary, the paper focuses on improving latency and robustness in DAG-based protocols.
It provides Shoal, a framework that enhances any Narwhal-based consensus protocol with (1) a leader reputation mechanism that prevents slow, isolated, or crashed validators from becoming leaders, (2) pipelining support that ensures every round on the DAG has an anchor, and (3) the elimination of timeouts in many cases, further reducing latency.
The remaining sections of the paper are organized as follows:
Sectionย <ref> provides background information on DAG-BFT and highlights the main property utilized in this paper.
Sectionย <ref> introduces our pipelining approach, while Sectionย <ref> presents the leader reputation solution in .
In Sectionย <ref>, we prove correctness of the proposed framework.
Sectionย <ref> describes the implementation details and discusses timeouts. Sectionย <ref> presents the results of our evaluation.
Sectionย <ref> discusses related work, and finally, Sectionย <ref> concludes the paper.
ยง DAG BFT
We start by providing the necessary background on Narwhal-based BFT consensus (Section <ref>) and define a common property (Section <ref>) satisfied by such consensus protocols. We rely on this property while designing Shoal to enhance a given baseline protocol with pipelining and leader reputation, thereby reducing latency.
ยง.ยง Background
The concept of DAG-based BFT consensus, initially introduced by HashGraphย <cit.>, aims to decouple the network communication layer from the consensus logic. In this approach, each message consists of a collection of transactions and references to previous messages. These messages collectively form an ever-growing DAG, with messaging serving as vertices and references between messages serving as edges.
In Narwhal, the DAG is round-based, similar to Alephย <cit.>. In this approach, each vertex within the DAG is associated with a round number.
In order to progress to round r, a validator must first obtain n-f vertices (from distinct validators) belonging to round r-1.
Every validator can broadcast one vertex per round, with each vertex referencing a minimum of n-f vertices from the previous round.
The causal history of a vertex v refers to the sub-graph that starts from v. Figure <ref> illustrates a validator's local view of a round-based DAG.
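To fix notation, the following minimal Rust sketch (Rust being the language of the implementation described later in the paper) shows the information a round-based DAG vertex carries; all type and field names are illustrative assumptions, not the actual Narwhal data structures.

// Illustrative sketch only; names are hypothetical, not the Narwhal API.
type ValidatorId = u64;
type Digest = u64; // simplified vertex identifier for the sketches

struct Vertex {
    round: u64,                // DAG round of this vertex
    source: ValidatorId,       // validator that reliably broadcast it
    payload: Vec<u8>,          // batch of client transactions
    strong_links: Vec<Digest>, // references to >= n - f vertices of round - 1
    weak_links: Vec<Digest>,   // optional references to older rounds (weak links)
}

// A validator advances from round r to round r + 1 once it has delivered
// n - f round-r vertices from distinct validators.
fn can_advance(delivered_in_round: usize, n: usize, f: usize) -> bool {
    delivered_in_round >= n - f
}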
To disseminate messages, Narwhal uses an efficient reliable broadcast implementation that guarantees:
* Validity: if an honest validator has a vertex v in its local view of the DAG, then it also has all the causal history of v.
* Eventual delivery: if an honest validator has a vertex in round r by validator p in its local view of the DAG, then eventually all honest validators have a vertex in round r by validator p in their local views of the DAG.
* Non-equivocation: if two honest validators have a vertex in round r by validator p in their local views of the DAG, then the vertices are identical.
Inductively applying Validity and Non-equivocation, we get:
* Completeness: if two honest validators have a vertex v in round r by validator p in their local views of the DAG, then v's causal histories are identical in both validators' local views of the DAG.
In simple words, Narwhal construction guarantees that
* All validators eventually see the same DAG; and
* Any two validators that have the same vertex v locally also agree on the whole causal history of v (the contents of vertices and edges between them).
DAG-Rider / Tusk / Bullshark
DAG-Rider, Tusk, and Bullshark are all algorithms to agree on the total order of all vertices in the DAG with no additional communication overhead.
Each validator independently looks at its local view of the DAG and orders the vertices without sending a single message. This is done by interpreting the structure of the DAG as a consensus protocol, where a vertex represents a proposal and an edge represents a vote.
DAG-Riderย <cit.> and Tuskย <cit.> are randomized protocols designed to tolerate full asynchrony, which necessitates a larger number of rounds and consequently, a higher latency. Bullsharkย <cit.> also provides a deterministic protocol variant with a faster ordering rule, relying on partial synchrony for liveness.
While the specific details are not required to understand this paper, next we explain the high-level structure of these protocols and define a property they all share.
ยง.ยง Common framework
Narwhal-based consensus protocols have the following common abstract structure:
* Pre-determined anchors. Every few rounds (the number depends on the protocol) there is a round with a pre-determined leader. The vertex of the leader is called an anchor. In the partially synchronous version of Bullshark, the leaders are a-priori known. In the asynchronous protocols (DAG-Rider, Tusk, asynchronous Bullshark) the leaders are hidden and revealed during the DAG construction.
* Order the anchors. All validators independently decide which anchors to skip and which to order. The details differ among the protocols, although they all rely on quorum intersection in the DAG structure. The key aspect is that each honest validator locally decides on a list of anchors, and all lists share the same prefix.
* Order causal histories. Validators process their list of ordered anchors one by one, and for each anchor order all previously unordered vertices in their causal history by some deterministic rule. By Completeness, all validators see the same causal history for any anchor, so all validators agree on the total order.
An illustration of the ordering logic appears in Figureย <ref>.
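A compact sketch of step (iii), continuing the Rust notation from the earlier sketch: given the agreed list of ordered anchors, each validator deterministically flushes each anchor's not-yet-ordered causal history. The traversal and the tie-breaking rule shown here are illustrative placeholders, not the exact rule of DAG-Rider, Tusk, or Bullshark.

use std::collections::{HashMap, HashSet};

// Local view of the DAG: each vertex identifier maps to its (strong and
// weak) parents. `Digest` is the simplified identifier from the sketch above.
type Dag = HashMap<Digest, Vec<Digest>>;

fn order_causal_histories(
    dag: &Dag,
    ordered_anchors: &[Digest],
    already_ordered: &mut HashSet<Digest>,
) -> Vec<Digest> {
    let mut total_order = Vec::new();
    for &anchor in ordered_anchors {
        // Depth-first walk over the anchor's causal history, collecting
        // every vertex not already ordered by a previous anchor.
        let mut stack = vec![anchor];
        let mut newly = Vec::new();
        while let Some(v) = stack.pop() {
            if already_ordered.insert(v) {
                newly.push(v);
                if let Some(parents) = dag.get(&v) {
                    stack.extend(parents.iter().copied());
                }
            }
        }
        // Any fixed deterministic rule works, e.g. by (round, source id);
        // the sketch simply sorts by identifier.
        newly.sort_unstable();
        total_order.extend(newly);
    }
    total_order
}

By the Completeness property, every validator sees the same causal history for an agreed anchor, so any such fixed rule yields the same total order everywhere.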
The key correctness argument for all the above mention consensus protocols relies on the fact that all validators agree on which anchors to order and which to skip. In particular, they will all agree on the first anchor that no validator skips. More formally, the abstract property of the Narwhal-based consensus protocols that our framework relies on is the following:
Given a Narwhal-based protocol P, if all honest validators agree on the mapping from rounds to leaders before the beginning of an instance of P, then they will agree on the first anchor each of them orders during the execution of P.
The proof follows immediately from Proposition 2 in DAG-Riderย <cit.> and Corollary C. in Bullsharkย <cit.>.
ยง SHOAL
Shoal is protocol-agnostic and can be directly applied to all Narwhal-based consensus protocols, i.e., DAG-Rider, Tusk, and Bullshark.
It makes no changes to the protocols but rather combines their instances in essentially a โblack-box" manner.
The entire correctness argument can be derived solely from Propertyย <ref>.
ยง.ยง Pipelining
A natural progression after the high throughput scalability of BFT consensus achieved by Narwhal is to reduce latency as much as possible.
To this end, Bullshark already halved DAG-rider's latency for ordering anchors from 4 rounds to 2 by adding an optimistic path under the partially synchronous network communication assumption.
Intuitively, it is hard to imagine a latency lower than 2 rounds since, in the interpretation of the DAG structure as a consensus protocol, one round is needed to "propose" the anchor, while another is needed for "voting".
However, only anchors can be ordered in 2 rounds.
The rest of the vertices are ordered as part of the causal history of some anchor and require a minimum latency of 3 or 4 rounds.
This is because the vertices in a "voting" round require (minimum) 3 rounds, while vertices that share a round with an anchor have to wait for at least the next anchor to be ordered, thus requiring (minimum) 4 rounds.
An illustration of the ordering latency for different vertices appears in Figureย <ref>.
Ideally, to reduce the latency of ordering vertices we would like to have an anchor in every round.
This would allow for non-anchor vertices to be ordered as a part of some anchor's causal history in each and every round, making latency and throughput of the protocol less spiky.
In Bullshark, it would become possible for every non-anchor vertex to be ordered in 3 rounds (see Figureย <ref>), while in DAG-Rider the latency may be reduced from 10 rounds to 7 in expectation.
Solution
Let P be any Narwhal-based consensus protocol.
At a high level, the core technique in Shoal is to execute P until it, as a consensus protocol, guarantees agreement on some part of the DAG for all validators.
Starting from the round following the agreed part of the DAG, all validators can switch over and start executing a new instance of P (or a different Narwhal-based consensus protocol, if desired) from scratch.
While the instances do not execute concurrently, this scheme effectively pipelines the "proposing" and "voting" rounds. As a result, in Shoal, in the good case an anchor is ordered in every round.
The pseudocode appears in Algorithmย <ref>.
In the beginning of the protocol, all validators interpret the DAG from round 0, and the function F is some pre-defined deterministic mapping from rounds to leaders.
Each validator locally runs P, using F to determine the anchors, until it orders the first anchor, denoted by A, in round r.
The key is that, by the correctness of P as stated in Property <ref>, all validators agree that A is the first ordered anchor (previous anchors are skipped by all validators).
Consequently, each validator can re-interpret the DAG from the next round (round r+1) according to a new instance of the protocol P (or another Narwhal-based protocol) executing from scratch from round r+1.
To order the DAG, much like in the original P, the validators deterministically order A's causal history and, by the Completeness property, arrive at the same total order over the same vertices.
Note that without re-interpreting the DAG according to a new instance of P starting from round r+1, the next anchor according to the previously executing instance of the protocol would appear in a strictly later round (e.g., r+4 for DAG-Rider and r+2 for Bullshark).
The above process can continue for as long as needed.
An illustration appears in Figureย <ref>.
Note that in Algorithm <ref>, the function F is fixed and used by each instance of protocol P. In a true "black-box" implementation, the round numbers could differ from the perspective of the executing protocol instance (i.e., start from 0 for each new instance). However, F is fixed and always assigns the same anchor to any given round r in Shoal, regardless of the protocol instance used for this round.
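Putting the pipelining loop into the same sketch notation: Shoal treats the inner protocol as a black box that, given a starting round and the anchor mapping F, reports the first anchor it orders; Property <ref> guarantees all honest validators obtain the same answer, so they can all restart from the next round. The trait and function names below are illustrative, reuse the types of the previous sketches, and are not the interfaces of the actual implementation.

use std::collections::HashSet;

#[derive(Clone, Copy)]
struct Anchor { digest: Digest, round: u64 }

// Pre-defined deterministic mapping from rounds to leaders (function F).
type AnchorMap = Vec<ValidatorId>;

// Black-box view of a Narwhal-based protocol P: interpret the local DAG from
// `start_round` with anchor mapping `f` and return the first ordered anchor.
trait NarwhalConsensus {
    fn first_ordered_anchor(&self, dag: &Dag, start_round: u64, f: &AnchorMap) -> Option<Anchor>;
}

fn pipelined_order(dag: &Dag, p: &impl NarwhalConsensus, f: &AnchorMap) -> Vec<Digest> {
    let mut total_order = Vec::new();
    let mut done = HashSet::new();
    let mut round = 0;
    while let Some(anchor) = p.first_ordered_anchor(dag, round, f) {
        // Flush the anchor's causal history with the deterministic rule above.
        total_order.extend(order_causal_histories(dag, &[anchor.digest], &mut done));
        // Re-interpret the DAG with a fresh instance of P from the very next
        // round, so that every round can host an anchor (pipelining). With
        // leader reputation, F would additionally be recomputed here from the
        // agreed causal history of `anchor`.
        round = anchor.round + 1;
    }
    total_order
}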
Note that with Shoal, ordering an anchor vertex requires 2 rounds, while all other vertices require 3.
In Sectionย <ref> we discuss a potential direction to reduce the latency for non-anchor vertices by treating all vertices as anchors.
Intuitively, we can use Propertyย <ref> to instantiate a binary agreement to decide whether to commit each vertex individually.
ยง.ยง Leader Reputation
BFT systems are designed to tolerate Byzantine failures in order to provide as strong as possible worst-case reliability guarantees.
However, actual Byzantine failures rarely occur in practice.
This is because validators are highly secured and have strong economic incentives to follow the protocol.
Slow or crashed leaders are a much more frequent occurrence which can significantly degrade the system performance.
In Narwhal-based BFT, if the leader of round r crashes, no validator will have the anchor of round r in its local view of the DAG.
Thus, the anchor will be skipped and no vertices in the previous round can be ordered until some later point due to an anchor in a future round.
The way to deal with missing anchors is to somehow ensure that the corresponding leaders are less likely to be elected in the future.
A natural approach to this end is to maintain a reputation mechanism, assigning each validator a score based on the history of its recent activity. A validator that has been participating in the protocol and has been responsive would be assigned a high score. Otherwise, the validator is either crashed, slow, or malicious and a low score is assigned.
The idea is then to deterministically re-compute the pre-defined mapping from rounds to leaders every time the scores are updated, biasing towards leaders with higher scores. In order for validators to agree on the new mapping, they should agree on the scores, and thus on the history used to derive the scores.
Such a mechanism was previously proposed inย <cit.> and implemented in the Diem Blockchainย <cit.> to enhance the performance of Jolteonย <cit.>, a leader-based consensus protocol.
One important property of Jolteon is that safety is preserved even if validators disagree on the identity of the leader, while liveness is guaranteed as long as they eventually converge.
Hence, validators could re-assign the reputation scores every time a new block was committed, even though during asynchronous periods it was possible for different validators to commit the same block in different rounds.
Unfortunately, this is not the case for Narwhal-based BFT. If validators disagree on the anchor vertices, they will order the DAG differently and thus violate safety.
This makes the leader reputation problem strictly harder in Narwhal-based BFT.
Solution
Shoal constructs a protocol identical to a given Narwhal-based consensus protocol P, but, to support leader reputation, anchors are selected according to a function F that takes into account the validators' recent activity, e.g., the number of vertices they have successfully added to the DAG.
The function F should be updated as frequently as possible and aim to select validators with a better reputation as leaders more often than their counterparts with a lower reputation.
In Shoal, pipelining and leader reputation can be naturally combined, as they both utilize the same core technique of re-interpreting the DAG after agreeing on the first ordered anchor.
In fact, the pseudocode for Shoal, which appears in Algorithm <ref>, only differs from Algorithm <ref> by the addition of line <ref>.
The idea is that the validators simply need to compute a new mapping, starting from round r+1, based on the causal history of the ordered anchor A in round r (which they are guaranteed to agree on by Property <ref>). Then, the validators start executing a new instance of P from round r+1 with the updated anchor selection function F.
Our solution is protocol agnostic and can be directly applied to all Narwhal-based consensus protocols, i.e., DAG-Rider, Tusk, and Bullshark. An illustration can be found in Figure <ref>.
Shoal makes no changes to the protocols but rather combines their instances, and the entire correctness argument can be derived solely from Property <ref>.
ยง CORRECTNESS
To prove the correctness of Shoal (Algorithm <ref>), we assume that the underlying protocol satisfies Property <ref>, which we will use inductively.
Let P be a Narwhal-based DAG-BFT protocol that satisfies Propertyย <ref>.
Let D be a round-based DAG, and assume a function F, known to all validators, that maps rounds to anchors.
Then all the lists of anchors locally ordered by honest validators executing Shoal with P according to F share the same prefix.
Proof is by induction on the ordered anchors.
Base: We need to show that all honest validators agree on the first anchor.
Since Shoal starts by running P until the first anchor is ordered, the base case follows immediately from Property <ref>.
Step: Assume all honest validators agree on the first k ordered anchors, we need to prove that they agree on anchor k+1.
First, we show that all honest validators agree on the new function F (Lineย <ref> in Algorithmย <ref>).
This holds because the new function F is deterministically computed according to the information in k's causal history, and by the Completeness property of the DAG, all honest validators have the same causal history of anchor k in their local view.
Next, let r be the round of anchor k.
By the inductive assumption, all honest validators agree on r.
Thus, all honest validators start the next instance of P in the same round r+1.
Now consider a DAG D' that is identical to D except it does not have the first r rounds.
By Propertyย <ref>, all validators that run P with the new function F on D' agree on the first ordered anchor in D'.
Therefore, all validators agree on anchor k+1 in D.
Let P be a Narwhal-based DAG-BFT protocol that satisfies Propertyย <ref>.
Shoal with P satisfies total order.
By Lemmaย <ref>, all validators order the same anchors. The theorem follows from the DAG Completeness property as all validators follow the same deterministic rule to order the respective causal histories of the ordered anchors.
ยง IMPLEMENTATION AND PREVALENT RESPONSIVENESS
We have implemented Narwhal and the partially synchronous version of Bullshark as part of a publicly available open-source blockchain project[In order to uphold the anonymity requirement of the submission, we do not disclose the name of the blockchain project.]. This blockchain is live and the process of productionizing our implementation is underway.
The code is written in Rust, utilizing Tokio[<https://tokio.rs>] for asynchronous networking, BLSย <cit.> implemented over BLS12-381 curves for signatures, RocksDB[<https://rocksdb.org>] for persistent data storage, and the Noise[<https://github.com/noiseprotocol/noise_spec>] protocol for authenticated messages.
ยง.ยง Vanilla Bullshark
We implemented Bullshark according toย <cit.>, but additionally incorporated weak links perย <cit.> in our DAG construction.
Observing n-f vertices in a round is sufficient for progressing to the next round.
Therefore, without weak links, slow validators may consistently lag behind others in broadcasting their vertices and thus may consistently fail to add their vertices to the DAG.
This will incur significant latency for their client transactions.
Weak links from a vertex can reference vertices from earlier rounds in addition to the normal (strong) links to n-f vertices from the previous round.
These weak links are used when establishing the causal history of ordered anchors and thus facilitate the inclusion of transactions contributed by the slow validators into the total order.
We refer to this implementation as Vanilla Bullshark.
It is important to note that adding the support for weak links increases the average latency compared to the figures presented inย <cit.>, which did not employ the weak links.
ยง.ยง Eliminating Timeouts
The short paper for the stand-alone partially synchronous version of Bullsharkย <cit.> assumes the DAG is given and focuses on the ordering of its vertices. On the other hand, full Bullshark is an asynchronous protocol with a fast path under partial synchrony. The full Bullshark paperย <cit.> describes how to build the DAG and in particular, the incorporation of timeouts to support the fast path.
Validators in Bullshark must observe n-f vertices in a round to advance to the next round.
Even rounds have anchors, while vertices in odd rounds determine the "voting" pattern.
Full Bullshark uses the following timeouts for every validator to support the fast path:
* Even-round: wait until the anchor of the round is delivered (or the timeout expires).
* Odd-round: wait until 2f+1 vertices that link to the anchor in the previous round are delivered (or the timeout expires).
The rationale for the above logic is to help order the anchor within 2 rounds.
However, part of the contribution of this paper is to eliminate these timeouts in such a way that actually significantly improves latency, according to our evaluation. Having fewer cases where timeouts can occur also inherently simplifies the potential state space and thus, the implementation of the protocol.
In Sectionย <ref>, we refer to even-rounds as anchor rounds and to odd-rounds as vote rounds.
Vanilla Bullshark w/o vote timeout
In full Bullshark, 2f+1 votes are required to order an anchor.
Without timeouts in odd rounds, a Byzantine adversary can prevent the fast path from making progress even during synchrony.
As long as the Byzantine validators deliberately do not link to the anchor, and even one of their vertices is delivered among the first 2f+1 vertices an honest validator receives in an odd round, the honest validator will not be able to order the anchor.
However, we discovered that we can completely eliminate timeouts in odd rounds in the partially synchronous variant of Bullshark.
The anchor ordering rule in this case is f+1 votesย <cit.>.
As a result, even if f out of the first 2f+1 vertices delivered to a validator in a round come from Byzantine validators (and do not link to the anchor), the remaining f+1 vertices will link to the anchor, thanks to the even-round timeout, and are sufficient to order it.
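To make the counting argument concrete, a small sketch in the same notation (the constants are only an example), assuming the even-round timeout made every honest voter wait for the anchor:

// A validator leaves the voting round as soon as n - f = 2f + 1 of its
// vertices have been delivered, without any timeout.
fn can_leave_vote_round(delivered: usize, f: usize) -> bool {
    delivered >= 2 * f + 1
}

// Partially synchronous Bullshark orders the anchor once f + 1 of the
// delivered voting vertices link to it.
fn anchor_ordered(links_to_anchor: usize, f: usize) -> bool {
    links_to_anchor >= f + 1
}

fn main() {
    let f = 3; // example fault threshold, n = 3f + 1 = 10
    // Worst case among the first 2f + 1 delivered voting vertices: f of them
    // come from Byzantine validators that deliberately omit the anchor link,
    // while the remaining f + 1 honest vertices do link to it.
    let honest_links = (2 * f + 1) - f;
    assert!(can_leave_vote_round(2 * f + 1, f));
    assert!(anchor_ordered(honest_links, f)); // the anchor is still ordered
}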
Baseline Bullshark
The FLP impossibility result <cit.> dictates that any deterministic protocol providing liveness under partial synchrony must use timeouts.
In Bullshark, without timeouts in the even rounds, an honest leader that is even slightly slower than the fastest 2f+1 validators will struggle to get its anchor linked by other vertices.
As a result, the anchor is unlikely to be ordered.
The timeout, therefore, ensures that all honest validators link to anchors during periods of synchrony (as long as the leader has not crashed and actually broadcasts the anchor vertex).
Even though timeouts are unavoidable in the worst case, we observe that the DAG construction combined with the leader reputation mechanism allows avoiding them in vast majority of cases in practice.
This is in contrast to leader-based monolithic consensus protocols, where timeouts are the only tool to bypass the rounds with bad leaders.
Without timeouts, a monolithic protocol could stall forever as there is no other mechanism to stop waiting for a crashed leader.
It is also hard to set the timeouts appropriately: conservative timeouts lead to excessive waiting for crashed leaders, while aggressive timeouts lead to bypassing slower validators (and hence unnecessarily failed rounds).
In contrast, the DAG construction provides a "clock" that estimates the network speed.
Even without timeouts, the rounds keep advancing as long as 2f+1 honest validators continue to add their vertices to the DAG.
As a result, the DAG can evolve despite some leaders being faulty.
Eventually, when a non-faulty leader is fast enough to broadcast the anchor, the ordering will also make progress.
Recall that to be ordered, in partially synchronous Bullshark, an anchor needs f+1 votes (links) out of the 3f+1 vertices.
Therefore, as our evaluation demonstrates, in the failure-free case, most of the anchors are ordered in the next round.
The benefits are even more pronounced when there are failures.
This is because a crashed validator causes a timeout to expire, stalling the protocol for the entire timeout duration.
Without a timer, however, the DAG will advance rounds at network speed and the Bullshark protocol is able to immediately move to the next anchor.
Timeouts as a fallback
By the FLP <cit.> impossibility result, there exists an adversarial schedule of events that can prevent all anchors from getting enough votes to be ordered.
This scenario is extremely unlikely to occur in practice, but to be on the safe side, the protocol can deal with it by falling back to using timeouts after a certain amount of consecutive skipped anchors.
ยง.ยง Shoal of Bullsharks
A realistic case in which timeouts can help the performance of a Narwhal-based consensus protocol is when the leader is slower than other validators.
Then, as discussed earlier, waiting for an anchor to be delivered even after 2f+1 other vertices can allow the anchor to be committed in the next round.
While we eliminated timeouts from partially synchronous Bullshark, note that, due to the leader reputation mechanism, Shoal instantiated with Bullshark does better than repeatedly waiting for slow leaders.
Instead, the leader reputation mechanism excludes (or at least significantly reduces the chances of) slow validators from being selected as leaders.
This way, the system takes advantage of the fast validators to operate at network speed.
Prevalent Responsiveness
Shoal provides network-speed responsiveness under all realistic failure and network scenarios, a property we name prevalent responsiveness.
Specifically, compared to optimistic responsiveness, Shoal continues to operate at network speed even during asynchronous periods or if leaders fail for a configurable number of consecutive rounds.
We implemented leader reputation and pipelining on top of the Baseline Bullshark and compared it to the baseline (no timeouts) implementation.
Leader reputation logic
As explained in Section <ref>, Shoal ensures that all validators agree on the information used to evaluate recent activity and to bias the leader selection process accordingly towards healthier validators.
Any deterministic rule to determine the mapping from rounds to leaders (i.e. the logic in pseudocode Lineย <ref> in Algorithmย <ref>) based on this shared and agreed upon information would satisfy the correctness requirements.
Next, we discuss the specific logic used in our implementation.
At any time each validator is assigned either a high or a low score, and all validators start with a high score.
After ordering an anchor v, each validator examines v's causal history H.
Every skipped anchor in H is (re-)assigned a low score, and every ordered anchor in H is (re-)assigned a high score.
Then, the new sequence of anchors is pseudo-randomly chosen based on the scores, with a validator with a high score more likely to be a leader in any given round.
Note that while the validators use the same pseudo-randomness (so that they agree on the anchors), the computation is performed locally without extra communication.
Assigning higher scores to validators whose anchors get ordered ensures that future anchors correspond to faster validators, thus increasing their probability to be ordered.
However, we ensure that the low score is non-zero, and thus underperforming validators also get a chance to be leaders. This crucially gives a temporarily crashed or underperforming validator a chance to recover its reputation.
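A sketch of the reputation logic just described, in the same illustrative Rust notation; the concrete weights and the xorshift generator are stand-ins for the shared pseudo-randomness, not the production parameters.

use std::collections::HashMap;

#[derive(Clone, Copy)]
enum Score { High, Low }

// Re-score leaders based on the ordered anchor's causal history: leaders of
// ordered anchors get High, leaders of skipped anchors get Low.
fn update_scores(
    scores: &mut HashMap<ValidatorId, Score>,
    ordered_anchor_leaders: &[ValidatorId],
    skipped_anchor_leaders: &[ValidatorId],
) {
    for &v in ordered_anchor_leaders { scores.insert(v, Score::High); }
    for &v in skipped_anchor_leaders { scores.insert(v, Score::Low); }
}

// Choose the next sequence of leaders, biased towards High scores but with a
// non-zero weight for Low scores so that a recovered validator can regain its
// reputation. All validators derive the seed from agreed state, hence they
// compute the same leaders without any extra communication.
fn select_leaders(scores: &HashMap<ValidatorId, Score>, seed: u64, rounds: usize) -> Vec<ValidatorId> {
    let weight = |s: Score| match s { Score::High => 10u64, Score::Low => 1u64 };
    let mut validators: Vec<ValidatorId> = scores.keys().copied().collect();
    validators.sort_unstable(); // identical iteration order on every validator
    let total: u64 = validators.iter().map(|v| weight(scores[v])).sum();
    let mut leaders = Vec::with_capacity(rounds);
    let mut state = seed | 1; // simple deterministic xorshift PRNG
    for _ in 0..rounds {
        state ^= state << 13; state ^= state >> 7; state ^= state << 17;
        let mut pick = state % total;
        for &v in &validators {
            let w = weight(scores[&v]);
            if pick < w { leaders.push(v); break; }
            pick -= w;
        }
    }
    leaders
}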
ยง EVALUATION
We evaluated the performance of the aforementioned variants of Bullshark and of Shoal in a geo-replicated environment on Google Cloud.
In order to show the improvements from pipelining and leader reputation independently, we also evaluate PL, a Shoal instantiation with only pipelining enabled, and LR, a Shoal instantiation with only leader reputation enabled.
With our evaluation, we aim to show that
(i) Shoal maintains the same throughput guarantees as Bullshark.
(ii) Shoal can provide significantly lower latency than Bullshark and its variants.
(iii) Shoal is more robust to failures and can improve latency with the help of leader reputation.
For completeness, we also compare against Jolteonย <cit.>, which is the current consensus protocol of the production system we use.
Jolteon combines the linear fast path of Tendermint/Hotstuff with a PBFT style view-change, and as a result, reduces Hotstuff latency by 33%.
The implementation extends the original Jolteon protocol with a leader reputation mechanism, which prioritizes well-behaved leaders from previous rounds for future rounds.
In addition, to mitigate the leader bottleneck and support high throughput, the implementation uses the Narwhal technique to decouple data dissemination via a pre-step component (called Quorum Storeย <cit.>).
We evaluate prevalent responsiveness by presenting experiments that compare the variants of Bullshark without timeouts in different rounds against Shoal, as discussed in Section <ref>.
Experimental Setup.
Our experimental setup consists of virtual machines spread equally across three Google Cloud regions: us-west1, europe-west4, and asia-east1.
Each virtual machine has 32 vCPUs, 128GB of memory, and can provide up to 10Gbps of network bandwidth.
The round-trip latencies are:
118ms between us-west1 and asia-east1,
251ms between europe-west4 and asia-east1,
and 133ms between us-west1 and europe-west4.
The experiments involve three different values of N (the number of validators): 10, 20, and 50, tolerating up to 3, 6, and 16 failures, respectively.
We only measure the consensus performance to avoid introducing noise from other parts of the production system, such as execution and storage.
The transactions are approximately 270B in size. We set a maximum batch size of 5000 transactions.
In our experiments, we measure latency as the time elapsed from when a vertex is created from a batch of client transactions to when it is ordered by a validator. The timeouts for moving to the next round, when applicable, are set to 1s, which is less than the 1.5s timeout used by the production Blockchain system we use.
ยง.ยง Baseline Performance
First, we evaluate the performance of the Bullshark variants, namely Vanilla Bullshark, Vanilla Bullshark w/ Anchor Timeouts, and Baseline Bullshark, to align on a baseline against which to evaluate Shoal in the rest of the experiments. The results are in Figures <ref> and <ref>.
Figure <ref> shows the throughput and average latencies of the three Bullshark variants as the system size increases. The presence of timeouts in Vanilla Bullshark forces it to build the DAG slowly, which, combined with the fact that fewer validators contribute vertices to the DAG when N=10, results in lower throughput than the other variants, which have fewer or no timeouts. The latencies for Vanilla Bullshark are up to 88% higher due to the timeouts. Interestingly, the latencies are similar for Baseline Bullshark and Vanilla Bullshark w/o vote timeout in the normal case, because there is a trade-off between building the DAG at network speed while skipping an anchor and waiting slightly longer for the anchor to be part of the votes.
We also evaluated the vanilla variants and the baseline for N=50 while varying the number of failures, in Figure <ref>. We observe that Baseline Bullshark provides lower latency than the other variants by virtue of being able to build the DAG at network speed, skipping failed anchors and ordering using the live ones. Therefore, in the rest of the section, we use Baseline Bullshark as the baseline against which to evaluate Shoal.
ยง.ยง Performance of Shoal under the fault-free case
We now evaluate the Shoal variants against the baseline in the normal case where there are no failures. The results are in Figure <ref>. As expected, the throughput of the variants is similar as the number of validators increases. It can be observed that each Shoal variant decreases the latency, leading up to the full Shoal protocol. In summary, we observe that Shoal's average latency decreases by up to 20% compared to Baseline Bullshark.
On the other hand, Jolteon <cit.>, despite its use of Narwhal's data-dissemination decoupling, is only able to achieve a peak throughput of less than 60k, about 40% lower than Shoal.
This is because under high load the leaders become the bottleneck again, as they are not able to handle the required network bandwidth and, as a result, are unable to drive progress before timeouts expire.
Furthermore, in terms of latency, Jolteon is ∼50% better than Vanilla Bullshark, but only ∼20% better than Shoal.
Note that the latencies presented do not include the pre-step Quorum Store's latencies, because all the compared protocols include this optimization. However, in the case of Shoal, this latency can be avoided by merging Quorum Store into the DAG construction, as done in Narwhal, which would further close the latency gap to Jolteon.
In Figures <ref> and <ref>, we distinguish the latencies of transactions in vote-round vertices from those in anchor-round vertices, in order to show the effect of the pipelining approach.
The vote- and anchor-round latencies for PL, as well as for our protocol, are similar, which helps provide predictable and smooth latency for transactions in real production systems. In contrast, the vote- and anchor-round latencies for Baseline Bullshark and LR differ by 5-20% depending on the number of failures.
ยง.ยง Performance under faults
Figure <ref> shows the behavior of the baseline and our variants under faults. For this experiment, N=50 and the number of failures is increased from 4 to 16 (the maximum tolerated). This is the case where the Leader Reputation mechanism helps to improve the latency significantly by reducing the likelihood of failed validators being anchors.
Notice that without Leader Reputation, the latencies of Baseline Bullshark and PL increase significantly as the number of failures increases. Our protocol provides up to 65% lower latencies than Baseline Bullshark under failures.
Figure <ref> shows the impact of skipping leaders on the latency by comparing Vanilla Bullshark with our protocol on a timeline plot under failures. We have a system of 50 validators, 8 of which have failed. The x-axis represents a part of the experiment time window and the y-axis shows the latency.
The presence of timeouts and the need to skip anchors causes Vanilla Bullshark's latency to fluctuate. In our experiment, we observed latency jitter of approximately one second, which makes it impossible to provide predictable latency in production systems. In contrast, our protocol maintains consistently low latency without any jitter.
ยง.ยง Summary
In contrast to Vanilla Bullshark, our protocol provides up to 40% lower latency in the fault-free case and up to 80% lower latency under failures. Furthermore, we show that it provides predictable latency and is able to commit at network speed in most cases, without waiting for timeouts.
ยง RELATED WORK
ยง.ยง BFT systems for Blockchains
Byzantine fault tolerance (BFT) has been an active area of research for over four decades, with a significant body of literature in both theoryย <cit.> and systemsย <cit.>. With the advent of Blockchain systems in recent years, the focus on performance and scalability has notably increased.
Initial efforts to enhance throughput and scalability attempted to reduce the communication complexity of leader-based eventually synchronous protocols. This resulted in a considerable body of work aiming to achieve communication complexity linear in the number of validators <cit.>.
Despite a sound theoretical premise, the practical implications arguably fell short of expectations.
An independent evaluation and comparison conducted byย <cit.> revealed that the well-known HotStuffย <cit.> protocol achieved a throughput of only 3,500 TPS on a geo-replicated network.
The practical breakthrough occurred a few years later with the realization that the main bottleneck in BFT systems, particularly those relying on leaders, is data dissemination. Mir-BFTย <cit.> introduced an innovative approach by running multiple PBFTย <cit.> instances in parallel.
Independently, Narwhalย <cit.> and later Dispersedledgerย <cit.> decoupled data dissemination from the consensus logic. These advancements showcased impressive results, with Narwhal achieving a peak throughput of 160,000 TPS.
There has been both systems <cit.> and theoretical <cit.> research on asynchronous BFT protocols. However, to the best of our knowledge, no asynchronous protocol is deployed in production in an industrial system.
Another appealing property of Narwhal is the support of a partially synchronousย <cit.> as well as asynchronousย <cit.> (as long as randomness is available) protocols, and the ability to easily switch among them.
ยง.ยง Timeouts and responsiveness
The FLPย <cit.> impossibility result states that there is no deterministic consensus protocol that can tolerate a fully asynchronous network.
The proof relies on the fact that it is impossible to distinguish between crashed and slow validators during asynchronous periods.
The immediate application to partially synchronous networks, therefore, is that all deterministic protocols must rely on timeouts in some way to guarantee liveness against a worst-case adversary.
Indeed, to the best of our knowledge, all previous deterministic BFT protocols, including the partially synchronous version of Bullsharkย <cit.>, relied on timeouts to implement a simple version of a failure detectorย <cit.>.
This mechanism monitors the leaders and triggers view-changes when timeouts expire, i.e. when faults are suspected.
The optimistic responsiveness property, popularized by HotStuffย <cit.>, avoids timeouts in the best-case failure-free scenario.
However, when failures do occur, all validators wait until the timeout expires before view-changing to the next leader, introducing a significant slowdown in the protocol execution.
Moreover, as discussed in Sectionย <ref>, setting a proper timeout duration is a non-trivial problem in its own right.
Our protocol provides prevalent responsiveness, which is a strictly better property than optimistic responsiveness, as it guarantees network-speed progress in the case of healthy leaders and zero delays in the case of failures.
It achieves this by relying on the network-speed "clock" inherent in the DAG construction itself <cit.>, combined with the leader reputation mechanism.
While, due to the FLP result, the worst case in which a timeout is required to maintain liveness cannot be completely eliminated, our protocol relegates such cases to specific scenarios that are extremely uncommon from a practical point of view (multiple consecutive unordered anchors).
ยง.ยง DAG-based BFT
DAG-based consensus in the context of BFT was first proposed by HashGraphย <cit.>. The idea is to separate the network communication layer, i.e. efficiently constructing a system that forms a DAG of messages, and the consensus logic that can involve complex pieces such as view-change and view-synchronization.
The consensus logic is performed locally, whereby a validator examines its local view of the DAG and orders the vertices without sending any messages.
The challenge arises from the asynchronous nature of the network, which may cause different validators to observe slightly different portions of the DAG. To address this, the DAG structure is interpreted as a consensus protocol, wherein a vertex represents a proposal and an edge represents a vote.
Alephย <cit.> introduced a round-based DAG structure. Such a structure simplifies support for garbage collection and non-equivocation, which in turn simplifies the consensus logic to order the vertices.
Narwhal implements round-based DAG, and three Narwhal-based consensus protocols have been previously proposed. The first is DAG-Riderย <cit.>, which introduced a quantum-safe asynchronous protocol with optimal amortized communication complexity and O(1) latency. Tuskย <cit.> improved latency in the best case. An asynchronous version of Bullsharkย <cit.> includes a fast pathย <cit.>, while a stand-alone partially synchronous protocolย <cit.> also exists and is currently deployed in production in Suiย <cit.>. presents a framework that applies to all Narwhal-based protocols, enhancing their latency through a more efficient ordering rule and a leader reputation mechanism.
An orthogonal theoretical effortย <cit.> trades off the non-equivocation property of the DAG construction (which typically requires reliable broadcast), as well as the separation from the consensus logic, in order to reduce latency.
ยง.ยง Pipelining
To the best of our knowledge, pipelining in the BFT context was first proposed by Tendermintย <cit.>, and later utilized in HotStuffย <cit.> and Diemย <cit.>.
State machine replication (SMR) systems can be constructed from multiple instances of single-shot consensusย <cit.>, e.g. one approach to build Byzantine SMR is by running a PBFT instanceย <cit.> for each slot.
Tendermint introduced the elegant idea of chaining proposals or piggybacking single-shot instances such that a value for a new slot could be proposed before the value for the previous slot was committed. In this approach, a message in the i^th round of the k^th instance can be interpreted as a message in round i-1 of instance k+1. While the latency for each instance remains unchanged, clients experience improved latency as their transactions can be proposed earlier.
In DAG-based consensus, the concept of piggybacking proposals is inherent in the design, as each vertex in the DAG links to vertices in previous rounds. However, previous protocols did not allow having an anchor in every round.
Our framework supports having an anchor in each round in the good case for any Narwhal-based protocol, providing a "pipelining effect".
ยง.ยง Leader reputation
Leader reputation is often overlooked in theory, yet it plays a crucial role in performance in practice.
While Byzantine failures are rare as validators are highly protected, isolated, and economically incentivized to follow the protocol, more common are validators that are unresponsive.
This may be because they have temporarily crashed, are running slow hardware, or are simply located farther away.
If a leader/anchor election is done naively, unresponsive validators will unavoidably stall progress and lead to significant performance impact.
A practical approach, implemented in Diemย <cit.> and formalized inย <cit.>, is to exclude underperforming validators from leader election. This is achieved by updating the set of candidates after every committed block based on the recent activity of validators. In a chained protocol, if all validators observe the same committed block, they can deterministically elect future leaders based on the information in the chain. However, in some cases, certain validators may see a commit certificate for a block earlier than others. This can lead to disagreements among validators regarding the list of next leaders, causing a temporary loss of liveness.
For DAG-based protocols, disagreements on the identity of round leaders can lead the validators to order the DAG completely differently. This poses a challenge for implementing leader reputation on the DAG. As evidence, a Narwhal and Bullshark implementation currently deployed in production in Sui blockchain does not support such a featureย [github.com/MystenLabs/sui/blob/main/narwhal/consensus/src/bullshark.rs]. enables leader reputation in Narwhal-based BFT protocols without any additional overhead.
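To illustrate the idea, the following is a minimal Python sketch with names of our own choosing (elect_anchor, recent_activity); it is not the API of Diem, Narwhal, or any other system cited above. The essential requirement is that the anchor schedule is a deterministic function of data that all honest validators have already committed.

```python
def elect_anchor(round_num, validators, recent_activity):
    """Pick the anchor for `round_num` based on reputation.

    `recent_activity` maps a validator id to the number of recent committed
    rounds in which that validator participated. It must be derived only from
    the committed prefix, so every honest validator computes the same mapping
    and therefore the same schedule.
    """
    candidates = [v for v in validators if recent_activity.get(v, 0) > 0]
    if not candidates:
        # Fall back to the full validator set if nobody looks active.
        candidates = list(validators)
    # Deterministic round-robin over the reputable candidates.
    return sorted(candidates)[round_num % len(candidates)]
```

The sorting and the fallback exist only to keep the choice deterministic: any two validators that agree on the committed prefix elect the same anchor, which is exactly what is difficult to guarantee when commit certificates are observed at different times.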
ยง DISCUSSION
Our framework can be instantiated with any Narwhal-based consensus protocol, and can even switch between protocols during the DAG retrospective re-interpretation step.
It uniformizes the latency and throughput across the validators and eliminates the use of timeouts except in very rare cases, which contributes to the robustness and performance of the system.
Predictable and smooth latency and throughput patterns have major practical benefits for real systems. It facilitates setting up effective monitoring and alerts for anomaly detection. This is crucial for ensuring security and quality of service by enabling timely response and any intervention necessary, be it manual or automated. Predictable consensus throughput also facilitates pipelining the ordering of transactions with other components of the Blockchain, e.g. transaction execution and commit.
Our protocol satisfies the property we name prevalent responsiveness, ensuring that the worst-case executions that must use timeouts due to the FLP impossibility result are aligned with scenarios that are improbable from a practical standpoint. Moreover, the timeout-free design plays into the strengths of the leader reputation mechanism and, as a result, provides further latency improvements.
ยง MULTIPLE ANCHORS PER ROUND
With pipelining, our framework introduces an anchor in every round.
As a result, in the best case, each anchor requires 2 rounds to commit, while non-anchor vertices require 3 rounds.
Next, we present an approach to further optimize the latency for non-anchor vertices, which relies on retrospectively re-interpreting the DAG structure.
We could envision a protocol in which we iterate over more than one vertex in each round in a deterministic order and treat each vertex as an anchor.
More specifically, for a vertex v in round r, we consider executing an instance of the underlying Narwhal-based consensus protocol ๐ซ (i.e., DAG-Rider, Tusk, and Bullshark) starting from round r with v being the first anchor.
This involves re-interpreting the existing DAG structure, and potentially letting it evolve, until a decision of whether v is ordered or skipped is locally made.
If v is ordered by ๐ซ, then the causal history of v followed by v is added to the ordering determined by the new protocol. Otherwise, v is skipped and the protocol proceeds to considering a new instantiation of ๐ซ from the next potential anchor (which may be in the same round).
A pseudocode in which all vertices are considered as anchors appears in Algorithmย <ref>.
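As an illustration only, the loop described above can be sketched in Python as follows. The names dag, decide, and causal_history are our own placeholders rather than the notation of Algorithm <ref>; decide(dag, v) stands for running the underlying Narwhal-based protocol ๐ซ from v's round with v as the first anchor, returning whether v is ordered or skipped.

```python
def order_all_vertices(dag, decide):
    """Order the DAG treating every vertex, in a deterministic order, as a
    potential anchor; commit each ordered anchor's causal history before it."""
    ordered, committed = [], set()
    for r in sorted(dag.rounds()):
        # Potential anchors are considered in an agreed-upon deterministic order.
        for v in sorted(dag.vertices_in_round(r)):
            if not decide(dag, v):
                continue  # v is skipped; move on to the next potential anchor
            for u in list(dag.causal_history(v)) + [v]:
                if u not in committed:  # skip vertices ordered by earlier anchors
                    committed.add(u)
                    ordered.append(u)
    return ordered
```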
In the good case, each vertex that is considered as an anchor can be ordered in 2 rounds.
However, the drawback of this approach is that if some validators are slow and a potential anchor takes many rounds to decide whether to skip or order, the progress of the whole protocol will be stalled.
This happens because potential anchor vertices must be considered in an agreed-upon and deterministic order. As a result, a vertex that necessitates more rounds incurs a latency penalty for the subsequent vertices.
The above issue can potentially be mitigated by combining it with a leader reputation mechanism to select the vertices that are considered as potential anchors, making the bad case delays less likely.
The other vertices can be ordered based on causal history as previously.
|
http://arxiv.org/abs/2306.09861v1
|
20230616141829
|
Astrophysical Uncertainties in the Gravitational-Wave Background from Stellar-Mass Compact Binary Mergers
|
[
"Leonard Lehoucq",
"Irina Dvorkin",
"Rahul Srinivasan",
"Clement Pellouin",
"Astrid Lamberts"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] | |
http://arxiv.org/abs/2306.05588v2
|
20230608231825
|
An Improved Algorithm for Finding Maximum Outerplanar Subgraphs
|
[
"Gruia Calinescu",
"Hemanshu Kaul",
"Bahareh Kudarzi"
] |
cs.DS
|
[
"cs.DS",
"math.CO"
] |
An Improved Algorithm for Finding Maximum Outerplanar Subgraphs
Gruia CalinescuDepartment of Computer Science, Illinois Institute of Technology, Chicago, IL 60616. E-mail: [email protected] Hemanshu Kaul Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616. E-mail: [email protected] Bahareh KudarziDepartment of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616. E-mail: [email protected]
July 31, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================
We study the NP-complete Maximum Outerplanar Subgraph problem. The previous best known approximation ratio for this problem is 2/3. We propose a new approximation algorithm which improves the ratio to 7/10.
Keywords. Outerplanar graph, Maximum subgraph, Approximation algorithm, Matroid parity.
ยง INTRODUCTION
A graph is planar if it can be drawn in the 2-dimensional plane such that no two edges meet in a point other than a common end. A planar graph is outerplanar if it can be drawn in a way in which every vertex lies on the boundary of the same connected region of the plane. Let G be a simple graph, and G' be a subgraph of G. We say that G' is a maximum outerplanar subgraph of G if it is outerplanar and there is no outerplanar subgraph G'' of G such that |E(G'')|>|E(G')|. Given a graph G, finding an
outerplanar subgraph of G with the maximum number of edges is called the Maximum Outerplanar Subgraph problem.
While outerplanar graphs have been investigated in-depth for their applications <cit.> and theoretical properties <cit.>, the problem of finding large outerplanar subgraphs of a graph has not been studied as much.
Most problems which are NP-hard on arbitrary graphs become polynomial on outerplanar graphs <cit.>.
An in-depth exploration of practical algorithms for
Maximum Outerplanar Subgraph was done by Poranen
<cit.>.
His main experimental result is that simulated annealing with initial solution taken from the greedy triangular
cactus approximation algorithm (which we discuss below) yields the best known heuristic for the Maximum Outerplanar Subgraph problem.
The Maximum Outerplanar Subgraph problem is NP-hard <cit.>. Hence, instead of solving the problem exactly, we use polynomial-time approximation algorithms.
An algorithm's approximation ratio is the worst-case ratio between the result obtained by the algorithm and the optimal solution.
For maximization problems, the approximation ratio is smaller than 1, and the closer we are to 1, the better.
Previous Work.
By constructing a spanning tree of a given graph (recall, a tree is an outerplanar graph), one obtains an approximation ratio for the Maximum Outerplanar Subgraph problem of 1/2 <cit.>.
The approximation ratio was improved by using the concept of triangular cactus, discussed in Section <ref> and depicted in Figure <ref>.
The greedy triangular cactus approximation algorithm
from <cit.>
results in a non-trivial approximation ratio of 7/12.
Furthermore, <cit.> used the fact that
computing a triangular cactus with maximum number of
triangles can be done in polynomial time
based on Matroid Parity <cit.>
to obtain a 2/3 approximation ratio.
Utilizing an outerplanarity testing algorithm <cit.>, a greedy technique to approximate the maximum outerplanar subgraph involves adding edges to an outerplanar subgraph as long as the subgraph stays outerplanar.
Cimikowski and Coppersmith <cit.> showed that after constructing the spanning tree, using this greedy technique does not improve the approximation ratio of 1/2.
Poranen <cit.> claimed the same for the 7/12 ratio obtained by the greedy triangular cactus. There are examples showing that this
holds for the 2/3 approximation as well.
All these bounds assume that the greedy technique adds the worst possible edge if it has a choice. In practice, this greedy technique is effective
<cit.>.
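For reference, the greedy technique discussed above can be sketched as follows. This is a minimal Python sketch with names of our own choosing; the outerplanarity test is supplied as a predicate (for instance one of the linear-time recognition algorithms cited above), and the iteration order of candidate_edges determines which edge the greedy picks when it has a choice.

```python
def greedy_augment(candidate_edges, initial_edges, is_outerplanar):
    """Add edges one at a time as long as the subgraph stays outerplanar.

    `is_outerplanar` is a user-supplied predicate taking a set of edges.
    """
    current = set(initial_edges)
    for e in candidate_edges:
        if e not in current and is_outerplanar(current | {e}):
            current.add(e)
    return current
```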
Related Work.
The Maximum Induced Outerplanar Subgraph problem is the task of finding the size of the largest subset of vertices in a graph that induces an outerplanar
subgraph. This problem is known to be NP-hard <cit.> and does not admit good approximations <cit.>. Morgan and Farr <cit.>
presented an efficient algorithm that
finds an induced outerplanar subgraph with at least 3n/(d + 5/3) vertices
for graphs of n vertices with maximum degree at most d.
Donkers et al. <cit.>
study the Outerplanar Deletion problem, in which one wants to remove at most
k vertices from a graph to make it outerplanar,
and showed that this problem is fixed-parameter tractable.
Another related problem is
Maximum Planar Subgraph, in which one wants to find a
planar subgraph of the input graph with the maximum number of edges.
This problem is also known to be NP-hard <cit.>, and it has many applications
(see e.g. <cit.>).
For this problem, constructing a spanning tree of a given graph
gives an approximation ratio of 1/3, the greedy triangular cactus approximation algorithm has ratio 7/18, and the best known approximation is 4/9 and
is obtained by computing a triangular cactus with maximum number of
triangles <cit.>.
Maximum Weight Outerplanar Subgraph, in which one wants to find an
outerplanar subgraph with the maximum weight of the input edge-weighted graph, admits an approximation ratio of 1/2 by considering the maximum weight spanning tree. This has been improved by <cit.> to 7/12, and by combining Theorem 29 of
<cit.> with the recent algorithm for
Weighted Matroid Parity <cit.>, one
immediately obtains a (2/3)-approximation algorithm.
A (2/3)-approximation was also claimed in the Master thesis of Osipov <cit.>.
In this paper, we present the following result.
There exists a polynomial-time (7/10)-approximation algorithm for the Maximum Outerplanar Subgraph problem.
Our algorithm has a greedy phase of adding appropriate induced 4-cycles after the main phase of the
(2/3)-approximation of <cit.>. While the algorithm is very simple (except for the Matroid Parity part already used by the
previous best work), the tight analysis we provide is nice and elementary, and may have wider applications.
Applying a greedy method after matching methods (Matroid Parity is an
extension of graph matching) is new to us; applying matching methods
after greedy methods was done before in
<cit.>, while <cit.> combines local improvement with matching.
In Section <ref> we introduce further definitions and notation.
Our algorithm and the proof of Theorem <ref> appears in Section
<ref>. In Section <ref> we discuss some limitations of our method and conclude with some open questions.
ยง PRELIMINARIES
In this paper all graphs are nonempty, finite, simple graphs unless otherwise noted. Generally speaking we follow Westย <cit.> for terminology and notation.
Given a graph G=(V,E), V'โ V, and E'โ E, we denote by G[V'], the
induced subgraph of G with vertex set given by V', and we denote by G[E'] = (V,E'), the
spanning subgraph of G with edge set given by E'.
A triangle in a graph is C_3, a cycle of length three and a square in a graph is an induced C_4, a cycle of length four.
A plane graph is defined as an embedding of a planar graph with a mapping from every vertex to a point in the
2-dimensional plane, and from every edge to a curve on the plane, such that the extreme points of each curve are the points mapped from its end vertices, and all curves are disjoint except on their extreme points.
The plane graph divides the plane into a set of connected regions, called faces. Each face is bounded by a closed walk called the boundary of the face. The outer face is the unbounded region outside the whole embedding.
An outerplane graph is a plane graph where every vertex is mapped to a point on the boundary of the outer face.
Any other face is called an inner face of the outerplane graph.
An outeredge in an outerplane graph is an edge which is in the boundary of the outer face and an inneredge in an outerplane graph is an edge which is not an outeredge.
The boundary of the outer face of a biconnected outerplane graph is a cycle.
An inner triangle is a triangle that is the boundary of an inner face.
An inner square is a square that is the boundary of an inner face.
Let H be a biconnected outerplane graph and uv be an edge of H and wโ V(H)\{u,v}. Consider the path P from u to v using only outeredges of H that does not contain w and let H_1=H[V(P)]. We say that the subgraph H_1 of H is split from w by uv. As an example, in Figureย <ref>,
H_1 is split from y by xz. Note that if uv is an outeredge, then H_1 will be the graph ({u,v},{uv}).
A triangular cactus is a graph whose cycles (if any) are triangles and
such that all edges appear in some cycle. A triangular cactus in a graph
G is a subgraph of G which is a triangular cactus. A triangular cactus in a graph G is maximum if it has maximum number of triangles.
A square-triangular cactus is a graph whose cycles (if any) are triangles or squares and
such that all edges appear in some cycle. A square-triangular cactus in a graph
G is a subgraph of G which is a square-triangular cactus.
A square-triangular structure is a graph whose cycles (if any) are triangles or squares.
Note that every square-triangular cactus is a
square-triangular structure, but not vice versa. See Figureย <ref>.
ยง THE APPROXIMATION ALGORITHM
In this section we describe our algorithm, which we call STS for square-triangular structure, and give its approximation ratio analysis.
Input to the algorithm below is a graph G.
Algorithm STS
The algorithm has three phases, as numbered below:
1. Find M_0, a maximum triangular cactus in G.
2. Starting with E_1=E(M_0), repeatedly (as long as possible) find a square S whose vertices are in different components of G[E_1], and add the edges of S to E_1. Let M_1 := G[E_1].
3. Starting with E_2=E(M_1), repeatedly (as long as possible) find an edge e in G whose endpoints are in different components of G[E_2], and add e to E_2.
Output G[E_2].
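For concreteness, the following is a minimal Python sketch of Phases 2 and 3; function and variable names are our own. The maximum triangular cactus of Phase 1 is assumed to be computed separately (via the Graphic Matroid Parity reduction discussed below), edges are treated as ordered pairs for simplicity, and the square search is the straightforward brute-force enumeration rather than an optimized one.

```python
from itertools import combinations

class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def sts_phases_2_and_3(vertices, edges, cactus_edges):
    """Phases 2 and 3 of Algorithm STS, given the Phase 1 cactus edges."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    selected = set(cactus_edges)
    uf = UnionFind(vertices)
    for u, v in cactus_edges:
        uf.union(u, v)

    # Phase 2: greedily add squares (induced 4-cycles of G) whose four vertices
    # lie in four different components of the current structure.
    changed = True
    while changed:
        changed = False
        for quad in combinations(vertices, 4):
            if len({uf.find(x) for x in quad}) < 4:
                continue
            a, b, c, d = quad
            # The three possible 4-cycles on {a, b, c, d}.
            for w, x, y, z in [(a, b, c, d), (a, b, d, c), (a, c, b, d)]:
                cycle = [(w, x), (x, y), (y, z), (z, w)]
                chords = [(w, y), (x, z)]
                if all(q in adj[p] for p, q in cycle) and not any(q in adj[p] for p, q in chords):
                    selected.update(cycle)
                    for p, q in cycle:
                        uf.union(p, q)
                    changed = True
                    break
            if changed:
                break

    # Phase 3: connect the remaining components with arbitrary edges of G.
    for u, v in edges:
        if uf.find(u) != uf.find(v):
            selected.add((u, v))
            uf.union(u, v)
    return selected
```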
Algorithm STS produces a square-triangular structure in the given graph G. This is indeed an outerplanar graph.
The (2/3)-approximation algorithm of <cit.> has only two phases corresponding to our Phase 1 and Phase 3.
Phase 1 of the algorithm can be implemented to run
in polynomial time as explained in <cit.>.
This is done by an immediate reduction to
the Graphic Matroid Parity problem, for which
<cit.> provide polynomial-time algorithms.
Phase 2 can be implemented in time O(n^4) as we try all
possible subsets of four vertices to see if they form a
square that can be added. Within the same time bounds,
we also maintain the connected components
of G[E_1] - these components need to be updated only O(n) times.
To complete the proof of Theoremย <ref>, it only remains to show the approximation ratio of 7/10.
ยง.ยง Approximation Ratio Analysis
To establish the approximation ratio, we first prove a lemma about the structure of biconnected outerplanar graphs. This is a generalization of Lemma 3.1 of <cit.>.
Let H be a biconnected outerplane graph. Suppose that H has t inner triangles.
Then, the following holds:
* If t is even, then for every outeredge xy in H, there is a triangular cactus C in H with at least t/2 triangles such that x and y are in different components of C.
* If t is odd, then there is a triangular cactus C in H with at least ⌈t/2⌉ triangles, and for every outeredge xy in H, there is a triangular cactus C in H with at least ⌊t/2⌋ triangles such that x and y are in different components of C.
This proof uses ideas from the proof of
Lemma 3.1 of <cit.>, with many extra cases.
We will prove this by induction on n+t, where n is the number of vertices of H and t is the number of inner triangles of H.
Note that n is at least 3.
Before we prove the base cases, we note that if a graph H' consists of only one edge xy and therefore has t=0 inner triangles, then there is a triangular cactus in H' which has r=0 ≥ t/2 triangles and the vertices x and y are in different components of the triangular cactus. The same argument applies when the graph H is biconnected with n ≥ 3 but has zero inner triangles.
The case n+t=3 cannot happen since if n=3 then the biconnected H has
an inner triangle.
Now let n+t=4. By Fact <ref> we can assume that H has one inner triangle uvw. Now C=H[{u,v,w}] is a triangular cactus with r=1 triangle and we have r ≥ ⌈t/2⌉ = 1. For the outeredge xy, the subgraph C=∅ is a triangular cactus in H with r=0 such that x and y are not in the same component of C. Then we have r ≥ ⌊t/2⌋ = 0.
Now assume that the statements are true for any n' and t' such that n'+t'<n+t.
We will prove the statements for n+t.
In the first part of the proof, we assume that t is even.
Let f be the inner face such that its boundary contains the edge xy. Note that f is not all of H because in that case H must have less than two inner triangles. There are two cases: the boundary of f is a triangle or not.
Case 1: Suppose that the boundary of f is a triangle consisting of vertices x, y, z.
Let H_1 be the subgraph of H that is split from vertex y by edge xz and H_2 be the subgraph that is split from vertex x by edge yz. See Figure <ref>. H_1 ∪ H_2 has t-1 inner triangles. Since t is even, WLOG, let H_1 have an even number of inner triangles t_1 and H_2 have an odd number of inner triangles t_2.
By applying the induction hypothesis or Fact <ref> to H_1 with outeredge xz, there exists a triangular cactus C_1 with r_1 ≥ t_1/2 triangles such that vertices x and z are in different components of C_1. By applying the induction hypothesis to H_2, there exists a triangular cactus C_2 with r_2 ≥ ⌈t_2/2⌉ triangles. Since x and z are in different components of C_1, we can conclude that C_1 ∪ C_2 is a triangular cactus in which vertices x and y are in different components and which has r triangles, where r = r_1 + r_2 ≥ t_1/2 + ⌈t_2/2⌉ = t/2.
Case 2: Suppose that the boundary of f is not a triangle.
If t=0, using Factย <ref> we are done. From now on in this case we assume that t>0.
Consider all of the inneredges of f and name them e_1, e_2, ..., e_k. Note that since t>0, we have k>0. For each i = 1,...,k, assume that e_i = v_i_1 v_i_2 and let vertex v'_i ∈ V(f) ∖ {v_i_1, v_i_2}. Then, let H_i be the subgraph which is split from vertex v'_i by edge e_i.
See Figureย <ref> for an example.
Assume that H_i has t_i inner triangles.
For graph H_i, we have t_i + n_i < t + n for each i ∈ [k].
If all t_i are even, by applying the induction hypothesis to each H_i, we get that there exists a triangular cactus C_i in which v_i_1 and v_i_2 are in different components of C_i with r_i ≥ t_i/2 triangles. Now consider the subgraph C = ∪_{i=1}^{k} C_i, which is a triangular cactus, because the vertices v_i_1 and v_i_2 of the edge e_i are in different components of C_i for each i, and therefore, in different components of C. Hence, x and y cannot be in the same component of C.
If some t_j is odd, then there must be another t_l which is odd. By applying induction on H_j, we get that there is a triangular cactus C_j with r_j ≥ ⌈t_j/2⌉ triangles. Then by applying induction on H_l with outeredge (for H_l) e_l, we get that there is a triangular cactus C_l which has r_l ≥ ⌊t_l/2⌋ triangles such that v_l_1 and v_l_2 are not in the same component of C_l. For i ∈ [k] ∖ {j,l}, by applying induction to H_i there is a triangular cactus C_i with at least t_i/2 triangles. Now consider the subgraph C = ∪_{i=1}^{k} C_i, which is a triangular cactus because its cycles can only be triangles, and C has r = ∑_{i=1}^{k} r_i triangles with r ≥ ∑_{i ∈ [k] ∖ {j,l}} t_i/2 + ⌈t_j/2⌉ + ⌊t_l/2⌋ = t/2. Note that x and y are not in the same component of C since v_l_1 and v_l_2 are not in the same component of C_l.
This completes the first part of the proof when t is even.
For the second part of the proof, we consider that t is odd. We will first prove that there exists a triangular cactus C with r ≥ ⌈t/2⌉ triangles.
Since t ≥ 1 is odd, H has at least one inner triangle, uvw.
Let H_1 be the subgraph of H that is split from vertex v by edge uw, H_2 be the subgraph of H that is split from vertex u by edge vw, and H_3 be the subgraph of H that is split from vertex w by edge uv. See Figureย <ref>. These three subgraphs together have exactly t-1 inner triangles of H.
Let H_1, H_2, and H_3 have t_1, t_2 and t_3 inner triangles, respectively.
Because t-1 is even, there are two possibilities: either each of these three subgraphs has an even number of inner triangles, or exactly two of the subgraphs have an odd number of inner triangles.
For the first possibility, we apply the induction hypothesis or Fact <ref> to H_1 with outeredge uw, H_2 with outeredge vw, and H_3 with outeredge uv. For H_1, there is a triangular cactus C_1 with r_1 ≥ t_1/2 triangles such that u and w are in different components of C_1. Similarly for H_2 and H_3, there is a triangular cactus C_2 with r_2 ≥ t_2/2 triangles such that v and w are in different components of C_2, and there is a triangular cactus C_3 with r_3 ≥ t_3/2 triangles such that u and v are in different components of C_3, respectively. We claim that C = ∪_{i=1}^{3} C_i ∪ H[{u,v,w}] is a triangular cactus. Indeed, since u, v, and w are in different components of each cactus (or they are not in the cacti at all), by putting them together u, v, w are still in different components, allowing us to add the triangle uvw and maintain the property of being a triangular cactus. Obviously C has r = r_1 + r_2 + r_3 + 1 triangles, and t_1 + t_2 + t_3 = t-1 which is even, so r ≥ t_1/2 + t_2/2 + t_3/2 + 1 = ⌈t/2⌉.
For the second possibility, WLOG, assume that t_1 is even while t_2 and t_3 are odd. We apply the induction hypothesis or Fact <ref> to H_1 with outeredge uw, which tells us there is a triangular cactus C_1 with r_1 ≥ t_1/2 triangles such that u and w are in different components of C_1. Applying the induction hypothesis to H_2 and H_3, there is a cactus in H_i with r_i ≥ ⌈t_i/2⌉ triangles, for i=2,3. Because u and w are in two different components of C_1, the subgraph C = ∪_{i=1}^{3} C_i is a triangular cactus with r = r_1 + r_2 + r_3 triangles and r ≥ t_1/2 + ⌈t_2/2⌉ + ⌈t_3/2⌉ = ⌈t/2⌉.
Finally, we consider the situation where t is odd and xy is an outeredge. We have proved that there is a triangular cactus C with r ≥ ⌈t/2⌉ triangles. If x and y are in different components of C we are done. If x and y are in the same component of C, we can remove one of the triangles in that component so that x and y are no longer in the same component. Let C' be the subgraph after removing the triangle. C' is a triangular cactus with at least ⌈t/2⌉ - 1 = ⌊t/2⌋ triangles.
This concludes the proof of Lemma <ref>.
We need the following consequence of the lemma.
Let H be an outerplane graph
with t inner triangles. Then
there exists a triangular cactus in H with ⌈t/2⌉ triangles.
Assume that H has k components, H_1, H_2,...,H_k. Suppose that H_i has t_i inner triangles and l_i blocks, B_i1,...,B_il_i.
Using Lemma <ref>, every B_ij with t_ij inner triangles has a triangular cactus C_ij as a subgraph with at least ⌈t_ij/2⌉ triangles. The subgraph C_i = ∪_{j=1}^{l_i} C_ij is a triangular cactus in H_i since the union does not add any new cycle. C_i has at least ∑_{j=1}^{l_i} ⌈t_ij/2⌉ ≥ ⌈t_i/2⌉ triangles, since t_i = ∑_{j=1}^{l_i} t_ij. Now consider the subgraph C = ∪_{i=1}^{k} C_i, which is a triangular cactus in H and has at least ∑_{i=1}^{k} ⌈t_i/2⌉ ≥ ⌈t/2⌉ triangles, as we have t = ∑_{i=1}^{k} t_i.
We are now ready to establish the approximation ratio.
The approximation ratio of Algorithm STS is 7/10.
First we show that the approximation ratio is at most 7/10.
Take a triangulated outerplanar graph with vertex set Q and q vertices such that q is odd. For each outeredge uv of Q, add two new vertices a_uv and b_uv and the three edges ua_uv, a_uvb_uv, and b_uvv in the outer face of Q such that it remains outerplanar. Name this new graph H; it has 3q vertices and (2q-3) + 3q = 5q-3 edges.
Figureย <ref> presents a drawing of H with q=7.
Consider H as the input to the algorithm. Note that H is the optimum solution.
Phase 1 of the algorithm finds a triangular cactus M_0 with ⌊q/2⌋ triangles connecting all the vertices of Q.
One can easily check that no square can be added to M_0 in Phase 2 of the algorithm.
So the algorithm's output has 3q - 1 + ⌊q/2⌋ edges,
as every triangle of M_0 adds one more edge compared to a spanning tree.
By letting q → ∞, the ratio of the number of edges in the output divided by the number of edges in the optimum converges to 7/10.
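Spelling out the arithmetic behind this limit: for odd q we have ⌊q/2⌋ = (q-1)/2, so the output has 3q - 1 + (q-1)/2 = (7q-3)/2 edges while H has 5q-3 edges, and
(7q-3)/(2(5q-3)) = (7q-3)/(10q-6) → 7/10 as q → ∞.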
Let denote a maximum outerplanar subgraph of the input graph G and let denote the number of edges of ( is the value of the objective).
We assume that the input graph is connected (or else we can run the algorithm
on every connected component) and therefore is also connected.
Also, we fix an embedding of , so that it becomes an outerplane graph.
Let t and s denote respectively
the number of inner triangles and inner squares in
.
Let r and c be respectively the number of triangles and squares
in the algorithm's output.
From Corollary <ref> and the fact that Phase 1 finds a cactus with the maximum number of triangles in the input graph, we have
t ≤ 2r.
We need the following claim.
4t + s ≤ 10r + 6c.
Let M_1 (from the pseudocode of the algorithm) have q non-trivial connected components A_1, A_2, …, A_q.
Let r_i, c_i be the number of triangles and squares in A_i, respectively.
This implies that the number of vertices in the component A_i
is exactly 3c_i + 2r_i + 1.
We have r = ∑_{j=1}^{q} r_j and c = ∑_{j=1}^{q} c_j.
Let B_i = [V(A_i)] and let B_i^j, j = 1, 2, …, j_i, be the connected components of B_i, and let b_i^j = |V(B_i^j)|.
If b_i^j > 1, let B_i^j(l), for l = 1, 2, …, l_i^j, be the blocks of B_i^j, and let b_i^j(l) = |V(B_i^j(l))|. Note that b_i^j(l) ≥ 2.
We have
∑_{l=1}^{l_i^j} b_i^j(l) = b_i^j + (l_i^j - 1).
Notice that all the inner faces of all the B_i are faces of some
block B_i^j(l).
Consider an inner square S of . At the end of Phase 2 of the algorithm, S will satisfy exactly one of the following conditions.
* S is the boundary of an inner face of some B_i^j(l) (we say that S is of Type I)
* S is not of Type I and
has one edge that is also an edge of some B_i^j(l) (we say that
S is of Type II)
* S is not of Type I or II and
is such that two non-consecutive vertices
belong to the same B_i (we say that S is of Type III).
If none of these three conditions holds,
then S can be added to E_1 in Phase 2
of the algorithm, and becomes a inner square of Type I. See Figureย <ref>.
Let s'_1, s'_2, and s'_3 be the number of inner squares of of types I, II,
and III respectively. Then s = s'_1 + s'_2 + s'_3.
No triangle T can be added to E_1 while keeping a square-triangular cactus,
since otherwise M_0 would not be a triangular cactus with maximum
number of triangles. Using this, we classify the inner triangles of into
Type I and Type II,
* T is the boundary of an inner face of some B_i^j(l) (we say that T is of Type I)
* T is not of Type I and has at least one edge that is also an
edge of some B_i^j(l) (we say that T is of Type II).
Let t'_1 and t'_2 be the number of inner triangles of of types I
and II respectively. Then t = t'_1 + t'_2.
Block B_i^j(l) has at most b_i^j(l)-2 inner faces. We obtain:
t'_1 + s'_1 ≤ ∑_{i,j,l} (b_i^j(l) - 2).
We associate with each inner triangle (inner square, respectively) of of Type II
an edge of some B_i^j(l), precisely the edge that is both
in B_i^j(l) and in the inner triangle (inner square, respectively).
If b_i^j(l) > 2, there are exactly b_i^j(l) edges on the
outer face of B_i^j(l). Any such edge e
can be associated with at most one
inner triangle or inner square of Type II, as we argue below. For illustration look at edge e in Figureย <ref>.
Every edge of
participates in at most two inner faces of .
On one side of e
lies an inner face of B_i^j(l) and this is
also an inner face of , based on the following fact.
Let H be an outerplane graph. Let H' be an induced subgraph of H. Then, an inner face of H' is also an inner face of H.
Thus one of the at most two inner triangles or inner squares
of that
could be associated with e is in fact an inner triangle
or an inner square of Type I, and not of Type II.
Inneredges of B_i^j(l) are not associated with
any inner triangles or inner squares of Type II, since the boundary of two faces
of on
both sides of such an inneredge of B_i^j(l)
are of Type I, based on the reasoning above.
If b_i^j(l) = 2, there exists exactly one edge in B_i^j(l) and this edge
can be associated with at most two inner triangles or inner squares
of Type II,
since every edge of
participates in at most two inner faces of . As an illustration, please look at B_1^2(2) in Figureย <ref>.
Based on these arguments, we conclude:
t'_2 + s'_2 ≤ ∑_{i,j,l} b_i^j(l).
We continue by counting the inner squares of Type III of .
Let S be such a square with vertices u,v,w,x in clockwise order.
We must have that some B_i contains two non-consecutive
of these four vertices, and does not contain the other two,
or else S would be of Type I or Type II.
WLOG, let u,w ∈ V(B_i) and x,v ∉ V(B_i). Note that u and w
are not adjacent in , since there is no edge inside this square, and an
edge outside this square will make
either x or v not on the
outer face of , contradicting outerplanarity.
By the same reasoning,
does not have a path from u to w that does
not pass through either x or v.
So u and w are in different components of B_i.
We associate with S the pair of vertices u,w (recall that uw is not an edge in ).
We claim that the total number of inner squares of Type III of
whose corresponding pair of vertices are in different components of B_i is at most
j_i - 1 (recall that j_i is the number of components of B_i), as we argue in the next paragraph.
Let F_i denote the set of all edges uw for each pair of vertices u,w of B_i corresponding to some square of Type III. with F_i added is still
outerplanar, and cannot contain a path from u to w (except for the
edge uw) that does not go through the corresponding v or x. Since
v and x are not added to B_i, if we add F_i to B_i, then this new graph (V(B_i), E(B_i) ∪ F_i) does not contain any cycle which is not in B_i. We conclude that |F_i| ≤ j_i - 1.
Thus, it follows that
s'_3 ≤ ∑_{i=1}^{q} (j_i - 1).
By adding up Equation (<ref>) and Equation (<ref>) we get that
t'_1 + s'_1 + t'_2 + s'_2 ≤ ∑_{i=1}^{q} ∑_{j ∈ [j_i], b_i^j>1} ∑_{l=1}^{l_i^j} (2 b_i^j(l) - 2).
Using Equation (<ref>), we have that
∑_{l=1}^{l_i^j} (2 b_i^j(l) - 2) = 2 b_i^j - 2. Therefore, Inequality (<ref>) simplifies to
t'_1 + s'_1 + t'_2 + s'_2 ≤ ∑_{i=1}^{q} ∑_{j ∈ [j_i], b_i^j>1} (2 b_i^j - 2),
Since for b_i^j = 1 we have 2 b_i^j - 2 = 0, we can add these terms to the sum without changing its value. Hence,
t'_1 + s'_1 + t'_2 + s'_2 ≤ ∑_{i=1}^{q} ∑_{j ∈ [j_i]} (2 b_i^j - 2).
Note that since the B_i^j are the components of B_i and the vertex set of B_i is the same as the vertex set of A_i, we have ∑_{j ∈ [j_i]} b_i^j = |V(B_i)| = |V(A_i)|, which, recall, is equal to 3c_i + 2r_i + 1. Hence,
t'_1 + s'_1 + t'_2 + s'_2 ≤ ∑_{i=1}^{q} ( 2(3c_i + 2r_i + 1) - 2 j_i ).
Now, adding Equation (<ref>) to the inequality above, we get that
t'_1 + s'_1 + t'_2 + s'_2 + s'_3 ≤ ∑_{i=1}^{q} ( 2(3c_i + 2r_i + 1) - 2 j_i + j_i - 1 ).
Recall that t = t'_1 + t'_2 and s = s'_1 + s'_2 + s'_3, hence,
t + s ≤ ∑_i ( 2(3c_i + 2r_i) - j_i + 1 ) ≤ ∑_i (4 r_i + 6 c_i) = 4r + 6c.
Multiplying Equation (<ref>) by 3 and adding it to the inequality above, we obtain Equation (<ref>):
4t + s ≤ 10r + 6c.
This concludes the proof of Claim <ref>.
We continue with the proof of Theorem <ref>.
Let have blocks H_1, H_2, …, H_k, and let n_j = |V(H_j)|
and m_j = |E(H_j)|.
Recall that is connected.
Then n := |V| = |V(G)| = |V()|
= ( ∑_{j=1}^{k} n_j ) - (k-1).
We also have that = |E()| = ∑_{j=1}^{k} m_j.
Let f_i^j be the number of internal faces of length i in H_j,
and t_j and s_j be respectively f_3^j and f_4^j.
We have that n_j = 2 + t_j + 2 s_j + ∑_{i ≥ 5} (i-2) f_i^j,
and m_j = 1 + 2 t_j + 3 s_j + ∑_{i ≥ 5} (i-1) f_i^j.
Note that Phaseย 3 makes the output subgraph connected. The algorithm outputs a solution with (n-1) + r + c edges, as
every triangle or square increases the cyclomatic number[the cyclomatic number (also called circuit rank, cycle rank, or nullity) of an undirected graph is the minimum number of edges that must be removed from the graph to break all its cycles, making it into a tree or forest.] of the
output by 1.
We have
n-1+r+c = r + c - k + ∑_{j=1}^{k} n_j
= r + c - k + ∑_{j=1}^{k} ( 2 + t_j + 2 s_j + ∑_{i ≥ 5} (i-2) f_i^j )
= r + c + k + t + 2s + ∑_{j=1}^{k} ∑_{i ≥ 5} (i-2) f_i^j ,
and
= ∑_{j=1}^{k} ( 1 + 2 t_j + 3 s_j + ∑_{i ≥ 5} (i-1) f_i^j ) = k + 2t + 3s + ∑_{j=1}^{k} ∑_{i ≥ 5} (i-1) f_i^j.
For i ≥ 5, we get that 10(i-2) ≥ 7(i-1).
By Inequality (<ref>), we have 10r + 10c ≥ 4t + s, and we also have 10k ≥ 7k. Hence, one can easily check that
10 ( r + c + k + t + 2s + ∑_{j=1}^{k} ∑_{i ≥ 5} (i-2) f_i^j )
≥ 7 ( k + 2t + 3s + ∑_{j=1}^{k} ∑_{i ≥ 5} (i-1) f_i^j ).
Therefore, we have shown that the number of edges in the algorithm's output, n-1+r+c, is at least 7/10 of the number of edges of a maximum outerplanar subgraph. This finishes the proof of Theorem <ref>.
ยง CONCLUDING REMARKS
In this paper we presented an algorithm which improves the approximation ratio for Maximum Outerplanar Subgraph to 7/10. It is natural to ask if this algorithm can be improved. Some obvious modifications to the algorithm do not help.
Observe that adding pentagons or larger outerplanar graphs after Phase 2 of Algorithm STS will not make the ratio better than 7/10. The example used in the first part of the proof of Theoremย <ref> illustrates this. See Figureย <ref>. There are no pentagons or any other structure in the optimum that have their vertices in different components of the graph M_1 from the algorithm.
We believe that the greedy technique of adding edges to an outerplanar subgraph as long as the subgraph stays outerplanar does not improve the ratio provided that the worst possible edge is chosen.
An outerplanar k-restricted structure is a simple graph whose blocks are outerplanar and each has at most k vertices. In this paper, outerplanar 4-restricted structures are used by our approximation algorithm. The outerplanar k-restricted ratio is the infimum, over simple outerplanar graphs H, of the ratio of the number of edges in a maximum k-restricted structure subgraph of H to the number of edges of H. It is proved in <cit.> that, as k tends to infinity, the outerplanar k-restricted ratio tends to 1.
This could be useful in improving the
approximation ratio for
Maximum Outerplanar Subgraph,
although we do not know if there is a polynomial time algorithm for
finding an outerplanar 4-restricted structure with maximum number of edges that is a subgraph of the input graph.
A diamond is a cycle of size four with exactly one chord and is an outerplanar graph. Note that a diamond is not a square-triangular structure.
A diamond has five edges and four vertices, therefore, it is a better structure than a square since it has more edges and also a better structure than two triangles since it has fewer vertices. Since K_4 is not outerplanar, the only blocks that an outerplanar 4-restricted structure can have are bridges, triangles, squares and diamonds.
One way to use diamonds is to add them greedily (as long as the four vertices are in different connected components), followed by triangles as in <cit.>. We could also add squares and pentagons after this. And after this,
one employs Phase 3 of Algorithm STS.
This approach does not lead to an approximation ratio better than 2/3, as the following series of examples shows.
The optimum solutions H_k are taken from Theorem 4 of <cit.>, and H_2 is illustrated in Figure <ref>.
H_0 is a triangle.
To obtain H_k from H_{k-1}, duplicate and then subdivide every outeredge.
Thus H_k has 3·2^k vertices and is triangulated with 2·3·2^k - 3 edges. We also have a separate diamond cactus D_k with d diamonds that includes all the vertices of H_k that came from H_{k-1}, plus one additional vertex.
Here, d = 2^{k-1}, as a diamond cactus with d diamonds has 1 + 3d vertices.
Given an input that has both the edges of H_k and of D_k,
the algorithm can select D_k by greedily adding diamonds,
after which it goes directly to Phase 3 as the vertices of H_k that are not
in D_k form an independent set and therefore cannot make
any triangle, square, pentagon, or larger block that
one can greedily add to D_k. The output of this greedy algorithm has 3·2^k - 1 + 2d edges, as every diamond increases the cyclomatic number by 2.
Recall that d = 2^{k-1} and that the optimum has 2·3·2^k - 3 = 6·2^k - 3 edges. The ratio of the output to the optimum converges to 2/3 as k goes to infinity.
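Spelling out this limit with d = 2^{k-1}: the output has 3·2^k - 1 + 2·2^{k-1} = 4·2^k - 1 edges and the optimum has 6·2^k - 3 edges, so the ratio equals
(4·2^k - 1)/(6·2^k - 3), which tends to 4/6 = 2/3 as k → ∞.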
The question of how large
an approximation ratio for the Maximum Outerplanar Subgraph problem can be achieved remains open.
Is there a linear-time algorithm
with approximation ratio 1/2+ฯต?
Is there an approximation algorithm for the Maximum Weight Outerplanar Subgraph problem with a ratio better than 2/3?
10
Baker111
Brendaย S. Baker.
Approximation algorithms for NP-complete problems on planar graphs.
J. ACM, 41(1):153โ180, jan 1994.
brehaut77
W.ย Brehaut.
An efficient outerplanarity algorithm.
In Proceedings of the 8th South-Eastern Conference on
Combinatorics, Graph Theory, and Computing, pages 99โ113, 1977.
CalinescuFFK98
G.ย Cฤlinescu, C.G. Fernandes, U.ย Finkler, and H.ย Karloff.
A better approximation algorithm for finding planar subgraphs.
Journal of Algorithms, 27(2):269โ302, 1998.
CF08
Gruia Cฤlinescu and Cristinaย G. Fernandes.
On the k-structure ratio in planar and outerplanar graphs.
Discrete Mathematics & Theoretical Computer Science, 10(3),
2008.
calinescu-new
Gruia Cฤlinescu, Cristinaย G. Fernandes, Howard Karloff, and Alexander
Zelikovsky.
A new approximation algorithm for finding heavy planar subgraphs.
Algorithmica, 36:179โ205, 2003.
Schmid17
Parinya Chalermsook and Andreas Schmid.
Finding triangles for maximum planar subgraphs.
In Sheung-Hung Poon, Md.ย Saidur Rahman, and Hsu-Chun Yen, editors,
WALCOM: Algorithms and Computation, pages 373โ384, 2017.
CGNRS06
Chandra Chekuri, Anupam Gupta, Ilan Newman, Yuri Rabinovich, and Alistair
Sinclair.
Embedding k-outerplanar graphs into L1.
SIAM Journal on Discrete Mathematics, 20(1):119โ136, 2006.
CLL14
Hoย Yee Cheung, Lapย Chi Lau, and Kaiย Man Leung.
Algebraic algorithms for linear matroid parity problems.
ACM Transactions on Algorithms (TALG), 10(3):1โ26, 2014.
cimikowskicoppersmith
Robert Cimikowski and Don Coppersmith.
The sizes of maximal planar, outerplanar, and bipartite planar
subgraphs.
Discrete Mathematics, 149(1-3):303โ309, 1996.
Donkerrrrs
Huib Donkers, Bart Jansen, and Michaล Wลodarczyk.
Preprocessing for outerplanar vertex deletion: An elementary kernel
of quartic size.
Algorithmica, 84(11), 2022.
DF97
Rong-chii Duh and Martin Fรผrer.
Approximation of k-set cover by semi-local optimization.
In Frankย Thomson Leighton and Peterย W. Shor, editors, Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of
Computing, El Paso, Texas, USA, May 4-6, 1997, pages 256โ264. ACM, 1997.
FK05
Uriel Feige and Shimon Kogan.
The hardness of approximating hereditary properties.
2005.
Available on: http://research.microsoft.com/research/theory/feige/homepagefiles/hereditary.pdf.
felsner
Stefan Felsner, Giuseppe Liotta, and Stephen Wismath.
Straight-line drawings on restricted integer grids in two and three
dimensions.
In Petra Mutzel, Michael Jรผnger, and Sebastian Leipert, editors,
Graph Drawing, pages 328โ342, Berlin, Heidelberg, 2002. Springer
Berlin Heidelberg.
GabSta
H.N. Gabow and M.ย Stallmann.
Efficient algorithms for graphic matroid intersection and parity.
In 12th Colloq. on Automata, Language and Programming, pages
210โ220, 1985.
garey
M.R. Garey and D.S. Johnson.
Computers and Intractability: A Guide to the Theory of
NP-completeness.
Mathematical Sciences Series. W. H. Freeman, 1979.
G05
D.ย Gonรงalves.
Edge partition of planar graphs into two outerplanar graphs.
In 37th Annual ACM Symposium on Theory of Computing, pages
504โ512, 2005.
iwata2021weighted
Satoru Iwata and Yusuke Kobayashi.
A weighted linear matroid parity algorithm.
SIAM Journal on Computing, 51(2):238โ280, 2022.
KANT19961
Goos Kant.
Augmenting outerplanar graphs.
Journal of Algorithms, 21(1):1โ25, 1996.
KEDLAYA1996238
Kiranย S. Kedlaya.
Outerplanar partitions of planar graphs.
Journal of Combinatorial Theory, Series B, 67(2):238โ248,
1996.
KRY95b
Samir Khuller, Balaji Raghavachari, and Neal Young.
Approximating the minimum equivalent digraph.
SIAM J. Comput., 24(4):859โ872, 1995.
liebers2004
Annegret Liebers.
Planarizing graphsโa survey and annotated bibliography.
In Graph Algorithms And Applications 2, pages 257โ330. World
Scientific, 2004.
LiuGel
P.C. Liu and R.C. Geldmacher.
On the deletion of nonplanar edges of a graph.
In 10th Southeastern Conference on Combinatorics, Graph Theory,
and Computing, pages 727โ738, 1977.
LovPlu
L.ย Lovรกsz and M.ย D. Plummer.
Matching Theory.
Elsevier Science, 1986.
LY93
Carsten Lund and Mihalis Yannakakis.
The approximation of maximum subgraph problems.
In International Colloquium on Automata, Languages and
Programming, 1993.
maheshwari
Anil Maheshwari and Norbert Zeh.
External memory algorithms for outerplanar graphs.
In Algorithms and Computation, pages 307โ316, Berlin,
Heidelberg, 1999. Springer Berlin Heidelberg.
MANNING
Joseph Manning and Mikhailย J. Atallah.
Fast detection and display of symmetry in outerplanar graphs.
Discrete Applied Mathematics, 39(1):13โ35, 1992.
MITCHELL79
Sandraย L. Mitchell.
Linear algorithms to recognize outerplanar and maximal outerplanar
graphs.
Information Processing Letters, 9(5):229โ232, 1979.
MF07
Kerri Morgan and Graham Farr.
Approximation algorithms for the maximum induced planar and
outerplanar subgraph problems.
Journal of Graph Algorithms and Applications, 11(1):165โ193,
2007.
OS81
Haruko Okamura and Paulย D Seymour.
Multicommodity flows in planar graphs.
Journal of Combinatorial Theory, Series B, 31(1):75โ81, 1981.
Orlin08
Jamesย B Orlin.
A fast, simpler algorithm for the matroid parity problem.
In International Conference on Integer Programming and
Combinatorial Optimization, pages 240โ258. Springer, 2008.
Osipov2006
Vitali Osipov.
A polynomial time randomized parallel approximation algorithm for
finding heavy planar subgraphs.
Master's thesis, Universitรคt des Saarlandes, August 2006.
poranen
Timo Poranen.
A simulated annealing algorithm for the maximum planar subgraph
problem.
International Journal of Computer Mathematics, 81(5):555โ568,
2004.
P05
Timo Poranen.
Heuristics for the maximum outerplanar subgraph problem.
J. Heuristics, 11:59โ88, 01 2005.
Poranen_2008
Timo Poranen.
Two new approximation algorithms for the maximum planar subgraph
problem.
Acta Cybernetica, 18(3):503โ527, Jan. 2008.
resende
Mauricioย GC Resende and Celsoย C Ribeiro.
A grasp for graph planarization.
Networks: An International Journal, 29(3):173โ189, 1997.
syslo79
Maciejย M Sysลo and Masao Iri.
Efficient outerplanarity testing.
Fundamenta Informaticae, 2(1):261โ275, 1979.
syslo78
M.M. Syslo.
Outerplanar graphs: characterizations, testing, coding and counting.
Bull. Acad. Polon. Sci., Ser. Sci. Math. Astronom. Phys.,
26:675โ684, 1978.
SYSLO197947
Maciejย M. Sysลo.
Characterizations of outerplanar graphs.
Discrete Mathematics, 26(1):47โ53, 1979.
S03
Z.ย Szigeti.
On the graphic matroid parity problem.
J. Combin. Theory Ser. B, 88:247โ260, 2003.
W02
Douglasย B. West.
Introduction to Graph Theory.
wiegers
M.ย Wiegers.
Recognizing outerplanar graphs in linear time.
In Graph-Theoretic Concepts in Computer Science, International
Workshop WG'86, 246(1):165โ176, 1984.
Yan78
M.ย Yannakakis.
Node- and edge-deletion NP-complete problems.
In ACM Symposium on Computational Geometry, pages 253โ264,
1978.
|
http://arxiv.org/abs/2306.03614v1
|
20230606120547
|
Simulations of idealised 3D atmospheric flows on terrestrial planets using LFRic-Atmosphere
|
[
"Denis E. Sergeev",
"Nathan J. Mayne",
"Thomas Bendall",
"Ian A. Boutle",
"Alex Brown",
"Iva Kavcic",
"James Kent",
"Krisztian Kohary",
"James Manners",
"Thomas Melvin",
"Enrico Olivier",
"Lokesh K. Ragta",
"Ben J. Shipway",
"Jon Wakelin",
"Nigel Wood",
"Mohamed Zerroukat"
] |
astro-ph.EP
|
[
"astro-ph.EP",
"physics.ao-ph"
] |
[1]Denis E. Sergeev
[1]Nathan J. Mayne
[2]Thomas Bendall
[2, 1]Ian A. Boutle
[2]Alex Brown
[2]Iva Kavčič
[2]James Kent
[1]Krisztian Kohary
[2]James Manners
[2]Thomas Melvin
[3]Enrico Olivier
[4]Lokesh K. Ragta
[2]Ben Shipway
[4]Jon Wakelin
[2]Nigel Wood
[2]Mohamed Zerroukat
[1]Department of Physics and Astronomy, University of Exeter, Exeter, EX4 4QL, UK
[2]Met Office, FitzRoy Road, Exeter, EX1 3PB, UK
[3]Research Software Engineering, University of Exeter, Exeter, EX4 4QE, UK
[4]Department of Information Technology, University of Leicester, University Road, Leicester, LE1 7RH, UK
Denis E. Sergeev ([email protected])
Simulations of idealised 3D atmospheric flows on terrestrial planets using LFRic-Atmosphere
===========================================================================================
We demonstrate that LFRic-Atmosphere, a model built using the Met Office's GungHo dynamical core, is able to reproduce idealised large-scale atmospheric circulation patterns specified by several widely-used benchmark recipes.
This is motivated by the rapid rate of exoplanet discovery and the ever-growing need for numerical modelling and characterisation of their atmospheres.
Here we present LFRic-Atmosphere's results for the idealised tests imitating circulation regimes commonly used in the exoplanet modelling community.
The benchmarks include three analytic forcing cases: the standard Held-Suarez test, the Menou-Rauscher Earth-like test, and the Merlis-Schneider Tidally Locked Earth test.
Qualitatively, LFRic-Atmosphere agrees well with other numerical models and shows excellent conservation properties in terms of total mass, angular momentum and kinetic energy.
We then use LFRic-Atmosphere with a more realistic representation of physical processes (radiation, subgrid-scale mixing, convection, clouds) by configuring it for the four TRAPPIST-1 Habitable Atmosphere Intercomparison (THAI) scenarios.
This is the first application of LFRic-Atmosphere to a possible climate of a confirmed terrestrial exoplanet.
LFRic-Atmosphere reproduces the THAI scenarios within the spread of the existing models across a range of key climatic variables.
Our work shows that LFRic-Atmosphere performs well in the seven benchmark tests for terrestrial atmospheres, justifying its use in future exoplanet climate studies.
We are on the precipice of a new era in planetary science, as the atmospheres of Earth-sized terrestrial extrasolar planets (exoplanets) are likely to soon be detected and then characterised.
Interpreting these observations to the fullest extent will demand efficient use of one of the key tools we have to study atmospheric processes, 3D general circulation models (GCMs).
Here, we present the first results of the Met Office's next generation atmospheric model, LFRic-Atmosphere, applied to terrestrial planets and validate it against several other GCMs using a set of commonly-used benchmarks <cit.>, and the cases adopted as part of a recent exoplanet model intercomparison project <cit.>.
GCMs are instrumental in our understanding of planetary atmospheres as they encapsulate a range of physical and chemical processes interacting with each other, with the treatments constrained by both theory and observations <cit.>.
Over the last two decades, a number of GCMs have been applied to explore the climate evolution, observability and potential habitability of terrestrial exoplanets <cit.>.
Through the application of GCMs, numerous confirmed and hypothetical exoplanet atmospheres have been investigated, and important mechanisms or processes studied <cit.>.
GCMs applied in such studies vary in complexity and the expanding list includes, but is not limited to: ExoCAM <cit.>, Exo-FMS <cit.>, ExoPlaSim <cit.>, Isca <cit.>, MITgcm <cit.>, Planetary Climate Model <cit.>, The Resolving Orbital and Climate Keys of Earth and Extraterrestrial Environments with Dynamics <cit.>, and THOR <cit.>.
The growing variety of GCMs provides an invaluable multi-model perspective on exoplanet atmospheres, which is especially important in the light of the extreme scarcity of observational data <cit.>.
The current operational weather and climate prediction model of the Met Office, the UM, has been extensively used to study an array of processes in planetary atmospheres, for both terrestrial <cit.> and gaseous <cit.> planets; and has also participated in the pioneering exoplanet model intercomparison projects <cit.>.
However, its application for more ambitious numerical experiments both for exoplanet and Earth climates, such as in global kilometre-scale cloud-resolving setups <cit.>, faces several challenges.
The first key challenge is that the UM is based on a traditional latitude-longitude (lat-lon) grid, whose simplicity comes at the cost of a computational bottleneck due to the grid convergence at the poles <cit.>.
As a result, in high-resolution multi-processor setups the UM reaches a plateau of scalability <cit.>.
This limitation has restricted applications at high, convection-permitting resolutions to small regions within the global model <cit.>.
The second major challenge is that the UM lacks portability and flexibility with respect to high-performance computing platforms, making adaptation to new hardware difficult <cit.>.
To address the limitations of the UM, a new modelling framework has been developed by the Met Office and its partners.
The infrastructure for this model is called LFRic <cit.>.
Crucially, it is based on a new dynamical core, GungHo, designed for finite element methods on unstructured meshes, such as the cubed-sphere mesh, that avoid the polar singularity problem and allow for better parallel scalability.
Alongside GungHo and the LFRic infrastructure, physical parameterisations that are already well tested in the UM are combined to create a model we refer to as the LFRic-Atmosphere.
While GungHo and the LFRic infrastructure are still under active development, they already show promising results in simulating geophysical flows <cit.>.
Critically, LFRic will also be an open-source framework (under the BSD 3-clause license) aiding wider collaborative efforts.
Complementary to these studies, the purpose of our paper is to demonstrate that LFRic-Atmosphere is capable of robustly simulating global-scale atmospheric circulation on terrestrial planets in a selection of commonly-used benchmark cases.
Specifically, we perform experiments with temperature and wind forcing, analytically prescribed following <cit.>, <cit.>, and <cit.>.
These tests are the simplest way to obtain an idealised atmospheric circulation over a climatic period in a 3D GCM.
Stepping up the model complexity ladder, we then switch to treating unresolved processes such as radiative transfer, turbulence, and moist physics more realistically via the suite of physical parameterisations inherited by LFRic-Atmosphere from the UM.
With this setup, we simulate the four temperate climate scenarios prescribed by the TRAPPIST-1 Habitable Atmosphere Intercomparison protocol <cit.> for an Earth-sized rocky exoplanet, TRAPPIST-1e <cit.>.
This proves for the first time that LFRic-Atmosphere is capable of reliably simulating non-Earth climates on temperate exoplanets.
Our work thus provides a necessary stepping stone for future theoretical studies focused on planetary atmospheres using LFRic-Atmosphere.
In the next section (Sec.ย <ref>) we give a description of the LFRic-Atmosphere model, detailing the main features of its dynamical core called GungHo.
In Sec.ย <ref>, we show that LFRic-Atmosphere is capable of reproducing three temperature forcing benchmarks: Held-Suarez (Sec.ย <ref>), Earth-like (Sec.ย <ref>), and Tidally Locked Earth (Sec.ย <ref>).
Next, in Sec.ย <ref> we apply LFRic-Atmosphere with a full suite of physical parameterisations to the four THAI cases and demonstrate that it reproduces them with sufficient fidelity.
Sec.ย <ref> summarises our findings and gives an outlook for future LFRic-Atmosphere development in the context of extraterrestrial atmospheres.
ยง LFRIC-ATMOSPHERE DESCRIPTION
LFRic (named after the pioneer of numerical weather prediction Lewis Fry Richardson) is the next generation modelling framework developed by the Met Office <cit.>.
At its heart lies the GungHo dynamical core (see Sec.ย <ref>), designed to be efficiently scalable on exascale supercomputers.
The combination of GungHo and a suite of physical parameterisations (see Sec.ย <ref>) can be referred to as LFRic-Atmosphere.
LFRic as a project is in an active development stage, with the aim of deployment for operational weather forecasts by the mid-2020s.
From the perspective of the software infrastructure, LFRic's key feature is the `separation of concerns' between science code and parallelisation-related code.
This concept is called PSyKAl after the three layers of code it comprises: Parallel Systems (PSy), Kernel code, and Algorithm code <cit.>.
ยง.ยง The GungHo dynamical core
The dynamical core, a fundamental component of every GCM, solves a form of the Navier-Stokes equations to simulate the movement of mass and energy resolved by the underlying mesh.
It is then coupled to a set of parameterisations representing subgrid physical or chemical processes, such as radiative heating and cooling, cloud microphysics, convection, boundary-layer turbulence, etc.
In this section, a brief description of the dynamical core GungHo is given, while Sec.ย <ref> gives an overview of the physical parameterisations used in the THAI experiments.
Key features of GungHo include the non-hydrostatic equations (Sec.ย <ref>) already shown to be important for certain exoplanets <cit.>, a quasi-uniform cubed sphere grid (Sec.ย <ref>), a mimetic finite-element discretisation (Sec.ย <ref>), a mass-conserving finite-volume transport scheme (Sec.ย <ref>), and a multigrid preconditioner (Sec.ย <ref>).
ยง.ยง Continuous equations
GungHo solves the fully-compressible non-hydrostatic Euler equations for an ideal gas in a rotating frame:
โu/โ t= -(uยทโ)u - 2 ฮฉรu -โฮฆ -c_p ฮธโฮ ,
โฯ/โ t= -โยท(ฯu),
โฮธ/โ t= -uยทโฮธ,
ฮ ^1-ฮบ/ฮบ= R/p_0ฯฮธ,
where u=(u,v,w) is the velocity vector, ρ is density, θ is potential temperature, Π=(p/p_0)^κ is the Exner pressure function, and Φ is the geopotential.
Additionally, ฮฉ is the planet rotation vector, R is the specific gas constant, p_0 is a reference pressure and ฮบ=R/c_p, where c_p is the specific heat at constant pressure.
As in its predecessor ENDGame <cit.>, GungHo's equations include a few approximations.
First, the geopotential ฮฆ in Eq.ย (<ref>) includes contributions from both the gravitational potential and the centrifugal potential, which are constant with time (constant apparent gravity approximation).
Second, the geopotential is assumed to be spherically symmetric, i.e to vary only with height above the planet's surface and not with longitude or latitude.
Third, the effect of the mass of the atmosphere itself on the distribution of gravity is also neglected.
For most types of planetary atmospheres that could be modelled in GungHo, errors introduced by these approximations are negligible <cit.>.
ยง.ยง Mesh
A key advantage of GungHo is its cubed-sphere grid, underpinning the model's greater computational scalability compared to that of ENDGame and other dynamical cores that use the traditional lat-lon grid <cit.>.
Due to these advantages, cubed-sphere meshes are gaining popularity in other atmospheric models, used both for Earth climate and weather prediction <cit.> and for exoplanet studies <cit.>.
GungHo's cubed-sphere mesh is constructed by gnomonically projecting a cube on a sphere, resulting in a six-faced equi-angular mesh of quadrilateral cells (Fig. <ref>).
This mesh is horizontally quasi-uniform, albeit at the expense of losing orthogonality of cell edges.
Quadrilateral grids have a number of advantages such as the number of edges being twice the number of faces, a necessary condition for avoiding computational modes; and often a logically rectangular structure that facilitates certain schemes such as Semi-Lagrangian methods <cit.>.
The horizontal mesh is then radially extruded to form a full 3D spherical shell, with the mesh treated as structured in the vertical (radial) direction.
The latter allows for data arrays to be contiguous in memory along the radial direction <cit.>, which requires a significant refactoring of the underlying code when porting physical parameterisations from the UM to LFRic-Atmosphere (see Sec.ย <ref>).
The mesh resolution is denoted as CnLm where n is the number of cells along one edge of a panel and m is the number of vertical levels.
Thus there are 6n^2 model columns and 6n^2m cells in the 3D mesh.
In this study, we use a resolution of C48L32 for the Temperature Forcing cases (Sec.ย <ref>) and C48L38 for the THAI cases (Sec.ย <ref>).
A mesh with the same number of columns (13824) was used for recent hot Jupiter studies with Exo-FMS <cit.> and MITgcm <cit.>, while a similar number of columns (albeit with a lat-lon grid) was used in the UM simulations of the Temperature Forcing and THAI benchmarks <cit.>.
For visualisation purposes, we interpolate LFRic-Atmosphere's output from its native mesh to a lat-lon grid of 144 longitudes and 90 latitudes using conservative interpolation.
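As a quick check of the mesh sizes quoted above, the lines below (a trivial calculation for illustration, not LFRic code) count the columns and cells of the CnLm meshes used here and compare them with the 144 x 90 visualisation grid.

def mesh_size(n, m):
    """Columns and 3D cells of a CnLm cubed-sphere mesh: 6 panels of n x n columns, m levels."""
    columns = 6 * n ** 2
    return columns, columns * m

for label, (n, m) in {"C48L32": (48, 32), "C48L38": (48, 38)}.items():
    columns, cells = mesh_size(n, m)
    print(label, columns, cells)          # C48 gives 6 * 48**2 = 13824 columns
print("lat-lon grid points:", 144 * 90)   # 12960, a comparable horizontal resolution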
ยง.ยง Finite element discretisation
The system of equations (<ref>) is discretised in spatial dimensions using the mimetic finite element method (FEM).
The outcome is equivalent to the Arakawa C-grid with Charney-Phillips staggering used in ENDGame <cit.>, but more general in terms of the underlying mesh, which is not necessarily orthogonal.
This FEM is also attractive because it has good wave discretisation properties, avoids spurious computational modes and allows for the conservation of key physical quantities <cit.>.
In GungHo's mixed FEM, a set of four function spaces for hexahedral finite elements with differential mappings between them are defined: the ๐_2 space of vector functions corresponding to fluxes (located on cell faces), the ๐_3 space of scalar functions corresponding to volume integrals (located in cube centres), the ๐_ฮธ space (located in the centre of the top and bottom faces of a cell), and the ๐_ฯ space; see Fig.ย 2 in <cit.> for visual aid.[In earlier versions of GungHo, two additional function spaces were used, as described in <cit.>: the ๐_0 space of pointwise scalar functions (located in cell vertices) and the ๐_1 space of vector functions corresponding to circulations (located on cell edges).]
Each variable is assigned to one of the function spaces: uโ๐_2, (ฮฆ,ฯ,ฮ )โ๐_3, and ฮธโ๐_ฮธ.
The ๐_ฯ space is used to decouple the coordinate field from other FEM spaces.
See <cit.> and <cit.> for the full justification of the mixed FEM used in GungHo (for cartesian and spherical geometry applications, respectively).
ยง.ยง Advection
GungHo uses an Eulerian finite-volume advection scheme based on the method of lines that maintains inherent local conservation of mass <cit.>.
In this method, the temporal part is handled using an explicit scheme, specifically the third-order, three-stage, strong stability preserving RungeโKutta scheme.
The spatial part is treated by finite-volume upwind polynomial reconstructions.
For more details on each of these aspects, see <cit.>.
ยง.ยง Multigrid preconditioner
On every time step, GungHo repeatedly solves its discretised equations as a large linear equation system for corrections to the prognostic variables (u, ฮธ, ฯ, ฮ ), which is computationally one of the most costly parts of the model.
Unlike traditional finite-difference and finite-volume methods, the mimetic finite-element discretisation of Eq.ย (<ref>) requires an approximate Schur-complement preconditioner <cit.> due to the non-diagonal operator associated with the velocity correction.
As described in detail in <cit.>, a bespoke multigrid preconditioner for the Schur-complement pressure correction equation has been recently developed for GungHo where the problematic operator is approximated by a diagonal operator.
The key idea behind the multigrid algorithm is then to coarsen the model mesh in the horizontal dimensions only over several (multigrid) levels (typically four, as used in this study, is sufficient), along with an exact solve in the vertical direction on each multigrid level.
The benefits of using a multigrid preconditioner are twofold.
First, it allows for the superior performance and robustness of the solver for the pressure correction when compared to Krylov subspace methods (used by default in an early version of GungHo).
Second, it offers excellent parallel scalability because it avoids expensive global sum operations, which are typically performed multiple times per time step by other methods, and because much of the computational work is shifted to the coarsest mesh, where there are relatively few unknowns to solve for.
While a relatively coarse global mesh is used in this study (Fig.ย <ref>), we still obtained improved performance when employing the multigrid preconditioner.
Furthermore, our future plans for LFRic-Atmosphere, such as global convection-resolving simulations, and applications to combined interior convection and atmospheric circulation in gas giant planets, will require the scalability that this algorithm offers <cit.>.
ยง TEMPERATURE FORCING CASES
LFRic-Atmosphere's dynamical core, GungHo, has been successfully validated using a set of benchmarks in Cartesian geometry <cit.> and in spherical geometry <cit.>.
The latter study, using GungHo as a shallow water model, demonstrated that it has a similar level of accuracy to other well known shallow water codes.
We thus proceed with a more complex setup: we use GungHo and force it by temperature increments prescribed analytically, following <cit.>.
These tests allow us to simulate an idealised global circulation of the atmosphere over a climatic timescale and qualitatively compare its steady state to that in other 3D GCMs.
At the same time, these tests are free from the uncertainty that inevitably comes with more realistic physical parameterisations (see Sec.ย <ref>).
The test cases presented here are the classic Held-Suarez test <cit.>, an Earth-like test with a stratosphere <cit.>, and a hypothetical Tidally Locked Earth with a longitudinal dipole of the temperature forcing <cit.>.
The tests consist of two parts: temperature forcing (heating and cooling) and horizontal wind forcing (friction), which are added to the right-hand side of the thermodynamic equation (Eq.ย <ref>) and the momentum equation (Eq.ย <ref>), respectively.
Temperature forcing is parameterised as a Newtonian relaxation of potential temperature ฮธ to an equilibrium profile ฮธ_eq:
F_T = T - T_eq/ฯ_rad = -ฮ ฮธ - ฮธ_eq/ฯ_rad,
where ฯ_rad is the relaxation timescale representing a typical radiative timescale.
In the sections below, the equilibrium temperature profiles are expressed in terms of T_eq.
Wind forcing is parameterised as a Rayleigh friction term, which damps the horizontal wind close to the planet's surface:
F_u = - u/ฯ_fric,
where ฯ_fric is the friction timescale.
The initial condition in all three tests is a hydrostatically balanced isothermal atmosphere (T_init=300 K) at rest (Table <ref>).
We allow the model to `spin up' for 200 days (throughout the paper `days' refers to Earth days), after which we assume it has reached a statistically steady state (Fig.ย <ref>).
To show the mean climate (Fig.ย <ref>โ<ref>), we average the results over the subsequent 1000 days, following <cit.> and <cit.>.
The total simulation length is thus 1200 days.
The rest of the relevant model parameters are given in Tableย <ref>.
Validating our results against previous studies requires interpolation of the model output from LFRic-Atmosphere's native hybrid-height coordinate to a pressure-based ฯ coordinate:
ฯ=p/p_surf,
where p is the pressure at each model level and p_surf is the pressure at the surface.
For each time step of LFRic's output, we linearly interpolate the data to an evenly spaced set of 34 ฯ-levels <cit.>.
Temporal and zonal averaging is then performed on the interpolated data on ฯ-levels.
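For a single column, this interpolation can be sketched with numpy.interp as follows, assuming pressure varies monotonically along the column; the actual post-processing operates on full 3D fields and is provided in the scripts referenced in the code availability section, and the example profile below is made up for illustration.

import numpy as np

def column_to_sigma(field, p, p_surf, sigma_levels):
    """Linearly interpolate one column from model levels to sigma = p / p_surf levels.

    `field` and `p` are 1D arrays on model levels; np.interp needs an increasing
    coordinate, so the column is sorted by sigma first."""
    sigma = p / p_surf
    order = np.argsort(sigma)
    return np.interp(sigma_levels, sigma[order], field[order])

# Example: a 38-level temperature column interpolated onto 34 evenly spaced sigma levels.
p = np.linspace(1000e2, 10e2, 38)                        # column pressures, Pa
temperature = 288.0 - 60.0 * np.linspace(0.0, 1.0, 38)   # illustrative profile, K
sigma_levels = np.linspace(0.02, 1.0, 34)
print(column_to_sigma(temperature, p, p_surf=1000e2, sigma_levels=sigma_levels).shape)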
We compare our results with several previous GCM studies, first and foremost with LFRic-Atmosphere's predecessor, the UM, benchmarked in <cit.>.
Note, however, that the UM's code has evolved since the version used in that study, leading to minor differences in these temperature forcing cases.
We therefore supply figures for the latest version of the UM in Appendix <ref>.
ยง.ยง Held-Suarez
The HS test was proposed by <cit.> and while it has been used in many GCM studies <cit.>, for completeness we summarise it here.
The HS test prescribes an equilibrium temperature profile of
T_eq= max{T_stra,[T_surf-ฮ T_horizsin^2ฯ..
..-ฮ T_vertlog(p/p_0) cos^2ฯ](p/p_0)^ฮบ},
where ฯ is latitude, T_stra=200 T_surf=315, ฮ T_horiz=60 , ฮ T_vert=10, and p_0=e5 (Tableย <ref>).
The timescale of the temperature forcing (Eq.ย <ref>) is calculated as
1/ฯ_rad= 1/ฯ_rad, d
+ (1/ฯ_rad, u - 1/ฯ_rad, d) max{0, ฯ-ฯ_b/1-ฯ_b}cos ^4 ฯ,
where ฯ_rad, d=40days, ฯ_rad, u=4days, and ฯ_b=0.7, which is the top of the boundary layer.
The timescale of the wind forcing (Eq.ย <ref>) is given by
1/ฯ_fric= 1/ฯ_fric, fmax{0, ฯ-ฯ_b/1-ฯ_b},
where ฯ_fric, f=1day.
The rest of the parameters, including planetary parameters and gas constants, are given in Tableย <ref>.
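For reference, the Held-Suarez recipe above can be transcribed compactly as follows; this is an illustrative NumPy sketch with the standard parameter values hard-wired, not LFRic source code.

import numpy as np

DAY = 86400.0
T_STRA, T_SURF = 200.0, 315.0            # K
DT_HORIZ, DT_VERT = 60.0, 10.0           # K
P0, KAPPA = 1.0e5, 2.0 / 7.0             # Pa; kappa = R / c_p
SIGMA_B = 0.7
TAU_RAD_D, TAU_RAD_U, TAU_FRIC_F = 40.0 * DAY, 4.0 * DAY, 1.0 * DAY

def t_eq_hs(p, lat):
    """Held-Suarez equilibrium temperature (p in Pa, lat in radians)."""
    t = (T_SURF - DT_HORIZ * np.sin(lat) ** 2
         - DT_VERT * np.log(p / P0) * np.cos(lat) ** 2) * (p / P0) ** KAPPA
    return np.maximum(T_STRA, t)

def inv_tau_rad_hs(sigma, lat):
    """1 / tau_rad: faster relaxation in the low-latitude boundary layer."""
    weight = np.maximum(0.0, (sigma - SIGMA_B) / (1.0 - SIGMA_B))
    return 1.0 / TAU_RAD_D + (1.0 / TAU_RAD_U - 1.0 / TAU_RAD_D) * weight * np.cos(lat) ** 4

def inv_tau_fric_hs(sigma):
    """1 / tau_fric: Rayleigh friction confined below sigma_b."""
    return np.maximum(0.0, (sigma - SIGMA_B) / (1.0 - SIGMA_B)) / TAU_FRIC_F

print(t_eq_hs(1.0e5, np.deg2rad(0.0)))   # equatorial surface value, 315 K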
The time evolution of the HS climate simulated using LFRic-Atmosphere is shown in Fig.ย <ref> in terms of integral metrics of the atmosphere: total mass, total axial angular momentum, and total kinetic energy.
It is clear from Fig.ย <ref>a that LFRic-Atmosphere conserves the total atmospheric mass as stated in Sec.ย <ref>.
Likewise, the angular momentum curve is almost flat indicating good conservation properties of the model (Fig.ย <ref>b).
This is particularly important for an accurate representation of the zonal jets that dominate the large-scale circulation of the free troposphere <cit.>.
Total kinetic energy reaches a peak in the first hundred days and then exhibits small-amplitude fluctuations around an overall constant level (Fig.ย <ref>c).
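These three integrals can be written schematically as below; this is a generic calculation on a grid with precomputed cell volumes and the shallow-atmosphere form of the axial angular momentum, shown for illustration only, and is not the diagnostic evaluated on LFRic-Atmosphere's native mesh.

import numpy as np

def global_integrals(rho, u, v, w, lat, cell_volume, radius, omega):
    """Schematic global integrals of total mass, axial angular momentum and kinetic energy.

    All fields are arrays on the same 3D grid; `lat` is latitude in radians
    (broadcastable to the field shape) and `cell_volume` holds cell volumes in m3."""
    mass = np.sum(rho * cell_volume)
    # Shallow-atmosphere axial angular momentum per unit mass:
    # (u + omega * a * cos(lat)) * a * cos(lat), with a the planetary radius.
    aam = np.sum(rho * (u + omega * radius * np.cos(lat)) * radius * np.cos(lat) * cell_volume)
    kinetic_energy = np.sum(0.5 * rho * (u ** 2 + v ** 2 + w ** 2) * cell_volume)
    return mass, aam, kinetic_energy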
Once in steady state, the HS climate is characterised by a zonally symmetric temperature distribution and two prograde zonal jets with the average speed reaching 31 m s^-1 (Fig. <ref>).
Our results are in good agreement with the original study, which used a GCM with a finite-difference core <cit.>, though the near-surface temperature stratification is generally more stable in LFRic-Atmosphere, while the upper atmosphere does not drop below 190 K as it does in the original study.
It also agrees well with the finite-difference lat-lon grid GCM benchmarked in <cit.>.
Compared to its predecessor, the UM <cit.>, LFRic-Atmosphere produces a very similar temperature and wind distribution (Fig.ย <ref>).
The largest temperature difference between LFRic-Atmosphere and the UM is in the equatorial lower troposphere, indicating a generally colder surface climate in LFRic-Atmosphere (compare Fig.ย <ref>a to Fig.ย <ref>a).
Additionally, the upper atmosphere is a few degrees colder at the poles, resulting in a weaker equator-pole temperature gradient in LFRic-Atmosphere than that in the UM (note the 210 K isotherm in the same figures).
Nevertheless, the shape of the zonal mean temperature distribution in LFRic-Atmosphere matches that in the UM well.
The differences in the zonal mean eastward wind speed are negligible between LFRic-Atmosphere and the UM, and the dominant jets are only 3 m s^-1 slower in LFRic-Atmosphere than in the UM (compare Fig. <ref>b and Fig. <ref>b).
We thus conclude that LFRic-Atmosphere produces qualitatively good results for the HS case.
ยง.ยง Earth-Like
Another test used to represent an idealised temperature distribution in the Earth's atmosphere was suggested by <cit.> and then used to benchmark planetary climate models by e.g. <cit.>, <cit.>, and <cit.>.
The Earth-Like (EL) test is formulated such that the equilibrium temperature has two regions: a troposphere, where the temperature linearly decreases with height, and a stratosphere, where temperature is constant with height.
It can be considered a variation of the HS benchmark.
The EL equilibrium temperature profile is given by
T_eq=T_vert+ฮฒ_tropฮ T_horiz(1/3-sin ^2 ฯ),
where
T_vert= T_surf-ฮ_trop (z_stra +z-z_stra/2)
+([ฮ_trop (z-z_stra)/2]^2+ฮ T_vert^2)^1/2, z โค z_stra
T_surf-ฮ_trop z_stra+ฮ T_vert, z>z_stra,
and T_surf=288 K is the surface temperature, Γ_trop=6.5×10^-3 K m^-1 is the tropospheric lapse rate, and Δ T_vert is effectively an offset to smooth the transition between the troposphere (with a finite lapse rate) and the (isothermal) stratosphere.
Finally, z is height and ฯ is latitude.
Further, the latitudinal temperature gradient changes with height:
ฮฒ_trop= max{0, sinฯ(ฯ-ฯ_stra)/2(1-ฯ_stra)}.
z_stra=12 and ฯ_stra are the locations of the tropopause in z and ฯ coordinates, respectively.
The timescale of the temperature forcing (Eq.ย <ref>) is set to a constant ฯ_rad=15ย days, while the timescale and profile of the wind forcing (Eq.ย <ref>) is the same as in the HS case (Sec.ย <ref>), following <cit.>.
The rest of the parameters are the same as in HS test and are given in Tableย <ref>.
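A compact transcription of the EL recipe is sketched below for illustration; the values flagged as assumed (the smoothing offset, σ_stra and Δ T_horiz) are placeholders rather than the exact configuration values, and the code is not LFRic source.

import numpy as np

T_SURF_EL = 288.0       # K
GAMMA_TROP = 6.5e-3     # K m-1
Z_STRA = 12.0e3         # m
DT_VERT_EL = 2.0        # K, smoothing offset (assumed value)
SIGMA_STRA = 0.12       # tropopause in sigma coordinates (assumed value)
DT_HORIZ_EL = 60.0      # K (assumed value)

def t_eq_el(z, sigma, lat):
    """Earth-Like equilibrium temperature (z in m, lat in radians)."""
    below = (T_SURF_EL - GAMMA_TROP * (Z_STRA + 0.5 * (z - Z_STRA))
             + np.sqrt((0.5 * GAMMA_TROP * (z - Z_STRA)) ** 2 + DT_VERT_EL ** 2))
    above = T_SURF_EL - GAMMA_TROP * Z_STRA + DT_VERT_EL
    t_vert = np.where(z <= Z_STRA, below, above)
    beta = np.maximum(0.0, np.sin(np.pi * (sigma - SIGMA_STRA) / (2.0 * (1.0 - SIGMA_STRA))))
    return t_vert + beta * DT_HORIZ_EL * (1.0 / 3.0 - np.sin(lat) ** 2)

print(t_eq_el(0.0, 1.0, np.deg2rad(45.0)))   # a mid-latitude surface value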
Like in the HS case, the spin-up of the EL test is under 200 days and the key atmospheric quantities โ total mass, angular momentum, kinetic energy โ are conserved well (Fig.ย <ref>).
Our steady-state LFRic-Atmosphere results agree well with the previous studies <cit.>: the temperature field is zonally symmetric and has a near-surface equator-pole gradient of up to 50 K, and the wind structure has two zonal jets reaching a magnitude of 26 m s^-1.
Keep in mind that those studies show a snapshot of their simulations, not a long-term average, resulting in larger extremes in these fields.
Compared to the UM simulations, LFRic-Atmosphere produces exactly the same temperature distribution and almost exactly the same wind pattern (compare Fig.ย <ref> to Fig.ย <ref>).
In summary, LFRic-Atmosphere reproduces the EL climate sufficiently well.
ยง.ยง Tidally Locked Earth
Exoplanets in close-in short-period orbits around M-dwarf stars are currently the best targets for atmospheric detection and characterisation <cit.> and they are likely to be tidally locked because of the small planet-star separation <cit.>.
To mimic a synchronously rotating planet tidally locked to its host star, the Tidally Locked Earth (TLE) benchmark was introduced by <cit.> and subsequently used by <cit.>.
The TLE setup consists of introducing a strong longitudinal (ฮป) asymmetry in the temperature forcing by replacing the -sin^2ฯ term in the HS equilibrium temperature profile (Eq.ย <ref>) with +cos(ฮป-ฮป_sub)cosฯ:
T_eq= max{T_stra,[T_surf+ฮ T_horizcos(ฮป-ฮป_sub)cosฯ..
..-ฮ T_vertlog(p/p_0) cos ^2 ฯ](p/p_0)^ฮบ}.
Here the notations are the same as above, and ฮป_sub is the longitude of the substellar point, i.e. the centre of the day side of the planet.
Note that <cit.> used λ_sub=270°, while <cit.>, <cit.> and <cit.> used λ_sub=180°.
In the present paper, we use λ_sub=0° in alignment with the THAI setup (Sec. <ref>).
Since we are assuming a hypothetical tidally locked Earth, we also slow down the rotation rate so that one planetary year is equal to one planetary day: ฯ_TLE = ฯ_HS,EL / 365 (Tableย <ref>).
The rest of the parameters are the same as in the HS case (Sec.ย <ref>).
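The corresponding change to the Held-Suarez recipe is small, as the sketch below illustrates; again this is an illustrative NumPy transcription, with Earth's rotation rate assumed for ω_HS,EL.

import numpy as np

# Held-Suarez constants reused by the TLE recipe (see the HS sketch above).
T_STRA, T_SURF, DT_HORIZ, DT_VERT = 200.0, 315.0, 60.0, 10.0   # K
P0, KAPPA = 1.0e5, 2.0 / 7.0

def t_eq_tle(p, lat, lon, lon_sub=0.0):
    """Tidally Locked Earth equilibrium temperature: the HS -sin^2(lat) term is
    replaced by +cos(lon - lon_sub) * cos(lat); angles in radians, p in Pa."""
    t = (T_SURF + DT_HORIZ * np.cos(lon - lon_sub) * np.cos(lat)
         - DT_VERT * np.log(p / P0) * np.cos(lat) ** 2) * (p / P0) ** KAPPA
    return np.maximum(T_STRA, t)

OMEGA_TLE = 7.292e-5 / 365.0          # s-1: Earth's rotation rate (assumed) divided by 365
print(t_eq_tle(1.0e5, 0.0, 0.0))      # hottest surface value, at the substellar point (375 K)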
Following <cit.>, we use a sponge layer at the top of the model domain to damp the vertical velocity and improve the model stability.
This is done via an extra term in the vertical velocity equation <cit.> that acts above a threshold height (20 km).
The damping coefficient is set to 0.05 (Tableย <ref>).
In an additional sensitivity test (not shown), we switched the damping off, which did not noticeably affect the resulting climate, offering a promising suggestion that LFRic-Atmosphere could be more numerically stable in tidally locked setups.
However, to stay close to <cit.> and the tidally locked cases in Sec.ย <ref>, we choose to keep the damping layer for the TLE test.
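The damping itself is a simple Rayleigh-type relaxation of the vertical velocity above the threshold height; the sketch below illustrates the idea, with the quadratic ramp being an assumed shape for illustration rather than the exact profile of the scheme cited above.

import numpy as np

def damp_vertical_velocity(w, z, dt, z_sponge=20.0e3, z_top=40.0e3, coeff=0.05):
    """Relax the vertical velocity towards zero above z_sponge (heights in m).

    The quadratic ramp used here is an illustrative choice, not the exact profile."""
    ramp = np.clip((z - z_sponge) / (z_top - z_sponge), 0.0, 1.0) ** 2
    return w / (1.0 + dt * coeff * ramp)

w = np.array([0.1, 0.1, 0.1])                    # m s-1
z = np.array([10.0e3, 25.0e3, 38.0e3])           # m
print(damp_vertical_velocity(w, z, dt=1800.0))   # damping acts only above 20 km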
The LFRic-Atmosphere simulation of the TLE case conserves the mass, kinetic energy, and axial angular momentum (AAM) well: there is no discernible long-term drift in any of these variables (Fig.ย <ref>).
The AAM evolution appears to have larger fluctuations (green curve in Fig.ย <ref>b), but this is because for display purposes AAM is multiplied by 365 to account for the slower rotation rate in TLE.
The total kinetic energy again takes about 200 days to reach an equilibrium level.
The time mean near-surface temperature field has a dipole distribution with the hot spot centred at or near the substellar point and the coldest region occupying the antistellar point (Fig.ย <ref>d).
The day-night temperature contrast in the lower atmosphere exceeds 50 K, broadly matching the results in <cit.> and <cit.> (see also Fig. <ref>d).
The upper atmosphere (ฯ=0.22) is more isothermal, in comparison: the largest thermal gradient is about 10 times smaller (Fig.ย <ref>a), similar to that in <cit.> and in the latest version of the UM (Fig.ย <ref>a).
This zonally asymmetric temperature forcing drives a global circulation cell transporting heat from the day side to the night side of the planet, dominated by the divergent component of the wind field <cit.>.
Branches of this circulation are clear in the middle and right columns of Fig.ย <ref>.
The horizontal wind components show a vigorous low-level flow convergence at the substellar point, with the individual wind components reaching ≈20 m s^-1, in agreement with <cit.> and <cit.> (see also Fig. <ref>e,f).
The convergence area is elongated zonally, broadly tracing the isoline of the highest temperature.
This also corresponds to the region of the strongest updraughts <cit.>.
Aloft, the flow diverges at the substellar point, transporting energy poleward and to the night side at a wind speed reaching 33 m s^-1 (Fig. <ref>b,c).
The shape of the upper-level circulation exhibits certain departures from that shown in <cit.>: LFRic-Atmosphere predicts equatorial regions of counter-flow in the zonal wind (Fig.ย <ref>b), while it is purely divergent from the substellar point in <cit.>.
Compared to a more recent paper by <cit.>, the strongest meridional flow in the upper atmosphere simulated by LFRic-Atmosphere is larger by up to 9 m s^-1.
This difference could be due to a slightly different level (pressure rather than ฯ) used to display the data.
Compared to <cit.>, LFRic-Atmosphere produces a similar structure of the horizontal wind in both the lower and upper sections of the atmosphere, which also agrees well with the latest version of the UM presented in Fig.ย <ref>.
Overall, we conclude that LFRic-Atmosphere reproduces the TLE case sufficiently close to previous well-established GCMs, serving as a necessary prelude to a more complex and computationally demanding model setup for a terrestrial tidally locked exoplanet.
Such a setup replaces the analytic forcing terms with interactive physical parameterisations, as discussed in Sec. <ref>.
ยง THAI EXPERIMENTS
While LFRic-Atmosphere performs well in experiments where its dynamical core is forced by the analytic temperature profiles discussed above, we need to test its ability to simulate atmospheres on rocky exoplanets with the full suite of physical parameterisations.
In this section we show that LFRic-Atmosphere is able to reproduce the results of the recent GCM intercomparison for a tidally locked exoplanet, the TRAPPIST-1 Habitable Atmosphere Intercomparison <cit.>.
This section is structured as follows.
After a brief description of the THAI protocol, we include a summary of LFRic-Atmosphere's physical parameterisations, which are ported from the UM (Sec.ย <ref>).
The results are then presented in Sec.ย <ref> and <ref>.
ยง.ยง THAI protocol
With exoplanet atmosphere observations being extremely scarce compared to those of Earth's atmosphere, multi-model intercomparisons offer a way to develop and validate GCMs for exoplanets <cit.>.
This has been performed at varying scopes for hot Jupiters in <cit.>, for hypothetical tidally locked rocky planets in <cit.> and for TRAPPIST-1e <cit.>, while several more are currently in progress under the Climates Using Interactive Suites of Intercomparisons Nested for Exoplanet Studies (CUISINES) umbrella <cit.>.
In THAI, the pilot CUISINES project, four GCMs were used to simulate four types of potential atmospheres of TRAPPIST-1e, which is a confirmed rocky exoplanet and a primary target for future atmospheric characterisation <cit.>.
The full THAI protocol is given in <cit.>, but we repeat the key details here for completeness.
The host star, TRAPPIST-1, is an ultra-cool M-dwarf with a temperature of 2600 K and a spectrum taken from BT-Settl with Fe/H=0 <cit.>.
TRAPPIST-1e is assumed to be tidally locked to the star, so that the planet's orbital period is equal to its rotation period (6.1ย days, see Tableย <ref>).
Two experiments are designed to be dry, Benย 1 and Benย 2, acting primarily as benchmarks for the dynamical cores and radiative transfer modules.
Benย 1 is a colder climate with a N2 atmosphere and 400 ppm of CO2, while Benย 2 is a warmer climate with CO2 atmosphere; both of them have a mean surface pressure of 1 bar <cit.>.
Their moist counterparts, Habย 1 and Habย 2, are designed to represent habitable climate states, in this context having an active hydrological cycle with H2O as the condensible species <cit.>.
The bottom boundary is assumed to be a flat land-only surface in the Ben experiments, or an aquaplanet with infinite water supply (slab ocean) in the Hab experiments.
In the Ben cases, the surface bolometric albedo is fixed at 0.3, and the heat capacity is 2×10^6.
In the Hab cases, the albedo is 0.06 for open water (above the freezing temperature) and 0.25 for sea ice (below the freezing temperature), while the heat capacity of the slab ocean is 4×10^6.
In all experiments, the roughness length is set to 0.01 m for momentum and to 0.001 m for heat.
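Collected in one place, these surface settings can be summarised as a plain dictionary; the key names below are ours for illustration, not LFRic namelist entries, and the heat-capacity values are quoted as in the THAI protocol.

THAI_SURFACE = {
    "Ben": {                            # dry benchmarks: flat land-only surface
        "albedo": 0.30,
        "heat_capacity": 2.0e6,         # protocol value; units as in the THAI protocol
        "roughness_length_momentum": 0.01,
        "roughness_length_heat": 0.001,
    },
    "Hab": {                            # moist cases: slab-ocean aquaplanet
        "albedo_open_water": 0.06,
        "albedo_sea_ice": 0.25,
        "heat_capacity": 4.0e6,         # protocol value; units as in the THAI protocol
        "roughness_length_momentum": 0.01,
        "roughness_length_heat": 0.001,
    },
}
print(THAI_SURFACE["Hab"]["albedo_sea_ice"])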
All simulations start from a hydrostatic isothermal (T_init=300 K) dry atmosphere at rest.
LFRic-Atmosphere is then integrated until it reaches a statistically steady state, qualitatively determined by the absence of a long-term trend in global mean fields such as the surface temperature and the net top of the atmosphere (TOA) radiative flux.
In practice, we have integrated LFRic-Atmosphere for 2400 and 3600 days for the Ben and Hab cases, respectively.
In the analysis below (Sec. <ref> and <ref>), we focus on the time-mean state, for which we use instantaneous daily output from the last 610 days (100 TRAPPIST-1e orbits).
ยง.ยง Model setup
Overall, for THAI experiments LFRic-Atmosphere's dynamical core is configured similarly to that used in the Temperature Forcing experiments (Sec.ย <ref>).
Namely, we use the C48 mesh with a multigrid preconditioner (Sec.ย <ref>), while most of the FEM and transport scheme settings are the same.
In the vertical, we use 38 levels, quadratically stretched from 0 to ≈40 km in height, with a higher resolution closer to the surface.
Note that the model top is lower than that used in the UM simulations of the Benย 1 and Habย 1 cases <cit.>.
However, the vertical resolution within the lowest 40 km is the same, and it has been used in previous exoplanet studies based on the UM <cit.>.
LFRic-Atmosphere inherits most of the existing and well-tested physical schemes from the UM <cit.>.
They include parameterisations of radiative transfer, subgrid-scale turbulence, convection, large-scale clouds, microphysics, gravity wave drag, and air-surface interaction.
The suite of parameterisations used in our simulations is the same as that used in the UM THAI experiments, so the reader is referred to <cit.> and <cit.> and references therein for more details.
Here we give only a short overview in a form of Tableย <ref>.
Note that while the parameterisations used in the UM and LFRic-Atmosphere are the same, the science configuration, i.e. how these parameterisations are configured, is different: LFRic-Atmosphere uses the latest Global Atmosphere 9.0 (GA9.0) configuration, while the UM was configured to use the GA7.0 configuration (with appropriate modifications according to the THAI protocol).
This has been the result of extensive research and UM validation, addressing various model biases and code bugs.
The GA9.0 configuration is soon to become operational at the Met Office, and a detailed description of it is currently in preparation.
The change between GA7.0 and GA9.0 introduces an additional source of differences between the UM and LFRic-Atmosphere.
This is exacerbated by the fact that the climate of TRAPPIST-1e is prone to bistability, which could be triggered by small changes in the model configuration <cit.>.
To check this, we re-ran the THAI cases using the UM in the GA9.0 configuration.
While a detailed analysis of these experiments is out of scope of the present paper, the key result is that in the colder, nitrogen-dominated atmospheres (Benย 1 and Habย 1 cases) the circulation regime changes, but in the warmer, CO2-dominated atmospheres of Benย 2 and Habย 2 it stays the same.
Overall though, we confirm that predictions of the key global climate metrics by the UM are closer to those by LFRic-Atmosphere when the GA9.0 configuration is used.
In the following two sections, we present results of the dry (Sec.ย <ref>) and moist (Sec.ย <ref>) THAI simulation pairs.
We analyse the key metrics of the steady-state climate in these simulations, from radiative fluxes to surface temperature, to global circulation, and, in the Hab cases, to moisture variables such as water vapour, cloud content and fraction.
Almost all of these metrics are shown to be within a few percent difference compared to those predicted by the UM (GA7.0) and within the overall inter-model spread in the THAI project <cit.>, validating the application of LFRic-Atmosphere to terrestrial exoplanets of this nature.
ยง.ยง Dry cases
The Benย 1 and Benย 2 cases are the key step between the temperature forcing experiments (Sec.ย <ref>) and a full-complexity moist Hab experiments (Sec.ย <ref>).
These dry cases allow us to show that LFRic-Atmosphere's dynamical core (Sec. <ref>), coupled only to radiative transfer and turbulence schemes, (i) is numerically stable over sufficiently long integration periods, and (ii) produces results close to those in the UM.
Nevertheless, there are small LFRic-Atmosphere to UM differences in the time-mean global diagnostics, most notably in the Ben 1 case: e.g. LFRic-Atmosphere is ≈6 K colder on average (Table <ref>), while its circulation regime is dominated by a single superrotating jet (see more details below).
Since the atmosphere is dry and cloud-free, the stellar (or shortwave, SW) radiation coming from the host star is absorbed and scattered only by N2 and CO2 molecules in the Benย 1 case and only by CO2 in the Benย 2 case.
Most of it still reaches the planet's surface, symmetrically illuminating the day side of the planet as shown in Fig.ย <ref>a,b.
For Ben 1, the downward SW flux reaches 862 W m^-2, while for Ben 2 it is about a quarter smaller, 637 W m^-2.
Thus, due to a much higher CO2 concentration, Benย 2 atmosphere absorbs substantially more SW radiation due to molecular CO2 absorption and to a lesser extent due to collision-induced absorption, which is consistent with <cit.>.
30% of the SW radiation flux is subsequently reflected by the planet's surface because of the fixed surface albedo.
The surface properties, most importantly the albedo and heat capacity, are the same in both Benย 1 and Benย 2 experiments, so the differences in the LW radiation flux at the surface (Fig.ย <ref>e,f) are due to the temperature differences of the surface (Fig.ย <ref>a,b) and the atmosphere.
The net upward LW flux is almost twice as large in the Benย 1 case than that in the Benย 2 case.
This difference also manifests at the top-of-atmosphere (TOA), where the outgoing LW flux reaches 485 and 329 W m^-2 in the Ben 1 and Ben 2 experiments, respectively (Fig. <ref>i,j).
However, the mean TOA LW flux is smaller in the Ben 1 case (161 W m^-2) than in the Ben 2 case (182 W m^-2), demonstrating that on average the Ben 2 planet is warmer due to a larger greenhouse effect.
These numbers agree well (within ±1 W m^-2) with the UM results and are quite close to the results of the other three GCMs analysed in <cit.>.
LFRic-Atmosphere likewise reproduces the surface temperature features close to those in the UM simulations of the Ben cases <cit.>.
Both the mean values and extrema depart only by a few K when compared to the UM (Table <ref>) and are within the inter-model variability reported in the intercomparison <cit.>.
At the same time, the day-side temperature distribution in the Benย 1 case (between 0 and 90 degrees longitude) is slightly different to that in the UM (GA7.0), because of the structurally different circulation regime.
Indeed, when the Ben 1 case zonal mean zonal wind predicted by LFRic-Atmosphere (Fig. <ref>a) is compared to that of the UM (their Fig. 6a), it becomes obvious that the troposphere (the lowest ≈16 km) is dominated by a single equatorial superrotating jet in our study, unlike the two high-latitude jets produced by the UM (GA7.0).
The former is sometimes labelled as the Rhines-rotator regime, while the latter is the fast-rotator regime in the nomenclature of <cit.>.
Regime transitions in Earth-like atmospheres were modelled previously by <cit.>, <cit.>, <cit.> and <cit.>.
Evidence from these studies suggests that TRAPPIST-1e has a combination of planetary radius and rotation rate that places it on the edge between two distinct circulation patterns <cit.>.
Thus even minor changes in physical parameterisations within one GCM <cit.> or using different GCMs <cit.> can lead to a fundamentally different regime.
We confirm this by re-running the UM in the same science configuration as that used in LFRic-Atmosphere (GA9.0), which leads to the regime change in the Benย 1 case and an overall better agreement between the UM and LFRic-Atmosphere (not shown).
Circulation regime bistability in the case of TRAPPIST-1e specifically was recently explored in more depth by <cit.>, albeit for a moist case (THAI Habย 1).
One of the conclusions of that study was that the regime bistability is likely driven by moisture feedbacks in the GCM, and dry simulations should be less prone to flipping between single-jet and double-jet regimes.
Our LFRic-Atmosphere simulation results provide a valuable sensitivity experiment, which was not possible to do using the UM.
Namely, we effectively swapped the dynamical core while leaving the physical parameterisations the same (and upgrading their science configuration to GA9.0).
Even for the dry atmosphere of the Benย 1 case, this was enough for the global tropospheric circulation to settle into a qualitatively different state.
We aim to further investigate the circulation bistability on planets like TRAPPIST-1e using LFRic-Atmosphere in a future work, but it is beyond the scope of this paper: here we primarily ensure that LFRic-Atmosphere is numerically stable and reproduces an atmospheric state for the THAI setup within the inter-model variability of the original THAI GCMs.
ยง.ยง Moist cases
The Hab THAI cases present another layer of GCM complexity, i.e. the inclusion of a hydrological cycle.
The imprint of clouds is immediately seen in the maps of radiative fluxes in Fig. <ref> (two rightmost columns): there is a distinct shift of the `hot spot' to the west of the substellar point (λ_sub=0°) due to the reflection of stellar radiation by the day-side cloud cover concentrated at the substellar point (Fig. <ref>g,h).
In the Hab 1 scenario, the downward SW flux that reaches the planet's surface peaks at 463 W m^-2 (Fig. <ref>c), much higher than the 296 W m^-2 in the UM, but closer to that in the other three THAI GCMs <cit.>.
This can be explained by the cloud cover differences on the day side of the planet: compared to the UM, LFRic-Atmosphere produces a larger cloud gap to the west of the substellar point (Fig.ย <ref>g).
This crescent-shaped region of relatively low cloudiness is mostly due to a reduction in cloud ice, while cloud liquid water has a fairly uniform east-west distribution on the day side (Fig.ย <ref>c,e).
Consequently, the surface temperature maximum is about 10 K higher in LFRic-Atmosphere (Fig. <ref>c) than in the UM (their Fig. 3d).
While LFRic-Atmosphere reproduces the overall temperature distribution, it predicts the night-side cold spots to be warmer by 17 K than those found using the UM (also raising the global mean temperature substantially).
However, the UM was a cold outlier in the original Hab 1 experiment, so LFRic-Atmosphere is actually closer to the other three GCMs that participated in the THAI project (and further from the relatively cold ExoPlaSim simulations).
Concomitantly, LFRic-Atmosphere produces a higher amount of cloud cover on the night side of the planet, most noticeably over the coldest spots (Fig.ย <ref>g).
Another potential source of LFRic-Atmosphere to UM departure is the uniform horizontal grid spacing in GungHo, which has a lower resolution over the high-latitude night-side gyres than the UM's lat-lon grid.
Because of the warmer surface, especially on the night side of the planet, the TOA outgoing LW radiation flux is also larger in LFRic-Atmosphere's Habย 1 case than that found using the UM, though they have a very similar spatial pattern (Fig.ย <ref>k).
This again makes LFRic-Atmosphere agree better with the other three THAI GCMs, especially LMD-G <cit.>.
With regard to the net thermal flux at the surface, the LFRic-Atmosphere to UM mean difference is about 7 W m^-2 (Fig. <ref>g).
The global tropospheric circulation in the Habย 1 scenario is in the same regime as that in the Benย 1 and Benย 2 scenarios: a strong prograde flow at the equator (Fig.ย <ref>c).
When compared to the UM <cit.>, the superrotating jet in LFRic-Atmosphere has a more defined core confined to the low latitudes, but the overall pattern is the same (see quivers in Fig.ย <ref>c).
Interestingly, the stratospheric zonal wind structure differs markedly in LFRic-Atmosphere from that in the UM: in LFRic-Atmosphere, there is a pronounced eastward jet in the lower stratosphere (≈20-32 km), superseded by a counter-rotating westward flow aloft (Fig. <ref>c).
The UM, on the other hand, produces a weak subrotation throughout most of the tropical stratosphere, with two eastward jets in high latitudes similar to those in the Benย 1 case in LFRic-Atmosphere (Fig.ย <ref>a).
There are two main causes for this inter-model disagreement: (i) a newer science configuration (GA9.0) used in LFRic-Atmosphere, and (ii) a different spatial extent of the so-called sponge layer near the model top.
As discussed above, we confirm the former by running the UM with the GA9.0 configuration, which results in a double-jet tropospheric circulation regime, accompanied by the weakening of the stratospheric flow (not shown).
The second point is confirmed by an additional sensitivity experiment with the UM: a different shape of the sponge layer triggers the regime change yet again.
The role of the wind damping at the model top has been discussed in more detail in <cit.>; we delegate a full re-assessment of this problem to a future study.
In the Habย 2 scenario, LFRic-Atmosphere predicts a climate state quite similar to that in the original UM THAI simulation, and the inter-model differences are overall smaller than those for the Habย 1 case (Tableย <ref>).
The global average differences in the radiative fluxes at the surface, both SW and LW (Fig. <ref>d,h), are within a few W m^-2, which is within the inter-GCM spread in the THAI project <cit.>.
Moreover, the key spatial features are virtually indistinguishable between LFRic-Atmosphere and the UM.
The downward SW flux reaches its highest values to the west of the substellar point.
Both its maximum (299 W m^-2) and mean (68 W m^-2) are among the highest in the THAI ensemble (and closest to those for ExoCAM).
As Fig. <ref>l shows, the TOA LW radiation has a typical minimum east of the substellar point (180 W m^-2), corresponding to the highest cloud tops, which have the lowest emission temperature.
As a result, when bisected along the equator, the longwave flux has two pronounced peaks, similar to that in ExoCAM <cit.>.
The TOA LW flux distribution matches the pattern of the cloud content, especially the cloud ice (Fig.ย <ref>d).
Due to its warmer climate, the Habย 2 simulation has more cloud water (Fig.ย <ref>f) and less cloud ice than does the Habย 1 climate; the distribution of cloud particles around the planet is also more uniform.
The amount of total column cloud water in LFRic-Atmosphere agrees well with the mean values predicted by the UM, while the cloud fraction is a few percent higher (Fig.ย <ref>h here and Fig.ย 19 in ).
The overall spatial distribution is very similar between the models.
The Habย 2 tropospheric flow regime is likewise similar to that in Habย 1 (as well as the Ben scenarios), though the Habย 2 tropospheric jet is the strongest and widest among our THAI simulations (Fig.ย <ref>d).
In the zonal average, the prograde flow dominates the troposphere at all latitudes (Fig. <ref>d) and its speed exceeds 50 m s^-1.
Its wide latitudinal extent suggests that the circulation is close to a transition into the double-jet, or fast-rotator, regime <cit.>.
The stratospheric circulation exhibits a very weak secondary jet and a strong retrograde rotation aloft โ almost a mirror image of its tropospheric counterpart โ similar to the UM prediction for the Habย 2 case <cit.>.
Taking the results of all four THAI simulations together, we conclude that LFRic-Atmosphere is able to reproduce the results obtained by its forerunner, the UM.
Despite relatively minor departures in the global climate metrics, LFRic-Atmosphere stays well within the inter-model spread reported for the core THAI simulations <cit.> as well as in the follow-up GCM studies <cit.>.
From the technical perspective, we also see that thanks to the recent updates to GungHo, LFRic-Atmosphere is able to reproduce N2 or CO2-dominated exoplanetary climates with sufficient numerical stability and over sufficiently long simulations.
We have shown that LFRic-Atmosphere, the Met Office's next generation atmospheric model, reproduces global atmospheric circulation and climate in a variety of idealised planetary setups.
It does this sufficiently well to qualitatively match results obtained with other well-established exoplanet GCMs <cit.>.
Complementary to a more rigorous and extensive Earth-focused testing of the new model currently under way at the Met Office <cit.>, our findings provide a necessary first step in using LFRic-Atmosphere for a wide range of planetary atmospheres.
Here, we first apply three commonly used prescriptions of terrestrial planetary atmospheric circulation forced by an analytic temperature profile.
These temperature forcing benchmarks are the Held-Suarez test <cit.>, an Earth-like test <cit.>, and a hypothetical tidally locked Earth <cit.>.
Overall, LFRic-Atmosphere agrees well with its forerunner, the UM <cit.>, and with other 3D GCMs used in exoplanet modelling <cit.>.
At the same time, LFRic-Atmosphere conserves key integral characteristics of an atmospheric flow (total mass, angular momentum, and kinetic energy), thus passing a test that is especially important for our future simulations of gas giant atmospheres.
A higher level of model complexity is tested in the four simulations of the THAI protocol.
The THAI cases comprise dry or moist N2 or CO2-dominated setups <cit.>, which cover the key points of the parameter space in terms of atmospheric composition.
While we cannot judge which THAI GCM is more correct due to the absence of observations, we see that LFRic-Atmosphere reproduces the global climate sufficiently close to the original ensemble of the THAI GCMs <cit.> across the range of key metrics: the surface temperature, TOA and surface radiation fluxes, circulation patterns, and cloud cover.
The LFRic-Atmosphere to UM differences in the global mean surface temperature are relatively small, well within the THAI inter-model spread.
Generally, LFRic-Atmosphere simulations tend to be on the colder edge of the spectrum when compared to the four THAI GCMs (though warmer than the UM in the Hab 1 case).
The dominant wind pattern in the troposphere โ a prograde equatorial jet โ is similar in shape and strength to that reported for the UM for all cases but Benย 1 (Sec.ย <ref>).
The inter-model differences in the jet structure are likely a manifestation of the climate bistability <cit.> and will be the focus of a follow-up study.
The disagreements between LFRic-Atmosphere and the UM are due to the different dynamical core (including a different horizontal mesh), the updates in the science configuration from GA7.0 to GA9.0 (see Sec. <ref>), and the use of a different shape of the sponge layer near the model top, needed for numerical stability.
LFRic-Atmosphere's numerical stability and its ability to capture the salient climatic features opens a new avenue for applying it to rocky exoplanets, building and improving upon studies done with the UM in the recent years <cit.>.
Our LFRic-Atmosphere simulations also provide a valuable sensitivity experiment, which is possible to perform with very few GCMs: the physical parameterisations are effectively the same as in the UM's THAI simulations, but the dynamical core is completely different.
This LFRic-Atmosphere to UM intercomparison could be used more generally to test and debug various parameterisations used by both models.
In our future work, we aim to use LFRic-Atmosphere to investigate the atmospheric circulation on rocky planets in more detail.
At the same time, we have started adapting the model to a broader range of atmospheres, namely those on extrasolar gas giants following <cit.>.
We will then add an idealised parameterisation for hot Jupiter clouds <cit.> and a flexible chemistry scheme <cit.>.
These steps will allow us to simulate a variety of atmospheric processes on exoplanets at higher resolution and with better computational efficiency, and to compare our results with the data coming from the recently launched JWST as well as future observational facilities.
The present paper is a crucial milestone towards this future.
The LFRic-Atmosphere source code and configuration files are freely available from the Met Office Science Repository Service (<https://code.metoffice.gov.uk>) upon registration and completion of a software licence.
The UM and JULES code used in the publication has been committed to the UM and JULES code trunks, having passed both science and code reviews according to the UM and JULES working practices, in the UM/JULES versions stated in the paper.
Scripts to post-process and visualise the model data are available as a Zenodo archive: <https://doi.org/10.5281/zenodo.7818107>.
The scripts depend on the following open-source Python libraries: <cit.>, <cit.>, (<https://github.com/SciTools-incubator/iris-esmf-regrid>), <cit.>, <cit.>, <cit.>.
A post-processed dataset is provided in a Zenodo archive: <https://doi.org/10.5281/zenodo.7818107>.
Along with visualisation scripts, it contains LFRic-Atmosphere output, averaged in time and interpolated to a common lat-lon grid.
It also contains time mean UM data shown in the Appendixย <ref>.
ยง TEMPERATURE FORCING CASES IN THE LATEST VERSION OF THE UM
In this section, we include supplementary figures showing the steady-state climate in the Temperature Forcing cases simulated by the latest version of the UM.
These newest UM simulations are shown in Fig.ย <ref>โ<ref> using the same contour ranges and can thus be easily compared with their LFRic-Atmosphere counterparts in Fig.ย <ref>โ<ref>, respectively.
For the original versions of these plots, see Fig.ย 2, 3, 8, 14 and 16 in <cit.>.
To produce these figures, we used the same model grid, time step, and experiment duration as those used in <cit.>.
DS & NM led the paper. The rest of the co-authors provided assistance in developing the model and supporting its architecture. The paper was reviewed and contributed to by all co-authors.
The authors declare that they have no conflict of interest.
Material produced using Met Office Software.
We acknowledge use of the Monsoon2 system, a collaborative facility supplied under the Joint Weather and Climate Research Programme, a strategic partnership between the Met Office and the Natural Environment Research Council.
Additionally, some of this work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (<www.dirac.ac.uk>).
The equipment was funded by BEIS capital funding via STFC capital grants and an STFC DiRAC Operations grant.
DiRAC is part of the National e-Infrastructure.
This work was supported by a UKRI Future Leaders Fellowship.
This work was also partly funded by the Leverhulme Trust through a research project grant.
|
http://arxiv.org/abs/2306.07426v1
|
20230612210212
|
Izindaba-Tindzaba: Machine learning news categorisation for Long and Short Text for isiZulu and Siswati
|
[
"Andani Madodonga",
"Vukosi Marivate",
"Matthew Adendorff"
] |
cs.CL
|
[
"cs.CL"
] |
Izindaba-Tindzaba: Machine learning news categorisation for Long and Short Text for isiZulu and Siswati
Madodonga, Andani
Department of Computer Science, University of Pretoria, South Africa
[email protected]
Marivate, Vukosi
Department of Computer Science, University of Pretoria, South Africa
[email protected]
Adendorff, Matthew
Open Cities Lab
[email protected]
ยง ABSTRACT
Local/Native South African languages are classified as low-resource languages. As such, it is essential to build resources for these languages so that they can benefit from advances in the field of natural language processing. In this work, the focus was to create annotated news datasets for the isiZulu and Siswati native languages based on a news topic classification task and to present the findings from baseline classification models. Due to the shortage of data for these native South African languages, the datasets that were created were augmented and oversampled to increase the data size and overcome class imbalance. In total, four different classification models were used, namely Logistic Regression, Naive Bayes, XGBoost and LSTM. These models were trained on three different word representations, namely Bag-Of-Words, TFIDF and Word2vec. The results of this study showed that XGBoost, Logistic Regression and LSTM, trained on Word2vec, performed better than the other combinations.
Keywords: South African native Languages, Low Resources Languages, Data Augmentation, Topic Classification, News Categorisation
ยง INTRODUCTION
Natural Language Processing (NLP) is a subfield of artificial intelligence, linguistics and computer science that focuses on enabling computers to process natural language <cit.>. One of the areas where NLP has been beneficial to people is machine translation, the task of translating text from one language to another; here, NLP helps the computer or machine attempt the conversion from one language to another. NLP can also assist in learning and predicting sentiment/opinion from sentences or text. This NLP capability is utilised by companies to understand how customers feel and what they think about the company's products and services through the analysis of their social media posts and comments. Furthermore, the chatbots used in the customer service space are another example of an NLP application
<cit.>. Contextual chatbots and Virtual Text Assistant are now widely used but they mostly understand a limited number of languages, such as English. South African native languages do not have enough resources to be used to built such contextual Chatbots and Virtual Text Assistant. Therefore, the resources for native languages need to be created so that they can be used to build software agents that understand South African native languages <cit.>.
South Africa is a multilingual country with eleven languages (two of which are European and nine are African languages); the African languages are Sepedi, Sesotho, Setswana, Siswati, Tshivenda, Xitsonga, isiZulu, isiNdebele and isiXhosa, while the European languages are English and Afrikaans. It is important to note that these languages are official in South Africa <cit.>. In South Africa, we have a challenge with the nine African languages because they are resource-poor: there is a shortage of curated and annotated corpora to enable them to benefit from Natural Language Processing. Therefore, the purpose of this study is to focus specifically on corpus creation and annotation for isiZulu and Siswati and to perform topic classification tasks on the data.
ยง CRITICAL NATURAL LANGUAGE PROCESSING COMPONENTS
Globalisation and the increase in digital communications have created the demand for NLP systems that enable fast communication between people speaking different languages. However, some languages are missing in these systems. For instance, there are roughly 7000 spoken languages on the planet and most of them are still not included in NLP systems, primarily because they do not have the labelled corpora needed to build those systems <cit.>. These languages with scarce or no resources are low-resourced languages <cit.>. The language resources include (but are not limited to) annotated corpora and core technologies. Examples of core technologies include lemmatisers, part-of-speech taggers and morphological decomposers <cit.>.
On the other hand, the languages with high resources are the ones that have most of the resources needed to build the NLP technology <cit.>.
The high-resourced languages include English, French, Finnish, Italian, German, Mandarin, Japanese, etc.
<cit.> and low-resourced languages include languages such as isiZulu, isiXhosa, Siswati, etc. <cit.>. A study by <cit.>, which focused on the low-resourced languages isiZulu and Siswati, stated that annotated corpora are one of the things that low-resourced languages lack. Thus, the isiZulu and Siswati datasets need to be annotated, as part of the process of making these languages accessible for NLP and of enriching these two languages. <cit.> defines data annotation as the process of labelling dataset(s), an important step when building machine learning models. <cit.> stated that manual data annotation is one of the most important, time-consuming, costly, and tedious tasks for NLP researchers. Therefore, automation tools are developed to perform these annotations.
The lack of curated and annotated data impedes the process of fighting the shortage of resources for low-resourced languages in the NLP space <cit.>. Besides, established NLP methods often cannot be transferred to these languages without such corpora <cit.>.
<cit.> collected the datasets of two closely related African languages - Kirundi and Kinyarwanda from two different sources. A total of 21268 and 4612 articles were annotated for Kinyarwanda and Kirundi respectively. The two datasets underwent a cleaning process that involved the removal of special language characters and stopwords.
The sources were newspapers and websites. These datasets were annotated, based on the title and content of the contained articles, into the following categories: Politics; Sport; Economy; Health; Entertainment; History; Technology; Tourism; Culture; Fashion; Religion; Environment; Education; and Relationship <cit.>. Hence, a very similar task was performed in this work as part of language resource creation.
ยง.ยง Data generation techniques for low-resourced languages
An existing approach utilised to mitigate the challenges of low-resourced language data is the language translation approach, whereby the low-resourced language gets translated into a resource-rich language <cit.>. However, in most cases this approach suffers from language biases and may be impractical to achieve in real life <cit.>. Sometimes direct translation may be impossible or inaccurate due to language differences; hence, the translated data will require manual processing thereafter, which is tedious and time-consuming. Manually creating data for low-resourced languages is time-consuming but a good approach; moreover, it introduces minimal language biases and is more accurate than translated datasets <cit.>.
Cross-lingual and transfer learning is one of the combinations of techniques frequently used or preferred in NLP due to its speed and efficiency <cit.>. This further serves to highlight why all languages must have NLP resources such as annotated data to avoid data simulations that have unfavourable
effects.
Data Augmentation is a method that generates a copy (or unique data) of the data by slightly altering the existing data <cit.>. It increases the size of small training data in ways that improve model performance <cit.>. Model performance is highly dependent on the quality and size of the training data. Data Augmentation addresses the issue of small training data that leads to the models losing their generalisability <cit.>.
Work by <cit.> had small datasets for the Sepedi and Setswana native languages, and incorporated word-embedding-based contextual augmentation to increase the data used to train classification models. Each training dataset was augmented 20 times while the test dataset remained unchanged.
In their study, the new data created replaced the words (based on context) in the sentences. Hence a new sentence was formed as a result of applying Contextual Data Augmentation. Furthermore, Data Augmentation improved the performance of the classifiers <cit.>. In this current study, the same Data Augmentation (word embedding-based augmentation) was performed on the Siswati and isiZulu dataset to increase the data size.
ยง.ยง Dealing with data imbalance
The Synthetic Minority Oversampling Technique (SMOTE) is another technique that can be adopted when learning on an imbalanced dataset, since it addresses the problem of class imbalance <cit.>. SMOTE works by generating synthetic examples for the minority class from values randomly picked within a defined neighbourhood in feature space: a minority class sample is selected, its k-nearest neighbours from the same minority class are obtained, and these k neighbours are then used to create the new synthetic examples <cit.>.
ยง.ยง Related work
Supervised learning models perform better on larger labelled datasets, which presents a challenge for low-resourced languages as they don't have enough data and annotating data can be expensive <cit.>. Most prior studies focused on developing parallel corpora between low- and high-resourced languages, but parallel corpora are often unavailable for some low-resourced languages <cit.>. Work by <cit.> identified low-resourced languages and investigated the idea of transfer learning for machine translation.
Since English and French are resource-rich languages, an English-French neural machine translation (NMT) model was initially trained <cit.>. Afterwards, this NMT model was used to initialise another NMT model for a low-resourced/high-resourced pair (e.g. Uzbek-English) <cit.>, thereby utilising transfer learning. In this case, the low-resourced languages investigated for transfer were Uzbek, Hausa, Turkish and Urdu. Transfer learning was shown to improve the BLEU (bilingual evaluation understudy) scores for low-resourced neural machine translation <cit.>.
Work by <cit.> explored transfer learning between the two low-resourced languages Turkish and Uzbek by first pairing each language with English and then generating the parallel data. The words were then split with Byte Pair Encoding (BPE) to maximise the overlapping vocabulary <cit.>. The model and word embeddings were trained on the first language pair (Turkish-English), and then the same model parameters and word embeddings were transferred to the model trained on the second language pair (Uzbek-English). This technique improved the BLEU score by 4.3% <cit.>.
The datasets of the low-resourced South African languages isiZulu (collected from Isolezwe and the National Centre for Human Language Technology, <www.sadilar.org>) and Sepedi (collected from the National Centre for Human Language Technology) were used to evaluate the performance of open-vocabulary models on small datasets; the evaluated models include n-grams, LSTM, RNN, FFNN and transformers. The performance of the models was evaluated using byte pair encoding (BPE).
The RNN performed better than the rest of the models on both the isiZulu and Sepedi datasets <cit.>. <cit.> explored machine translation through zero-shot learning, transfer learning and multilingual learning on two South African languages, namely isiZulu and isiXhosa, and one Zimbabwean language, Shona. The datasets were language pairs (parallel text), that is, English-to-Shona, English-to-Zulu, English-to-Xhosa and Zulu-to-Xhosa, with English-to-Zulu being the target pair since it has the smallest dataset (sentence pairs). Transfer learning and zero-shot learning did not outperform the multilingual model, which produced a BLEU score of 18.6 for the English-to-Zulu pair. Moreover, these results provide an avenue for the development and improvement of low-resource translation techniques <cit.>.
Work by <cit.> attempted to address the issue of the lack of clear guidelines for low-resourced languages in terms of collecting and curating data for use in the Natural Language Processing domain. In their investigation, two datasets of news headlines written in Sepedi and Setswana were collected, curated, annotated, and fed into machine learning classification models to perform text classification. The datasets were annotated by categorising the articles into the following categories based on context: Legal; General News; Sports; Politics; Traffic News; Community Activities; Crime; Business; Foreign Affairs <cit.>. The evaluation metric was the F1-score, a measure of model performance. One of the models, XGBoost, performed better than the other models <cit.>.
ยง DEVELOPING NEWS CLASSIFICATION MODELS FOR ISIZULU AND SISWATI LANGUAGES
In this section we discuss the data collection and cleaning processes together with the approach used to build the classification models.
ยง.ยง Data Collection, Cleaning and Annotation
We discuss the initial news data collection and annotation process. We further discuss the data collection process of the larger dataset that was used to build our word representations.
ยง.ยง.ยง News data collection and annotation
The isiZulu news data was collected from Isolezwe, which is a Zulu-language local newspaper. The news articles published online on the Isolezwe website (<http://www.isolezwe.co.za>) were scraped and stored in a csv file for further processing. The Siswati dataset (news headlines) was collected from the public broadcaster for South Africa, that is, the SABC news LigwalagwalaFM Facebook page (<https://www.facebook.com/ligwalagwalafm/>). The Siswati data was also scraped and stored in a csv file. Lastly, to build word representations, additional isiZulu and Siswati datasets were collected from SADILAR (<www.sadilar.org>) and the Leipzig Corpus (<https://wortschatz.uni-leipzig.de>) for the purpose of better generalising the word representations. We collected 752 items (full articles and titles) in isiZulu and 80 Siswati news headlines.
After the data collection process, we categorised the news items using the International Press Telecommunications Council (IPTC) News Categories (or codes)[<https://iptc.org/standards/newscodes/>]. The categories used were: 1. disaster, accident and emergency incident, 2. economy, business and finance, 3. education, 4. environment, 5. health, 6. human interest, 7. labour, 8. lifestyle and leisure, 9. politics, 10. religion and belief, 11. science and technology, 12. society, 13. sport, 14. weather, 15. arts, culture, entertainment and media, 16. crime, law and justice, 17. conflict, war and peace. We make the data, annotations and data statement available[<https://github.com/dsfsi/za-isizulu-siswati-news-2022>] [<https://doi.org/10.5281/zenodo.7193346>]. An example of an annotated isiZulu article is shown below:
Politics
UMENGAMELI we-ANC uMnuz Cyril Ramaphosa ugqugquzele abantu basePort Shepstone nezindawo ezakhele leli dolobha ukuthi bagcwalise iMoses Mabhida Stadium lapho ezothula khona umyalezo wakhe weJanuary 8 ngoMgqibelo aphinde athule nomhlahlandela weqembu wokuheha abavoti njengoba kuyiwa okhethweni.
URamaphosa ehambisana nabanye abaholi be-ANC esifundazweni uhambele kule ndawo izolo enxenxa abantu ukuthi batheleke ngobuningi kulo mgubho weqembu. Uphinde wathembisa ukuthi uzothula uhlelo lwakhe lokuthuthukisa izwe.
The category distributions of the isiZulu news (articles and titles) and the Siswati news titles are shown below. It was observed that the datasets suffer from class imbalance, small data size and short text (the isiZulu and Siswati titles/headlines). Therefore, the oversampling techniques SMOTE and Data Augmentation were applied to mitigate the class imbalance problem and also increase the data size.
For better modelling, class categories with few observations were removed, leaving the categories below:
1. crime, law and justice, 2. economy, business and finance, 3. education, 4. politics, 5. society for isiZulu; and 1. crime, law and justice, 2. arts, culture, entertainment and media, 3. education, 4. human interest, 5. society for Siswati. Since the number of class categories dropped to 5, the dataset size also dropped, to 563 (news articles and titles) for isiZulu and 68 (news titles) for Siswati.
The final datasets were cleaned and then used to build the classification models; prior to model building, however, word representations were created using the larger datasets.
ยง.ยง Data Preparation/Cleaning
All datasets collected in this work contained some noise such as single characters, extra white space, badly encoded characters, meaningless words, and special characters. This noise had to be removed before the datasets were fed into the models. Each of the cleaning steps that were followed is explained below:
* Single characters carry little meaning, so they were removed from the datasets.
* There were instances of multiple spaces between two words; those spaces were substituted with a single space.
* Some characters/words were not ASCII encoded, so those characters were decoded back to ASCII.
* Special characters refer to characters such as &%$, which are not accepted by the models. Hence they were also removed.
* The data contained combinations of letters that do not form any existing isiZulu/Siswati word, such as 'udkt', 'unksz' and 'unkk'. Based on this criterion, they were also removed to streamline the corpus and, as a result, improve the analysis.
Once the datasets were noise-free, each letter in the datasets was set to lowercase, resulting in clean datasets to be used in building the machine learning models; a sketch of these cleaning steps is given below.
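To make the cleaning procedure concrete, the snippet below sketches the steps listed above in Python. It is an illustration rather than the exact script used in this work; the junk-word list is only a small sample of the meaningless tokens mentioned above.

import re

# Illustrative sample of meaningless letter combinations mentioned above.
JUNK_WORDS = {"udkt", "unksz", "unkk"}

def clean_text(text: str) -> str:
    """Apply the cleaning steps described above to a single news item."""
    text = text.encode("ascii", errors="ignore").decode("ascii")   # keep only ASCII characters
    text = re.sub(r"[^A-Za-z\s]", " ", text)                       # drop special characters such as &%$
    text = re.sub(r"\b[A-Za-z]\b", " ", text)                      # drop single characters
    tokens = [t for t in text.split() if t.lower() not in JUNK_WORDS]
    return " ".join(tokens).lower()                                # collapse spaces and lowercase

print(clean_text("UMengameli   we-ANC &%$ x udkt ePalamende"))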
ยง.ยง.ยง Word Representations
As stated above, the larger datasets collected from SADILAR and the Leipzig Corpus for each language were used to create the word representations (vectorizers and embeddings). The pre-trained vectorizers were created to enable building classifiers with good generalisability in future. Therefore, from the collected corpora for each language, we created the following representations: Bag-Of-Words, TFIDF and Word2vec <cit.>.
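As an illustration, the three representations can be produced with standard libraries. The sketch below assumes scikit-learn and gensim; the tiny corpus is a placeholder for the larger SADILAR/Leipzig corpora, while the 20 000-feature limit and 300-dimensional embeddings follow the experimental setup described later.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from gensim.models import Word2Vec

corpus = ["indaba yezombusazwe epalamende", "umdlalo webhola kazwelonke"]  # placeholder corpus
tokenised = [doc.split() for doc in corpus]

bow = CountVectorizer(max_features=20000).fit(corpus)        # Bag-Of-Words vectorizer
tfidf = TfidfVectorizer(max_features=20000).fit(corpus)      # TFIDF vectorizer
w2v = Word2Vec(sentences=tokenised, vector_size=300, window=5, min_count=1)  # Word2vec embeddings

X_bow, X_tfidf = bow.transform(corpus), tfidf.transform(corpus)
embedding = w2v.wv["indaba"]                                 # 300-dimensional word vector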
ยง.ยง News Classification Models
We arbitrarily selected a few classification algorithms to train models to perform news topic classification on the isiZulu and Siswati datasets. The selected algorithms are Logistic Regression, XGBoost, Naive Bayes and LSTM.
We first performed the classification on the original datasets, and then applied the oversampling techniques, namely Data Augmentation and SMOTE, to address the class imbalance problem and increase the data size. The classification models were then executed again on the augmented and SMOTE datasets.
ยง EXPERIMENTS AND RESULTS
In this section we discuss the results obtained from the experiments performed, that is, the findings from the multiple combinations of word representations and classification models on the isiZulu and Siswati datasets. The findings presented here are baselines, since this work only provides guidelines for resource creation for low-resourced languages.
ยง.ยง Experimental Setup
A maximum token size of 20 000 was used for both the Bag-Of-Words and TFIDF vectorizers, whereas for Word2vec we used an embedding size of 300. For each of the 4 classification models, 5-fold cross validation was applied during model training. As we are creating baseline models and working on small datasets (not enough to split into training, validation and test sets), parameter optimisation was not performed in this work.
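The setup for the classical models can be reproduced along the lines below. This is a minimal sketch assuming scikit-learn and the xgboost package; the random matrix stands in for the vectorised news items and their five category labels, the LSTM is omitted, and no hyperparameter tuning is applied, in line with the setup above.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from xgboost import XGBClassifier

rng = np.random.RandomState(0)
X = rng.randint(0, 5, size=(100, 20)).astype(float)   # stand-in for BOW/TFIDF/Word2vec features
y = rng.randint(0, 5, size=100)                        # stand-in for the five news categories

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": MultinomialNB(),
    "xgboost": XGBClassifier(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1_weighted")  # 5-fold cross validation
    print(f"{name}: mean weighted F1 = {scores.mean():.3f}")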
ยง.ยง.ยง Baseline Experiments
In the baseline experiments, we trained the classification models using 5-fold cross validation on the original isiZulu and Siswati datasets and present the models' performance for each dataset. The results show that the Word2vec and LSTM combination performed very well on all datasets compared to the other models. The tables below show the classification results obtained on the original datasets.
ยง.ยง.ยง Augmentation
Data Augmentation is a technique used to increase the data size in order to improve the performance of machine learning classifiers <cit.>. The most common way to augment the data is to replace words or phrases in a sentence with their synonyms, where a synonym is derived by obtaining semantically close words <cit.>.
The Siswati and isiZulu datasets were augmented using this approach, where the original words in a sentence are replaced based on their contextual meaning. The augmentation was done by referencing word similarity from the Word2vec word embedding, as per <cit.>. Data Augmentation improved the performance of each model on all datasets compared to the original datasets. It remains a task to investigate the effectiveness and robustness of this Data Augmentation algorithm, which can be achieved by comparing the algorithm's results on well-resourced and low-resourced datasets.
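The augmentation can be sketched as follows: some words in a sentence are replaced with their nearest neighbours in the Word2vec space. The replacement probability, neighbour count and toy corpus below are illustrative assumptions rather than the exact algorithm of the cited work.

import random
from gensim.models import Word2Vec

sentences = [["indaba", "yezombusazwe", "epalamende"],
             ["umdlalo", "webhola", "kazwelonke"]]            # toy corpus
w2v = Word2Vec(sentences, vector_size=50, min_count=1, seed=0)

def augment(tokens, model, p_replace=0.3, topn=5):
    """Replace some words with semantically close words from the embedding space."""
    out = []
    for tok in tokens:
        if tok in model.wv and random.random() < p_replace:
            neighbours = [w for w, _ in model.wv.most_similar(tok, topn=topn)]
            out.append(random.choice(neighbours))
        else:
            out.append(tok)
    return out

# Each training sentence is augmented 20 times; the test set stays unchanged.
augmented = [augment(s, w2v) for s in sentences for _ in range(20)]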
The classification models trained on Word2vec outperformed all the classification models trained on TFIDF and Bag-Of-Words. For the isiZulu articles, the combination of Word2vec and XGBoost outperformed all the other models, scoring an f1-score of 95.21%; the Word2vec and Logistic Regression combination performed best on the isiZulu titles dataset, scoring an f1-score of 86.42%; and the Word2vec and LSTM combination performed best on the Siswati titles dataset, scoring an f1-score of 93.15%. It was observed that the isiZulu articles dataset scored a higher f1-score than the isiZulu titles, which suggests that longer texts improve classification accuracy, and also highlights that Logistic Regression outperforms XGBoost on the short-text dataset. It remains a task to run the same comparison on the Siswati dataset, as it was not covered in this work due to the lack of a Siswati full news articles dataset.
ยง.ยง.ยง SMOTE
SMOTE is an oversampling technique used to rebalance the original training set through the creation of synthetic samples of the minority class <cit.>. The technique works by selecting the minority class and the total amount of oversampling needed to balance the classes; the k-nearest neighbours for that class are then obtained, and these neighbours are iteratively and randomly chosen to create new instances <cit.>. This oversampling technique was used to balance the classes and increase the dataset size. Note that SMOTE uses a different approach from the Data Augmentation approach presented earlier.
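In practice, SMOTE is applied to the vectorised features rather than to the raw text. The fragment below is a minimal sketch using the imbalanced-learn package, with random vectors standing in for the document features used here.

import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.RandomState(0)
X = rng.normal(size=(60, 10))                      # stand-in for vectorised news items
y = np.array([0] * 40 + [1] * 12 + [2] * 8)        # imbalanced category labels

X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))            # every class is resampled to the majority size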
We applied SMOTE to our three datasets and ran the classification models using 5-fold cross validation; the results for each dataset are presented below. From the tables below, it was observed that Word2vec produced the best classification models on all three datasets. XGBoost performed well in all instances, scoring f1-scores of 93.35%, 91.26% and 87.46% for the isiZulu articles, isiZulu titles and Siswati titles datasets respectively. We observed that the XGBoost model on isiZulu articles struggled to separate society and politics from crime, law and justice, since most of the incorrect classifications were instances where society and politics were classified as crime, law and justice.
ยง SUMMARY
We observed that Data Augmentation outperformed SMOTE in two instances, namely the isiZulu articles and Siswati titles datasets, whereas SMOTE outperformed Data Augmentation only in the case of the isiZulu titles dataset. We hope to look further into the difference in performance between these re-sampling techniques and to build a confirmatory pipeline that provides guidance on what approach to take under what circumstances. For now, we present the generalised pipeline obtained from this work as a baseline.
The pipeline obtained from this work is summarised in Figure <ref> below, together with the corresponding top-performing classification models presented in Table <ref>. Figure <ref> shows the choices that produced the best results under different circumstances for the three datasets. The datasets used represent three different settings, that is, large size and long text (isiZulu articles), large size and short text (isiZulu titles), and small size and short text (Siswati titles); these varieties produced different outcomes from the models under the same circumstances and can be generalised as follows (see also the sketch after this list):
* If the data size is large and contains long-text then Contextual Data Augmentation is recommended over SMOTE, and LSTM is likely to perform better.
* If the data size is large and contains short-text then SMOTE is recommended over Contextual Data Augmentation, and XGBoost is likely to perform better.
* If the data size is small and contains short-text then Contextual Data Augmentation is recommended over SMOTE, and XGBoost is likely to perform better.
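These observations can be restated as a simple rule of thumb (with Word2vec features assumed throughout); the function below is only a mnemonic summary of the three cases above, not a validated selection procedure.

def recommend(large_data: bool, long_text: bool):
    """Suggested (oversampling technique, classifier) for the three cases studied above."""
    if large_data and long_text:
        return "contextual data augmentation", "LSTM"
    if large_data and not long_text:
        return "SMOTE", "XGBoost"
    return "contextual data augmentation", "XGBoost"   # small data with short text

print(recommend(large_data=True, long_text=False))     # -> ('SMOTE', 'XGBoost')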
The above generalisation is limited to the Word2vec word embedding, since it is the one that produced outstanding results on all the datasets compared to TFIDF and Bag-Of-Words. It remains a task to further investigate the poor performance of TFIDF and Bag-Of-Words; possibly a change of classifier parameters could lead to better results.
ยง CONCLUSION AND FUTURE WORK
This work introduced the collection and annotation of isiZulu and Siswati news datasets. There is still a shortage of data (especially annotated data) for these two native languages, especially Siswati. However, this work paves the way for other researchers who want to use annotated isiZulu and/or Siswati data in downstream NLP tasks.
The experimental findings from the classification models and the different combinations of word embeddings with baseline models were presented. Though we were limited by data availability, this provides an overview of what can be achieved with minimal datasets. The annotated isiZulu and Siswati datasets will be made available to other researchers, the pre-trained vectorizers will be open-sourced, and the classification results may be used as benchmarks.
The collection and annotation of native language datasets remain a task for the future. For this to be successful, other language sources from which datasets can be extracted need to be identified so that more models can be trained. Furthermore, NLP researchers need to focus more on effective ways to augment the datasets, and these should be compared with SMOTE sampling because of the imbalance in the data. It is beneficial to have effective ways to augment native language datasets.
In addition, it is also worth investigating the poor performance of TFIDF and Bag-Of-Words compared to Word2vec; possible investigation areas are the nature of the word embeddings and hyperparameter optimisation of the classification models, which could improve classification performance. Another extension of this work is transfer learning from isiZulu to Siswati. The isiZulu dataset is large compared to the Siswati dataset, making it a viable avenue of research to investigate whether transfer learning improves the classification performance for Siswati in this context.
|
http://arxiv.org/abs/2306.08992v1
|
20230615093611
|
On a kinematic proof of Andoyer variables canonicity
|
[
"Anatoly Neishtadt"
] |
math.DS
|
[
"math.DS",
"70E15, 70E17"
] |
On a kinematic proof of Andoyer variables canonicity
Anatoly Neishtadt
=====================================================
We present a kinematic proof that the Andoyer variables in rigid body dynamics are canonical. This proof is based on the approach of โvirtual rotationsโ by H. Andoyer. The difference from the original proof by Andoyer is that we do not assume that the fixed in body frame is the frame of principal moments of inertia, and do not use explicit formulas for the kinetic energy of the body. The proof implies that Andoyer variables are canonical for any fixed in body Cartesian coordinate frame.
ยง INTRODUCTION
The canonical Andoyer variables in rigid body dynamics were introduced by H. Andoyer in
<cit.>. Wide use of these variables started after works of V.V. Beletskii <cit.>, who invented closely related variables, and A. Deprit <cit.>, who re-discovered the Andoyer variables and used them to represent the free rotation of rigid body in the phase plane. Andoyer proved that these variables are canonical using a kinematic approach based on โvirtual rotationsโ. Deprit used formulas of spherical geometry in his proof that the variables are canonical. In this note we present a proof based on the original approach by Andoyer. Unlike <cit.> we do not assume that the fixed in body frame is the frame of principal moments of inertia. The proof implies that Andoyer variables are canonical for any fixed in body Cartesian coordinate frame.
A review of works on canonical variables in rigid body dynamics prior to Andoyer's studies (F. J. Richelot (1850), J. A. Serret (1866), R. Radau (1869), F. Tisserand (1889)) is given in <cit.>.
ยง COORDINATE FRAMES. DEFINITION OF THE ANDOYER VARIABLES
Consider the classical problem of motion of a rigid body about a fixed point (e.g., <cit.>). Let OXYZ be an absolute Cartesian frame of references, Fig.ย <ref>,ย a. Let Oxyz be a fixed in body Cartesian frame (e.g., the frame of principal moments of inertia of the body for the point O as in <cit.>), Fig. <ref>, b. Denote Gโ the angular momentum of the body. Define a Cartesian frame of reference Oฮพฮทฮถ as follows, Fig.1, a, b. Axis Oฮถ is directed along Gโ. Axis Oฮพ is in the plane OZฮถ. Axis Oฮท is orthogonal to the plane Oฮพฮถ. The canonical Andoyer variables are L,G,ฮ, l,g, (we a use slightly modified notation from <cit.>). Angles l,g, are shown in Fig. <ref>, a, b, L is the projection of Gโ onto axis Oz, G is the absolute value of Gโ, ฮ is the projection of Gโ onto axis OZ.
Denote $\vec e_X,\vec e_Y,\vec e_Z$, $\vec e_x,\vec e_y,\vec e_z$ and $\vec e_1,\vec e_2,\vec e_3$ the unit coordinate vectors of the frames OXYZ, Oxyz and Oξηζ, respectively. We denote $\vec a\cdot\vec b$ and $\vec a\times\vec b$
the scalar ("dot") and the vector ("cross") products of $\vec a$ and $\vec b$.
ยง CANONICITY CONDITION
Let q_1,q_2, q_3 be any generalised coordinates that characterise position of the frame Oxyz with respect to the absolute frame OXYZ (e.g., q_1,q_2, q_3 could be the Euler angles). The kinetic energy of the body T is a function of these coordinates and their velocities:
T=T( q_1,q_2, q_3, qฬ_1,qฬ_2, qฬ_3). Denote p_1,p_2, p_3 the momenta canonically conjugate to these coordinates, p_j= T/qฬ_j, j=1,2,3. Then q_1,q_2, q_3,p_1,p_2, p_3 is a system of canonical variables for the considered problem. Express the Andoyer variables via q_1,q_2, q_3,p_1,p_2, p_3. To prove that this transformation is canonical we should check that (e.g., <cit.>, p. 241)
p_1dq_1+p_2dq_2+p_3dq_3= Ldl+Gdg+ฮ d .
This would prove that the Andoyer variables are canonical.
As is traditional in Analytical Dynamics, consider the rigid body as a system of N material points. Let $m_i$, $\vec r_i$, $i=1,2,\ldots,N$ be the masses and position vectors of these points in the absolute frame. Express the position vectors via coordinates $q_1,q_2,q_3$: $\vec r_i=\vec r_i(q_1,q_2,q_3)$. Then
$$d\vec r_i=\sum_{j=1}^3\frac{\partial\vec r_i}{\partial q_j}\,dq_j,\qquad
\dot{\vec r}_i=\sum_{j=1}^3\frac{\partial\vec r_i}{\partial q_j}\,\dot q_j,$$
and
$$\frac{\partial\dot{\vec r}_i}{\partial\dot q_j}=\frac{\partial\vec r_i}{\partial q_j}$$
(this is a standard relation used in derivation of the Lagrange equations from the D'Alembert principle, e.g., <cit.>, p. 20).
The angular momentum $\vec G$ and the kinetic energy $T$ of the body are
$$\vec G=\sum_{i=1}^N m_i\,(\vec r_i\times\dot{\vec r}_i),\qquad T=\frac{1}{2}\sum_{i=1}^N m_i\,(\dot{\vec r}_i\cdot\dot{\vec r}_i).$$
Consider the identities
$$\sum_{i=1}^N m_i\,\dot{\vec r}_i\cdot d\vec r_i
=\sum_{i=1}^N m_i\,\dot{\vec r}_i\cdot\sum_{j=1}^3\frac{\partial\vec r_i}{\partial q_j}\,dq_j
=\sum_{i=1}^N m_i\,\dot{\vec r}_i\cdot\sum_{j=1}^3\frac{\partial\dot{\vec r}_i}{\partial\dot q_j}\,dq_j
=\sum_{j=1}^3\Big(\sum_{i=1}^N m_i\,\dot{\vec r}_i\cdot\frac{\partial\dot{\vec r}_i}{\partial\dot q_j}\Big)dq_j
=\sum_{j=1}^3\frac{\partial}{\partial\dot q_j}\Big(\frac{1}{2}\sum_{i=1}^N m_i(\dot{\vec r}_i\cdot\dot{\vec r}_i)\Big)dq_j
=\sum_{j=1}^3\frac{\partial T}{\partial\dot q_j}\,dq_j
= p_1dq_1+p_2dq_2+p_3dq_3.$$
In view of (<ref>) this implies that to prove the canonicity of the Andoyer variables we have to show that
โ_i=1^N m_i แน_ฬiฬยท d r_i =Ldl+Gdg+ฮ d .
ยง THE CHECK OF CANONICITY CONDITION (<REF>)
Express position vectors of material points via the Andoyer variables. Note that r_i depend on L, G,ฮ via angles ฯ, ฯ, L=Gcosฯ, ฮ=Gcosฯ. Thus r_i=r_i (l,g,, ฯ, ฯ), i=1,2. โฆ, N. Calculate d r_i and substitute the obtained expressions into the left hand side of (<ref>). We get
โ_i=1^N m_i แน_ฬiฬยท d r_i =k_ldl+k_gdg+k_ d + k_ฯdฯ+k_ฯdฯ
with some coefficients k_l, k_g, โฆ, k_ฯ. Calculate these coefficients.
Put dl=1 and give value 0 to all other differentials in the right hand side of (<ref>). Then drโ_i should be equal to the velocity of the point with the position vector rโ_i when the rigid body rotates (counterclockwise) about the axis
Oz with the angular speed 1: d rโ_i=eโ_zรrโ_i. Then
k_l=โ_i=1^N m_i แน_ฬiฬยท (eโ_zรrโ_i)= (โ_i=1^N m_i rโ_i รแน_ฬiฬ)ยทeโ_z=Gโยทeโ_z=L
because L is the projection of Gโ onto the axis Oz.
Put dg=1 and give value 0 to all other differentials in the right hand side of (<ref>). Then drโ_i should be equal to the velocity of the point with the position vector rโ_i when the rigid body rotates (counterclockwise) about the axis
Oฮถ with the angular speed 1: d rโ_i=eโ_3รrโ_i. Then
k_g=โ_i=1^N m_i แน_ฬiฬยท (eโ_3รrโ_i)= (โ_i=1^N m_i rโ_i รแน_ฬiฬ)ยทeโ_3=Gโยทeโ_3=G
because eโ_3 is directed along Gโ.
Put d=1 and give value 0 to all other differentials in the right hand side of (<ref>). Then drโ_i should be equal to the velocity of the point with the position vector rโ_i when the rigid body rotates (counterclockwise) about the axis
OZ with the angular speed 1: d rโ_i=eโ_Zรrโ_i. Then
k_=โ_i=1^N m_i แน_ฬiฬยท (eโ_Zรrโ_i)= (โ_i=1^N m_i rโ_i รแน_ฬiฬ)ยทeโ_Z=Gโยทeโ_Z=ฮ
because ฮ is the projection of Gโ onto the axis OZ.
Put d ฯ=1 and give value 0 to all other differentials in the right hand side of (<ref>). Then drโ_i should be equal to the velocity of the point with the position vector rโ_i when the rigid body rotates (counterclockwise) about the node line
Oฮบ in Fig. <ref>, b with the angular speed 1: d rโ_i=nโรrโ_i, where nโ is the unit vector of the axis Oฮบ. Then
k_ฯ=โ_i=1^N m_i แน_ฬiฬยท (nโรrโ_i)= (โ_i=1^N m_i rโ_i รแน_ฬiฬ)ยทnโ=Gโยทnโ=0
because nโ is orthogonal to Gโ.
Put d ฯ=1 and give value 0 to all other differentials in the right hand side of (<ref>). Then drโ_i should be equal to the velocity of the point with the position vector rโ_i when the rigid body rotates (counterclockwise) about the axis
Oฮท with the angular speed 1: d rโ_i=eโ_2รrโ_i. Then
k_ฯ=โ_i=1^N m_i แน_ฬiฬยท (eโ_2 รrโ_i)= (โ_i=1^N m_i rโ_i รแน_ฬiฬ)ยทeโ_2=Gโยทeโ_2=0
because eโ_2 is orthogonal to Gโ.
Thus we have k_l=L, k_g=G, k_=ฮ, k_ฯ=k_ฯ=0. This proves the canonicity condition (<ref>).
ยง CONLUSION
A kinematic proof is given that the Andoyer variables in rigid body dynamics are canonical. The proof is based on the
approach of โvirtual rotationsโ by Andoyer. The difference from the original proof by Andoyer is that we do not assume that the fixed in body frame is the frame of principal moments of inertia, and do not use explicit formulas for the kinetic energy of the body. The proof shows that the Andoyer variables can be used for any fixed in body frame. Constructions by Andoyer and Deprit assume that the fixed in body frame is the frame of the principal moments of inertia.
andoyer_2015 Andoyer H. Sur les problèmes fondamentaux de la mécanique céleste. Bull. Astron. Ser. I, 32, 5–18 (1915)
andoyer_2023 Andoyer H. Cours de Mécanique Céleste, vol. I. Gauthier-Villars, Paris (1923)
arn_1 Arnold V. I. Mathematical Methods of Classical Mechanics: Graduate Texts in Mathematics 60. Springer-Verlag, New York (1978)
beletskii Beletskii V.V. Motion of an Artificial Satellite About Its Center of Mass. NASA TTF-429 (1966)
deprit Deprit A. Free rotation of the rigid body studied in the phase plane. American Journal of Physics, 35, 424–428 (1967)
efr Gurfil P., Elipe A., Tangren W., Efroimsky M. The Serret-Andoyer formalism in rigid-body dynamics: I.
Symmetries and perturbations. Regular and Chaotic Dynamics, 12, 389–425 (2007)
goldstein Goldstein H., Poole C. P., Safko J. L. Classical Mechanics (3rd ed.). Addison-Wesley, Boston (2001)
Anatoly Neishtadt
Department of Mathematical Sciences
Loughborough University, Loughborough LE11 3TU, United Kingdom
E-mail: [email protected]
|
http://arxiv.org/abs/2307.00357v1
|
20230701150358
|
Abstract Orientable Incidence Structure and Algorithms for Finite Bounded Acyclic Categories. II. Data Structure and Fundamental Operations
|
[
"Yu-Wei Huang"
] |
cs.DS
|
[
"cs.DS",
"math.AG",
"math.CO"
] |
Abstract Orientable Incidence Structure and Algorithms for Finite
Bounded Acyclic Categories. II. Data Structure and Fundamental
Operations
Yu-Wei [email protected]
July 31, 2023
===========================================================================================================================================
A data structure for finite bounded acyclic categories has been built,
which is useful to encode and manipulate abstract orientable incidence
structure. It can be represented as a directed acyclic multigraph with
weighted edges, where the weights encode the algebraic structure between
edges. The fundamental operations on this data structure are
investigated from geometrical, categorical and programming perspectives.
ยง INTRODUCTION
In the previous article huang2023abstract, we introduced the
orientable incidence structure, which is a bounded acyclic category with
some additional properties. It provides another approach to investigate
the computer graphics in any dimension. In order to apply the theory to
practical use, the computer needs to understand the bounded acyclic
category. It's more similar to dealing with graphs than categories; a
finite category can be treated as a transitive graph with algebraic
structure between edges. Moreover, manipulating finite bounded acyclic
categories is simpler than non-acyclic ones, just like manipulating
directed acyclic graphs is easier than general graphs. In this article,
an efficient data structure for finite bounded acyclic categories is
introduced, and fundamental operations on this data structure are
developed.
ยง DATA STRUCTURE
ยง.ยง Model
The most intuitive way to encode bounded acyclic categories is using a
transitive directed multigraph, where nodes indicate objects, and edges
indicate morphisms. But this is not enough to describe a category, the
additional information required by the category is the composition of
morphisms. The computer should also store the multiplication table of
all morphisms, but this is a waste of memory because not all morphisms
are composable and some cases can be omitted due to transitivity. In the
previous article, we showed that a bounded acyclic category is equivalent
to the category of upper categories and downward functors, so one can
model the category of upper categories instead of the bounded acyclic
category, where upper categories are encoded as nodes, and downward
functors between them as edges. Since such category is acyclic, it forms
a transitive directed acyclic multigraph. Notice that there is exactly
one root node for this directed acyclic multigraph (representing the
host bounded acyclic category), so all other nodes (representing upper
categories) are directly connected from this root node by exactly one
edge (representing downward functors). There may be multiple edges
between nodes, and adjacent edges can be composed to an edge that
preserves transitivity, whose rule is determined by the downward
functors. That means storing all downward functors as edges is not
necessary, one can just keep some of them so that other downward
functors can be obtained by following along paths.
The objects of an upper category are encoded as a set of symbols on the
corresponding node, and object's mappings of downward functors between
upper categories are encoded as mappings between those symbols, which
are stored as the weights of edges. The downward functor represented by
a given path can be constructed by composing mappings along the path.
Objects and their mappings are now fully encoded. Morphisms and their
mappings are also fully encoded by this information, as explained below.
There is no need to encode morphisms of upper categories: in an upper
category, initial morphisms are also represented by its objects, and
non-initial morphisms can be obtained by applying downward functors to
initial morphisms of some upper categories. There is also no need to
encode composition of morphisms: consider that a downward functor maps
an initial object to a non-initial object, then this downward functor
should also represent the initial morphism of this non-initial object.
So the mapping representing this downward functor also represents a
process of making composite morphism by the morphism this functor
indicated. At the end of the day, the computer just needs to remember
objects and the mappings between them.
One can visualize this model as a graph-like diagram, which we call
utility pole diagram. Draw each node of graph as a vertical
line, and put symbols as points on it, which indicate objects of an
upper category. One should put the initial object at the bottom. Draw
the edge of graph as serial wires connecting points on the nodes, which
indicate mappings between objects. The direction of edges should be to
the right. For example, the utility pole diagram of a cone, which is
composed by seven facets, can be drawn as seven poles connected by
wires, see Figure <ref>.
We use the curvatures and colors to distinguish edges with the same
parent and child nodes. The mappings of composite functors can be
obtained by following wires through a certain paths. A path of this
graph can be indicated by a string of wires ended at the bottom of a
pole, which are called base wires. The left endpoint of the
base wires of a path indicates the identity of the corresponding
functor: if the base wires of two paths start at the same point, they
should end at the same node on the right, and the mappings of paths
should be the same. That means the circle formed by the base wires of
two paths can be lifted up to any right endpoint. This property is
called supportivity; the wires of a path are said to be
supported by the base wires (see Figure <ref>).
ยง.ยง Implementation
In this model, a bounded acyclic category can be fully described by all
nodes and weighted edges, where nodes represent upper categories, and
weighted edges represent downward functors. However, a node cannot fully
describe an upper cateogry, instead, all forwardly reachable nodes
should be included, which is called an upper subgraph. An upper subgraph
also represents a bounded acyclic category, so this model can be encoded
as a recursive data structure: define a data type , which is
composed by a root node and some child BACs, where the root node is
connected to each root node of the child BACs by a type .
Written in a pseudocode, it is defined as
The field is a list of tuple of an and a
pointer to the subgraph, where the edge is from the field
of this graph to the field of the subgraph. Under this
definition, each BAC represents an acyclic directed multigraph, which is
defined by unioning smaller graphs recursively. This structure is
similar to the recursive definition of a tree, except that the child
BACs are shared. But in fact, it really is almost a tree, as explained
below.
BACs cannot be mutable referenced. Consider a BAC with two
different child BACs and , whose
identities are determined by above process, and assume that there is a
parent BAC such that these two child BACs correspond
to the same upper category with respect to . If BACs
are mutable referenced, the in-place mutation of the child BAC
with respect to will ruin the structure of
the parent BAC , since and
are now different, which is fine for but
not for . If BACs are immutable referenced, there is no
such problem since child BACs can only be modified by replacement.
References to BAC should be implicit. The same subgraphs don't represent
the same upper category, but the same upper category should be
represented by the same instance of BAC. To determine which BAC
indicates which upper category with respect to a given node, one should
determine the identity of corresponding downward functors. If there are
two paths their functors map an initial object to the same object, these
two functors should be the same, since the target object (the initial
morphism of the target object) represents the identity of the functor.
Because the identity of upper category is determined by downward
functors, addresses between child BACs are meaningless to corresponding
bounded acyclic category. References to nodes should therefore be
immutable and implicit, and such definition of BAC is no different than
the recursive definition of tree except for data manipulation.
There are two ways to implement this data structure according to
mutability. A bounded acyclic category can be implemented as a normal
tree structure, where child BACs are implicitly shared. "Implicitly
shared" means that users should not know who shared this BAC with whom,
since it is meaningless. In this implementation, operations should be
carefully designed so that two paths representing identical functor
still point to the same BAC. As a consequence, it is preferred to use
immutable data structure and functional programming technique. It can
also be implemented as a directed acyclic graph, where child BACs are
explicitly shared but encapsulated like implicitly shared BACs. In this
implementation, operations should be carefully designed so that BACs
will be copied on write when there are multiple paths which represent
different functors pointing to this BAC. In this article, the former
implementation will be used.
Due to the immutability nature of the first implementation, it is
suitable to use Haskell to show how it works. The purpose of the code in
this work is just a proof of concept, so performance and debuggability
are not concerned. The source code will be placed in the repository
<https://github.com/worldmaker18349276/bac>. This data structure is
implemented as a weighted tree with symbol maps as weights:
[]
newtype Tree e = Tree (Map e (Tree e)) deriving (Eq, Ord, Show)
type BAC = Tree Dict
type Dict = Map Symbol Symbol
type Symbol = Natural
The instances of BAC, called nodes, represent bounded acyclic
categories, whose objects are marked by type
|Symbol|. Some nodes are implicitly shared, but due
to referential transparency, users cannot know the physical identity of
the node, so one can only say that they are exactly the same. Symbols on
a node are not included in the data structure since it can be determined
by the weights of edges. The symbol list on a node can be obtained by
function , where there must be a special symbol
representing the initial object of the category. There is
no symbol representing the terminal object since it doesn't provide any
information. Also, in this data structure dealing with non-terminal
morphisms and terminal morphisms are very different.
A type is defined as a structure of a dictionary and the
target BAC, representing a local embedding between categories:
[]
data Arrow = Arrow {dict :: Dict, target :: BAC} deriving (Eq, Ord, Show)
edges :: BAC -> [Arrow]
edges (Tree m) = fmap (uncurry Arrow) (Map.toList m)
The edges of a node can be obtained by function , which
are represented as type |Arrow|. The weight of an
edge, called a dictionary, is a non-empty mapping from the
symbols on child node to the symbols on the parent node, representing
the mapping of the objects. It is a non-empty map because there must be
a base symbol as a key. In a node, dictionaries of its edges must cover
all symbols except the base symbol, so the valid symbols can be
determined by weights of outgoing edges.
A path of the tree, which is a sequence of connected edges,
also has a derived dictionary, and it can be obtained by concatenating
all dictionaries along the path by . A path from a node to
itself by following no edge is called a null path. A null path
possesses a trivial dictionary, which is the identity mapping of symbols
of the node. A null path is said to be an improper path. Dictionaries of
paths together with the target nodes represent downward functors between
bounded acyclic categories. It is called an arrow in the
program, which can also be represented by a type
|Arrow|.
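To make the notion of a path's dictionary concrete, the composition rule can be illustrated with a small Python fragment (written purely for illustration, independently of the Haskell implementation): an edge's dictionary maps the child node's symbols to the parent node's symbols, and the dictionary of a two-edge path is obtained by chaining the two mappings.

def concat_dicts(outer, inner):
    """Dictionary of a path: follow the deeper edge (`inner`), then the outer edge."""
    return {symbol: outer[value] for symbol, value in inner.items()}

edge_parent_to_child = {0: 1, 1: 2}   # child symbols -> parent symbols (0 is the base symbol)
edge_child_to_leaf = {0: 1}           # leaf symbol -> child symbol
print(concat_dicts(edge_parent_to_child, edge_child_to_leaf))   # {0: 2}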
An arrow represents a downward functor from the category of the child
node to the category of the parent node, where the mapping of objects is
represented by the dictionary of an arrow, and the mapping of morphisms
can be determined by further analysis of the structure of the child
node. As a downward functor, the field represents an
upper category constructed under the category of the parent node. An
arrow can also represent an initial morphism in the category of the
parent node, where the field represents the target
object of this morphism. There is an one-to-one correspondence between
objects and initial morphisms, so it also represents an object in the
category, in which the symbol that refers to the object is the symbol to
which the dictionary maps the base symbol, and all other values of the
dictionary represent all descendant objects of this object. Two arrows
starting at the same node should correspond to the same functor if their
dictionaries map the base symbol to the same symbol. In this case, the
target nodes should be implicitly shared or exactly the same. Note that
it's not true the other way around; two arrows that have the same target
node doesn't mean they correspond to the same functor.
In program, arrows are more useful than symbols because they contain
more information. Some algebraic operations can be performed on arrows
by utilizing these information. The arrow of a null path, which
represents the identity functor, can be obtained by . The
arrow of a path can be obtained by joining all edges along the path,
which is implemented as a function . One can explore all
upper categories under a given category using these functions, which is
equivalent to explore all objects of this category. Reversely, some
arrows start at the same node are divisible, and the result can be
obtained by function : the result of division between the
divisor and the dividend is the set of
arrows such that
|join arr12 arr23 = arr13|. As a
node of a tree, an arrow knows whether a symbol refers to its
descendant, which is implemented as a function . The
upper category specified by a given symbol can be obtained by tracing a
symbol with an arrow, which is implemented as . Reversely,
the symbol specifying a given arrow can be obtained with the function
, which also indicates the identity of the given arrow.
In a category represented by a node, an object can be specified by a
symbol, and a morphism can be specified by a tuple of symbols: the
source object of the morphism is represented by the first symbol, and it
should be a valid symbol in this node; the target object of the morphism
is represented by the second symbol, and it should be a valid symbol in
the node referenced by the first symbol. The object specified by a
symbol can also be represented by an arrow, which can be obtained by
. Also, the morphism specified by two symbols can be
represented by a tuple of connected arrows, which corresponds to two
connected downward functors. The conversion between the two
representations are implemented as the functions and
. In another viewpoint, a symbol indicates an
1-chain of the host category, and a tuple of symbols indicates a
2-chain, moreover, a sequence of n symbols indicates a
n-chain.
To conclude the above descriptions, there are three laws for BAC (a small illustration of checking them follows the derived laws below):
* totality: the dictionary of an edge should be a mapping from
all valid symbols on the child node to valid symbols on the parent
node.
* surjectivity: all valid symbols should be covered by the
dictionaries of outgoing edges, except the base symbol.
* supportivity: if dictionaries of given two paths with the same
starting node map the base symbol to the same symbol, then they should
have the same dictionary and target node. Note that null paths also
count.
A valid bounded acyclic category should satisfy these laws, and they
immediately derive two laws:
* The dictionary of a proper path cannot map a symbol to the base
symbol, otherwise the target node should be the starting node.
* The dictionary of a proper path should map the base symbol to a unique
symbol compared with other values in this dictionary, otherwise the
target node will have a descendant as itself.
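As an illustration of the first two laws, a node can be modelled in Python as a pair of a symbol set and a list of (dictionary, child) edges and checked mechanically; supportivity needs path enumeration and is omitted here. This fragment is only an illustration of the laws, not part of the Haskell implementation.

BASE = 0

leaf = ({BASE}, [])
child = ({BASE, 1}, [({BASE: 1}, leaf)])
root = ({BASE, 1, 2}, [({BASE: 1, 1: 2}, child)])

def check_node(node):
    symbols, edges = node
    total = all(set(d) == child_symbols for d, (child_symbols, _) in edges)      # totality
    covered = set().union(*(set(d.values()) for d, _ in edges)) if edges else set()
    surjective = covered == symbols - {BASE}                                     # surjectivity
    return total and surjective and all(check_node(c) for _, c in edges)

print(check_node(root))   # True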
The essence of recursive types is folding. To traverse objects of a
bounded acyclic category, or equivalently, to traverse upper categories
of a bounded acyclic category, all descendant nodes should be visited via
arrows, which is implemented as the function . While
folding, identical nodes (from the root's perspective) should only be
visited once, which is important for non-purely functional programming
language (support for side effects without explicit types), otherwise it
is possible to obtain inconsistent results with identical inputs. Users
may more commonly fold data only under a certain symbol (visit only
ancestor nodes of the node referenced by this symbol). In this
situation, the relative location of a node should be marked. It is
defined as the function , which only visits the nodes
that can reach a given symbol. The fold method can also be used to
modify data structures. Convenient specialized functions
and handle how edges should be modified and put
back into nodes. This allows us to edit data in the form:
[]
dosomething :: BAC -> Maybe BAC
dosomething node = do
  -- check if this operation is valid
  guard ...
  -- edit the boundary node
  let res0 = ...
  -- edit edges under the boundary node
  fromReachable res0 node & modifyUnder src \(curr, edge) -> \case
    -- if the edge points to an outer node
    AtOuter -> ...
    -- if the edge points to the boundary node
    AtBoundary -> ...
    -- if the edge points to an inner node, which has been modified to `res`
    AtInner res -> ...
With those utility functions, it should be easy to manipulate BAC. Below
we will discuss how to edit a BAC from geometrical, categorical and
programming perspectives.
fundamental-operations
ยง FUNDAMENTAL OPERATIONS
The data structure BAC can be used to encode the incidence structures of
geometric objects. In the BAC, symbols on the root node represents
geometric objects, and arrows represent incidence relations between
geometric objects. This implementation does not encode any meaning of
arrows, only the relation and composition between arrows. Users should
associate arrows and incidence relations outside of this data structure.
Below we will focus on algebraic properties of incidence relations.
In order to manipulate geometric objects in the form of bounded acyclic
categories, all geometric operations need to be rewritten. To simplify
the whole workflow, one need to develop a set of basic operations, such
as boolean operations. Even for the intersection operation, the actions
in categorical level are complicated. For example, to put two facets
together, not only the relationship between two facets is needed, the
relationship between their subfacets, and subfacets of subfacets are
also needed. So we need to further decompose boolean operations into
more fundamental operations on category. These operations may not make
sense on geometrical perspective, but they are critical on categorical
level.
Let's discuss how to intersect two balls: we just need to slice a ball by a sphere, then remove the unwanted part. Before we start slicing, how did the ball come into our world? Moreover, how did the world begin? We need a simplest polytope as our starting point to construct all kinds of polytopes. Such a polytope is called "nullitope", which is a geometric object of nothing. Then we need a method to add a ball into this polytope. This operation is called "introduce". Now we want to slice this ball by a sphere, which should result in a UFO shape. But before that, we need to compute the intersection of their surfaces, which is a circle. We can then use "introduce" to bring the circle into our world, and claim that this circle is covered by those two spheres. This operation is called "incident". Now we have two spheres with a circle on them, but wait, isn't it disconnected? Each sphere should be separated into a cap and a cup, so we need an operation called "disconnect". Now we have two caps to build our UFO shell, but how do we get rid of those two cups? To do that, we need "remove" to clean them up.
Ok, there are enough tools to intersect two balls, let's describe it
step-by-step.
*
Prepare an empty space by โnullitopeโ.
*
Prepare a ball.
*
Create an infinite 3D space by โintroduceโ.
*
Create a sphere with radius and centered at
by โintroduceโ, labeled as .
*
Claim that the sphere is covered by this 3D space using
โincidentโ.
*
Separate inner space and outer space by sphere as two
disconnected components using โdisconnectโ.
*
Remove the outer space by โremoveโ, and label the remaining one as
.
*
Create a sphere with radius and centered at
by โintroduceโ, labeled as
.
*
Cut the ball by this sphere .
*
Compute the intersection between shell and
, which is a circle.
*
Bring this circle into the world by โintroduceโ, labeled as
.
*
Claim that the circle is covered by spheres
and using โincidentโ.
*
Separate spheres and into
cups and caps by โdisconnectโ, labeled as ,
, ,
separately.
*
Remove the cup by โremoveโ.
*
Claim that the cap is covered by
volume using โincidentโ.
*
Separate volume into two part using โdisconnectโ,
labeled as , .
*
Remove unwanted part.
*
Remove outer volume by โremoveโ.
*
Remove cup by โremoveโ.
Those operations have corresponding categorical meanings, which provide
another perspective to investigate fundamental operations. Now let's
discuss how to define those operations.
ยง.ยง Empty/Singleton
"Nullitope" is the simplest polytope, which is a geometric
object of nothing. It is just like an empty workspace after launching
GeoGebra 3D calculator. In categorical perspective, it has only two
objects: the initial object and the terminal object. This category is
called empty bounded acyclic category. The empty bounded
acyclic category has only one non-degenerated morphism chain, and it is
locally-embedded into any nondecomposable morphism of a bounded acyclic
category. In program, it is implemented as a function ,
which is a node without children.
"Singleton" is the second simplest polytope, which represents
a geometric object without boundary. It is just like an primitive shape
in the GeoGebra 3D calculator, such as a sphere, an infinite plane or a
circle. In categorical perspective, it has only one proper object. This
category is called singleton bounded acyclic category. In
program, it is implemented as a function , which is a
node with only one child.
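To make the tree picture concrete, here is a hedged sketch of these two constructors on a drastically simplified stand-in for a BAC node; the Node type and the names empty and singleton are assumptions made for this illustration, not the library's actual definitions.

    import qualified Data.Map as Map
    import Data.Map (Map)

    -- Toy stand-in for a BAC node: each edge carries a dictionary that
    -- translates symbols of the child node into symbols of this node.
    type Symbol = Int
    type Dict   = Map Symbol Symbol
    newtype Node = Node { edges :: [(Dict, Node)] } deriving Show

    -- Empty bounded acyclic category: a node without children.
    empty :: Node
    empty = Node []

    -- Singleton bounded acyclic category: a node with exactly one child,
    -- whose single edge maps the child's base symbol (0 here) to a chosen symbol.
    singleton :: Symbol -> Node
    singleton sym = Node [(Map.fromList [(0, sym)], empty)]

    main :: IO ()
    main = do
      print empty
      print (singleton 1)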
§.§ Merge Categories
"Introduce" brings a facet into the universe, just as we can simply drop a sphere, a cube or a cylinder into the workspace. For example, the process of intersecting two surfaces is done by adding the intersection line and claiming the incidence relations between them. Right after the intersection line is added, the computer doesn't yet know the relation between this line and those surfaces. This doesn't break the categorical structure, so there is no need to add the incidence relations at the same time. From the categorical perspective, it merges two categories into one, where the initial objects and the terminal objects are merged, and the other proper objects are considered disjoint. In the program, it corresponds to merging trees at the root. All symbols in the root nodes are unioned except the base symbols, which are merged into one base symbol. It is implemented as a function with a parameter |nodes :: [BAC]|, representing the categories to merge.
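Continuing the same toy model, here is a minimal sketch of what merging trees at the root could look like. The name mergeRoots and the crude relabeling scheme are assumptions; the real function keeps the base symbols and resolves symbol collisions more carefully.

    import qualified Data.Map as Map
    import Data.Map (Map)

    type Symbol = Int
    type Dict   = Map Symbol Symbol
    newtype Node = Node { edges :: [(Dict, Node)] } deriving Show

    -- Merge several categories by gluing their root nodes together: the edge
    -- lists are concatenated, and the proper symbols of the i-th root are
    -- shifted into their own range so that they stay disjoint.
    mergeRoots :: [Node] -> Node
    mergeRoots roots = Node (concat (zipWith shift [0 ..] roots))
      where
        offset = 1000  -- assumes each root uses fewer than 1000 symbols
        shift i (Node es) =
          [ (Map.map (\s -> if s == 0 then 0 else s + i * offset) dict, child)
          | (dict, child) <- es ]

    main :: IO ()
    main = print (mergeRoots [Node [], Node []])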
§.§ Remove a Terminal Morphism
"Remove" removes a facet from the universe. The subfacets of the removed facet are not removed; for example, if a square is removed, its borders are not removed along with it. The removed facet cannot be covered by other facets (it must not be a subfacet of another facet). From the categorical perspective, it removes a nondecomposable terminal morphism of the given object, and as a consequence, the morphisms pointing to this object are also removed. In the program, this process corresponds to removing a leaf node specified by a symbol. Since it is a leaf node, there is only one symbol (the base symbol) on this node. All wires that connect to the base point of the removed node should be removed, so all passing points will also be removed. Information related to those wires will be lost (see Figure <ref>). It is implemented as a function with a parameter |tgt :: Symbol|, indicating the object to remove. The object specified by the symbol should have a nondecomposable terminal morphism.
§.§ Remove a Non-Terminal Morphism
From the categorical perspective, "remove" removes a nondecomposable terminal morphism. It is natural to generalize it to any nondecomposable morphism. To remove a nondecomposable morphism, all composition rules in the multiplication table related to this morphism are removed as well; for example, to remove a morphism φ, any rule of the form φ ∘ ψ' = ψ, for all possible ψ' and ψ, will also be removed.

However, removing a decomposable morphism is not simple; there are multiple ways to remove it consistently. For example, to remove a morphism φ while there is a composition rule φ = ψ_1 ∘ ψ_2, one of the morphisms ψ_1 or ψ_2 should also be removed: one can remove ψ_1 and then φ, remove ψ_2 and then φ, or remove both ψ_1 and ψ_2 and then φ. Two coherent choices are removing all prefixes or removing all suffixes. This process can be decomposed into multiple steps of removing nondecomposable morphisms, so here only nondecomposable morphisms are considered.
From the geometrical perspective, this operation is called "unincident" because it corresponds to removing an incidence relation. Removing an initial morphism leads to the target object being removed, which corresponds to removing a subfacet of some facets, and such a subfacet should be boundaryless. It is the same as "remove" if it is not a subfacet of any facet. Removing a non-initial morphism can be understood as removing a subfacet of some facets from a vertex figure, which results in "unincidenting" some facets. Removing a subfacet can lead to ill-formed geometric shapes. For example, it is nonsense to remove the boundary circle of a disk. If the subfacet to be removed differs from the facet by at least two dimensions, such as a point covered by a plane, it is usually fine; otherwise it should be dealt with carefully. For example, a point on a circle can be safely removed, but removing an endpoint of a line segment leads to an ill-formed geometric object.
In the program, this process removes a symbol on a node and keeps all other unrelated wires unchanged, where "keep all other unrelated wires unchanged" means that only the outgoing and incoming edges of this node will be modified. Consider a base wire passing through but not ending at this point: the starting point of this wire should not change after this process. It is a minimal operation, since only a small part of the data structure is changed. When a symbol on a node is removed, the adjacent wires are also removed. It's fine to remove incoming wires, but invalid to remove outgoing wires, except when an outgoing wire is connected to the base point, since then the entire outgoing edge should be removed (see Figure <ref>). Symbols connected only to base points on the right are said to be nondecomposable, and they represent nondecomposable initial morphisms. The above discussion shows that only nondecomposable symbols can be removed. All wires that crossed the removed wires should find alternative paths. For all parent nodes of the source node of the removed edge, one only needs to check the base wires connected to the target node of the removed edge (see the left one of Figure <ref>). Similarly, for all child nodes of the target node of the removed edge, one only needs to check the base wires connected to that child node (see the right one of Figure <ref>). This is equivalent to requiring that the surjectivity of dictionaries is not violated after removing this edge.
It is implemented as a function with a parameter
|(src, tgt) :: (Symbol, Symbol)|,
indicating the morphism to remove, which should be nondecomposable. The
decomposability of a morphism can be checked by the function
.
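This last condition can be phrased as a small check. The sketch below, again on toy dictionaries and with assumed names, tests whether deleting one wire from an edge's dictionary preserves the set of symbols that are covered by some wire.

    import qualified Data.Map as Map
    import qualified Data.Set as Set
    import Data.Map (Map)

    type Symbol = Int
    type Dict   = Map Symbol Symbol

    -- Parent symbols reachable through the given dictionaries (the wires),
    -- ignoring the base symbol 0.
    covered :: [Dict] -> Set.Set Symbol
    covered dicts = Set.delete 0 (Set.fromList (concatMap Map.elems dicts))

    -- Deleting the wire for `sym` from one edge's dictionary must not break
    -- surjectivity: every symbol covered before must still be covered.
    -- (Assumed helper, not the library's API.)
    stillSurjective :: Symbol -> Dict -> [Dict] -> Bool
    stillSurjective sym dict others =
      covered (Map.delete sym dict : others) == covered (dict : others)

    main :: IO ()
    main = do
      let d1 = Map.fromList [(0, 1), (2, 3)]
          d2 = Map.fromList [(0, 2), (2, 3)]
      print (stillSurjective 2 d1 [d2])  -- True: symbol 3 is still covered via d2
      print (stillSurjective 0 d1 [d2])  -- False: symbol 1 would lose its only wire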
§.§ Add a Non-Terminal Morphism
"Incident" is a method to claim an incidence relation between two facets; it is the reverse of "unincident". It is just like attaching a point to a surface in the GeoGebra 3D calculator, except that here we don't change the position of the point. Notice that "a line segment is covered by a plane" implies "the endpoints of this line segment are also covered by this plane". To claim that A is covered by B, all objects covered by A should already be covered by B. In this example, we need to claim "the endpoints are covered by this plane" first. But to do that, all subfacets of the endpoints, which is just the null face, should be covered by this plane, and this is already true. In the opposite direction, "a point is covered by an edge of a square" implies "this point is also covered by this square". To claim that A is covered by B, all objects that cover B should already cover A. In this example, we need to claim "this point is covered by this square" first. But to do that, all superfacets of this square, which is just the universe, should cover this point, and this is already true.
In some cases, it is not enough to claim an incidence relation by just saying "A is covered by B". For example, the dried-persimmon shape described by the equation z^2 = (1-x^2-y^2)(x^2+y^2)^2 is a sphere whose north and south poles are pinched together, so the center point is covered by this surface in two directions. Consider a curved line segment starting at the center point: to claim "this segment is covered by this dried-persimmon shape", it is also necessary to specify on which side of the shape. In the vertex figure of the center point, the problem becomes "this point is covered by which one of the two circles" (see the right sphere of Figure <ref>). In the opposite direction, consider a curved line segment on this dried-persimmon shape: to claim "this segment covers the center point", it is also necessary to specify on which side of the shape. In the face figure of this shape, the problem becomes "this segment covers which one of the two points" (see the left sphere of Figure <ref>).
From the categorical perspective, this process just adds a non-terminal morphism to the category between two given objects. Assume we are adding a morphism φ : F → S. If there exists another morphism ψ' : P → F but no morphism P → S, we also need to add a morphism ψ″ : P → S such that ψ″ = φ ∘ ψ', otherwise the closure property of the category would be broken; this is the first point discussed above. Similarly, if there exists another morphism χ' : S → V but no morphism F → V, we also need to add a morphism χ″ : F → V such that χ″ = χ' ∘ φ; this is the second point discussed above. So we should add those morphisms first, which is possible by recursively repeating the procedure down/up to the initial/terminal object. After confirming that the closure property can be retained, we need to expand the multiplication table of morphisms. We only need to specify how to compose nondecomposable morphisms, since the other cases can always be decomposed into this case. This step can be done automatically in most cases, but for non-trivial cases explicit choices become necessary.
The choices of the new composition rules should agree with the existing equivalence relations: if we add a morphism φ : S → F and claim the relations ψ_1' = φ ∘ ψ_1 and ψ_2 = ψ_2' ∘ φ, where ψ_1 : P → S, ψ_2 : S → V, ψ_1' : P → F and ψ_2' : F → V, then the equivalence relation ψ_2 ∘ ψ_1 = ψ_2' ∘ ψ_1' should already exist. Also, if we claim that ψ_1' = φ ∘ ψ_1 and ψ_2' = φ ∘ ψ_2, where ψ_1 : P_1 → S and ψ_2 : P_2 → S, and there exist two morphisms ζ_1 : V → P_1 and ζ_2 : V → P_2 such that ψ_1 ∘ ζ_1 = ψ_2 ∘ ζ_2, then the equivalence relation ψ_1' ∘ ζ_1 = ψ_2' ∘ ζ_2 should already exist. A dual version should also hold. These requirements can be summarized into three laws:

1. If ψ_R' = φ ∘' ψ_R and ψ_L' = ψ_L ∘' φ, then ψ_L' ∘ ψ_R = ψ_L ∘ ψ_R'.
2. If ψ_R1' = φ ∘' ψ_R1, ψ_R2' = φ ∘' ψ_R2 and ψ_R1 ∘ ζ_R1 = ψ_R2 ∘ ζ_R2, then ψ_R1' ∘ ζ_R1 = ψ_R2' ∘ ζ_R2.
3. If ψ_L1' = ψ_L1 ∘' φ, ψ_L2' = ψ_L2 ∘' φ and ζ_L1 ∘ ψ_L1 = ζ_L2 ∘ ψ_L2, then ζ_L1 ∘ ψ_L1' = ζ_L2 ∘ ψ_L2'.
Here ∘' is the composition involving the added morphism φ, which decides how to compose the new morphism with the old ones:

1. For ψ_R : S' → S, the composite ψ_R' = φ ∘' ψ_R is a morphism ψ_R' : S' → F.
2. For ψ_L : F → F', the composite ψ_L' = ψ_L ∘' φ is a morphism ψ_L' : S → F'.
One only needs to define the composition rules for the nondecomposable ψ_R and ψ_L. There are multiple choices of ψ_R' : S' → F as the result of φ ∘' ψ_R, and each choice can be denoted as a tuple (ψ_R, ψ_R') called a coangle. Similarly, a choice of the composition rule in the other direction can be denoted as a tuple (ψ_L, ψ_L') called an angle. Angles and coangles form a vertex set, where they are grouped by ψ_R and ψ_L. To make a composition rule, one should choose one vertex from each group.
The third constraint with ψ_L1 = ψ_L2 excludes some angles. A fork of a morphism ψ is a pair of distinct morphisms ζ, ζ' such that ζ ∘ ψ = ζ' ∘ ψ. An angle (ψ_L, ψ_L') is valid if every fork of the morphism ψ_L is also a fork of the morphism ψ_L'. Similarly, some coangles are excluded by the second constraint.
The first constraint limits how to choose ψ_L' and ψ_R' for each pair of ψ_L and ψ_R. For a valid choice of the pair (ψ_L', ψ_R'), we draw an edge between the angle (ψ_L, ψ_L') and the coangle (ψ_R, ψ_R'). This construction makes the induced subgraph on the vertex groups of ψ_L and ψ_R form a biclique cover (a disjoint union of complete bipartite graphs).
The second constraint also limits how to choose ψ_R1' and ψ_R2' for each pair of ψ_R1 and ψ_R2. Define a pseudo-equalizer between ψ_R1 and ψ_R2 as a pair of morphisms (ζ, ζ') such that ψ_R1 ∘ ζ = ψ_R2 ∘ ζ'. Two coangles (ψ_R1, ψ_R1') and (ψ_R2, ψ_R2') are compatible if every pseudo-equalizer between ψ_R1 and ψ_R2 is also a pseudo-equalizer between ψ_R1' and ψ_R2'. Note that one only needs to check the minimal pseudo-equalizers between ψ_R1 and ψ_R2. For a valid choice of the pair (ψ_R1', ψ_R2'), we draw an edge between the coangles (ψ_R1, ψ_R1') and (ψ_R2, ψ_R2'), and the induced subgraph also forms a biclique cover. The same analysis can be applied to the third constraint.
Finally, this leads to a graph problem: find a complete subgraph of a semi-complete multipartite graph by selecting one vertex from each group. A semi-complete multipartite graph is a multipartite graph in which the induced subgraph on each pair of groups is a biclique cover. In practical geometric applications, it is not necessary to actually enumerate all solutions of this problem, since the selections are made by geometric calculations. The constraint can instead be used to check whether the geometric calculations conflict, or to reduce the amount of calculation by some strategy: for instance, one can use information theory to estimate the entropy of the selection event for each group and compute the group with the largest entropy first.
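As a hedged illustration of the abstract selection problem only, independent of the geometry and of the library's types, the following sketch picks one vertex from each group by naive backtracking over a compatibility predicate.

    -- Choose one vertex per group of a multipartite graph so that all chosen
    -- vertices are pairwise compatible. In the real setting `compatible`
    -- would be derived from the biclique covers; here it is an arbitrary
    -- example predicate.
    type Vertex = (Int, Int)  -- (group index, choice index)

    selectOnePerGroup :: (Vertex -> Vertex -> Bool) -> [[Vertex]] -> Maybe [Vertex]
    selectOnePerGroup compatible = go []
      where
        go chosen []             = Just (reverse chosen)
        go chosen (group : rest) =
          case [v | v <- group, all (compatible v) chosen] of
            []      -> Nothing
            choices -> firstJust [go (v : chosen) rest | v <- choices]
        firstJust xs = case [r | Just r <- xs] of
          (r : _) -> Just r
          []      -> Nothing

    main :: IO ()
    main = do
      let groups = [[(g, i) | i <- [0, 1]] | g <- [0, 1, 2]]
          compatible (_, i) (_, j) = i == j  -- toy rule: choices must agree
      print (selectOnePerGroup compatible groups)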
In the program, this process adds a symbol to a node and keeps all other unrelated wires unchanged. The wires connected to this point should also be added. It's fine to add an incoming wire, but it is impossible to add an outgoing wire to an existing outgoing edge. That means the added symbol should be nondecomposable, and a new edge should be added such that its base wire is connected to this point (see Figure <ref>). The target node of the added edge may be unreachable from the given node. One should also note that the added edge must not form a directed loop in the graph.

Each added wire can be determined by the following process: for any pair of connected edges in which one is the added edge, a base wire of this path should be specified so that a wire can be added as in Figure <ref>. Some wires are determined by supportivity. Such choices are not arbitrary: supportivity must still hold after adding wires. For a pair of connected edges in which the second is the added edge (the left one of Figure <ref>), its base wire (the lower purple line) together with two wires forms a triangle, and all added outgoing wires (the blue dashed lines) should be supported by this triangle. Different triangles must agree with each other. Similarly, for a pair of connected edges in which the first is the added edge (the right one of Figure <ref>), its base wire (the lower green line) together with two wires forms a triangle, and some added outgoing wires (the blue dashed lines) should be supported by this triangle. Different triangles should agree with each other. Moreover, if there is a circle formed by two base wires of the target node (see Figure <ref>), the pair of corresponding added wires should be supported by that circle.
The possible choices can be obtained by a helper function , which returns two groups of picklists. The second group contains picklists of angles, which are used to determine the outgoing wires (the right one of Figure <ref>). The first group contains picklists of coangles, which are used to determine the incoming wires (the left one of Figure <ref>). The user should select one angle or coangle from each picklist. This alone does not guarantee a valid choice; the validity of the chosen angles and coangles can be checked by the functions and .

The process of adding a non-terminal morphism is implemented as a function with parameters |src, tgt, sym :: Symbol|, |src_alts :: [Coangle]| and |tgt_alts :: [Angle]|. and indicate the source and target objects of the added morphism, and will indicate the added morphism. is the list of picked coangles, and is the list of picked angles.
§.§ Add a Terminal Morphism
Above we discussed how to add a non-terminal morphism between two reachable objects. It is not the complete inverse of "remove a morphism", because there are two special cases: removing an initial or a terminal morphism results in the object itself being removed, which also drops a lot of information, so it cannot simply be added back. The reverse processes of these two cases cannot be done by that method.
From the categorical perspective, to add a terminal morphism τ : F → 𝟙, one should also add the object F and its incoming morphisms. An incoming morphism, say ψ : S → F, and the added terminal morphism τ should be composable, and the result, denoted τ ∘' ψ, should be assigned to a unique terminal morphism ζ : S → 𝟙. The compositions between the added incoming morphisms ψ : S → F and existing morphisms, say ξ : V → S, also need to be given; the result, denoted ψ ∘″ ξ, should be assigned to another incoming morphism ψ' : V → F. Also, the new composition rules ∘″ and ∘' should respect the original ones: for two compositions ψ' = ψ ∘″ ξ and ζ = τ ∘' ψ, there is ζ' = τ ∘' ψ' for ζ' = ζ ∘ ξ.

Consider an added incoming morphism ψ : S → F: for any two parallel incoming morphisms ξ, ξ' : V → S, one should decide whether ψ ∘″ ξ and ψ ∘″ ξ' are equivalent or not. Since this is the reverse of removing a terminal morphism, such information was dropped when the terminal morphism was removed. To minimize the input information, we choose the most trivial rule: let ψ ∘″ ξ and ψ ∘″ ξ' be assigned to different morphisms whenever ξ ≠ ξ'. We further assume that the source objects of all added incoming morphisms have a unique maximum S, so that all other incoming morphisms can be uniquely derived from the incoming morphism ψ : S → F. In this setup, the new composition rules automatically respect the original ones. This operation can also be seen as inserting an object in the middle of a terminal morphism, so that it is decomposed into the two added morphisms.
Using this simplified operation, the reverse process of removing a terminal morphism can now be built. First, insert an object in the middle of a terminal morphism, which introduces no additional structure. Second, repeat the first step and merge all added objects by merging their initial morphisms (see Merge Non-Terminal Morphisms), so that all incoming morphisms of all added objects are disjointly unioned together. Third, merge some incoming morphisms of the added objects (see Merge Non-Terminal Morphisms), so that the additional structure can be encoded.

From the geometrical perspective, this corresponds to "add a trivial facet", which is the reverse of "remove a trivial facet". A facet is said to be trivial if it has no superfacet and only one direct subfacet, and its incidence relations are directly inherited from this subfacet. This is like extruding a facet, which makes a superfacet on top of it. No subfacets are glued after extruding (no new incidence relations are created).
In the program, it is a process that adds a leaf node to a node. A symbol is also added to the node, connected to the added leaf node. The wires to the leaf node should also be added for every ancestor node, and they should have the same shape as the base wires of the node (see Figure <ref>). It is implemented as a function with parameters |src, sym :: Symbol| and |inserter :: (Symbol, Symbol) -> Symbol|. indicates the object whose terminal morphism will be interpolated, and will indicate the only morphism from that object to the inserted object. For each incoming morphism of the object , say , the pair of symbols will indicate the incoming morphism of the inserted object with the same source object.
§.§ Add an Initial Morphism
The opposite of the above operation is to add an initial morphism. To add an initial morphism σ : ∅ → F, one should also add the object F and its outgoing morphisms. An outgoing morphism, say ψ : F → S, and the added initial morphism σ should be composable, and the result, denoted ψ ∘' σ, should be assigned to a unique initial morphism ζ : ∅ → S. The compositions between the added outgoing morphisms ψ : F → S and existing morphisms, say ξ : S → V, also need to be given; the result, denoted ξ ∘″ ψ, should be assigned to another outgoing morphism ψ' : F → V. Also, the new composition rules ∘″ and ∘' should respect the original ones: for two compositions ψ' = ξ ∘″ ψ and ζ = ψ ∘' σ, there is ζ' = ψ' ∘' σ for ζ' = ξ ∘ ζ.
Similarly, we only consider the most trivial case: there is a minimal object S such that the added outgoing morphism ψ : F → S uniquely determines the other outgoing morphisms by composition, i.e. ξ ∘″ ψ and ξ' ∘″ ψ are different outgoing morphisms iff ξ ≠ ξ'. This operation can also be seen as inserting an object in the middle of an initial morphism, so that it is decomposed into the two added morphisms.
The reverse process of removing an initial morphism can now be built. First, insert an object in the middle of an initial morphism. Second, repeat the first step and merge all added objects into one (see Merge Terminal Morphisms), so that all outgoing morphisms of all added objects are disjointly unioned together. Third, merge some outgoing morphisms of the added objects (see Merge Non-Terminal Morphisms), so that the additional structure can be encoded.

From the geometrical perspective, this creates a boundaryless subfacet on a facet: for example, it creates a point on a plane, or a circle in a box. The created subfacet is placed in the interior of the facet, so that its incidence relations to its superfacets are directly inherited from the facet.
In the program, it is a process that adds a node in the middle of an arrow starting at the root (see Figure <ref>). The inserted node has only one edge, and its dictionary is one-to-one, so there is no additional structure. A corresponding symbol should be added to the root node, connected to the base point of the added node. It is implemented as a function with parameters |tgt, sym :: Symbol| and |mapping :: Dict|, where indicates the initial morphism to be interpolated, will indicate the incoming morphism of the added object, and is the dictionary of the outgoing morphism of the added object.
§.§ Split a Non-Terminal Morphism
"Disconnect" is used to separate the disconnected components of a disconnected facet; such an ill-formed facet may be produced after claiming an incidence relation. For example, after claiming that a circle is covered by a sphere, the sphere becomes disconnected: it has been cut into a cap and a cup. Local connectedness can also be manipulated in this way.
The connectedness of a facet is manifested by the identity of morphisms, so from the categorical perspective, this operation corresponds to splitting a morphism into multiple parts, each part representing one connected component. After splitting the target morphism, one should also figure out how to modify the multiplication table. The compositions of the target morphism with other morphisms are duplicated for each split part. For the compositions that result in the target morphism, things are more complicated: we need to find all morphism chains of the target morphism and determine which chain composes to which split morphism. For example, a sphere with a circle on it can be split into a cap and a cup; one covers this circle in the positively-oriented way, the other in the negatively-oriented way. Splitting the maximal chains means breaking equivalence relations, which is not always possible; one should find a way to break equivalence relations consistently. Every equivalence relation to be broken should be a minimal equivalence relation: for an equivalence relation ψ_1 ∘ … ∘ ψ_n = ψ_1' ∘ … ∘ ψ_t', there is no non-trivial subpath decomposition (ψ_1 ∘ … ∘ ψ_m) ∘ (ψ_{m+1} ∘ … ∘ ψ_n) = (ψ_1' ∘ … ∘ ψ_s') ∘ (ψ_{s+1}' ∘ … ∘ ψ_t') such that ψ_1 ∘ … ∘ ψ_m = ψ_1' ∘ … ∘ ψ_s' and ψ_{m+1} ∘ … ∘ ψ_n = ψ_{s+1}' ∘ … ∘ ψ_t'. If one tries to break a relation that is not minimal, one of its sub-relations should also be broken; in that case, one should break the sub-relation first in the same way. So the broken equivalence relations are restricted to be minimal. The constraint of minimal equivalence relations corresponds exactly to the condition of linked objects of the section category of the target morphism; from this point of view, this process just separates out splittable parts of a category.
In the program, this process splits a symbol on a node into two, and keeps all other unrelated wires unchanged. Focusing on the wires passing through the point to be split: each incoming wire should be split in two, and each outgoing wire should choose to connect to one of the split points (see the left one of Figure <ref>). There is a special case: if an outgoing wire directly connects to a base point, that edge is duplicated so that there are two base wires connecting to the two split points respectively (see the right one of Figure <ref>).

The new dictionaries of incoming edges should map the two split symbols to the same symbol as before, and the new dictionaries of outgoing edges should map to one of the split symbols: it must be decided where the mapping to the old symbol now maps, and such choices aren't arbitrary. Considering two outgoing wires starting at the point to be split, they should be modified to start at the same split symbol if they are supported by a circle of two base wires (see Figure <ref>). Since such base wires are not modified, supportivity ensures that they should start at the same point. This constraint is the same as the condition of linked objects.
A function is defined to determine the splittable parts, which has a parameter |tgt :: Symbol| indicating the morphism to partition. This function searches for all 3-chains of the given morphism in which the first and the third arrows are edges. For each 3-chain, its two direct subchains indicate two linked minimal and maximal objects. The return value is a list of groups of tuples of symbols; each group contains all symbols on child nodes that should be mapped to the same split symbol. The main process is implemented as a function , which has two parameters |(src, tgt) :: (Symbol, Symbol)| and |partition :: [(Symbol, [(Symbol, Symbol)])]|. for all indicates a split morphism, and the morphism chains in the group will compose to this morphism. A split morphism to which no morphism chain composes is nondecomposable.
§.§ Split a Terminal Morphism/Category
Splitting a non-terminal morphism corresponds to disconnecting a facet or a vertex figure, while splitting a terminal morphism, which leads to the source object being split, corresponds to splitting a facet from two sides, so we call this process "split". For example, a point covered by two segments in two directions can be split into two points, so that each is covered by one of the two segments. This is useful when one needs to separate two disconnected facets with shared boundaries. Complex shared boundaries can be split from top to bottom using "split". For example, to split two glued cubes, one needs to split the shared facet first, then the shared edges and the shared vertices. After that, one can split this category into two, and rotating or moving the parts independently now works. Splitting the terminal morphism of the initial object is a special case, which is just splitting a category into multiple categories; this is the reverse process of merging categories.
From the categorical perspective, splitting a terminal morphism τ : F → 𝟙 into N parts is special, since the object F is also split. This process can be further decomposed into: duplicate all incoming morphisms of the object F, then split the initial morphism σ : ∅ → F, the terminal morphism τ : F → 𝟙 and the object F simultaneously. To duplicate all incoming morphisms, one should start with the minimal ones, so that the longer ones can be duplicated coherently. Denote the duplications of an incoming morphism ψ' : P → F as (ψ_n' : P → F)_{n=1∼N}. "Duplicate" means that they behave the same up to indices, that is, ξ ∘ ψ_n' = ξ ∘ ψ_m' for all ξ : F → G, and ψ_n' ∘ ζ = ψ_n″ if and only if ψ_m' ∘ ζ = ψ_m″ for all ζ : Q → P, where (ψ_n″ : Q → F)_{n=1∼N} are the duplications of another incoming morphism. To split the initial and terminal morphisms and the given object simultaneously, the target object of the incoming morphisms and the source object of the outgoing morphisms will change. Let's say the outgoing morphism ξ_m : F → G, which is assigned to the m-th group, now becomes ξ̃_m : F_m → G, and the incoming morphisms (ψ_n' : P → F)_{n=1∼N} now become (ψ̃_n' : P → F_n)_{n=1∼N}. For any pair of an incoming morphism ψ̃_n' and an outgoing morphism ξ̃_m with n ≠ m, the composition between them is no longer possible, so such cases are removed from the multiplication table. Before this step there is a morphism η = ξ_m ∘ ψ_n', which will not be removed, because for any incoming morphism ψ̃_n' : P → F_n there is ψ̃_m' : P → F_m such that ξ̃_m ∘ ψ̃_m' = η. Splitting an initial morphism satisfies the same statements in the opposite direction.
In the program, this process splits a node into multiple parts. The symbols on this node (except the base symbol) should be distributed among the parts, as should the wires connected to them (see the left one of Figure <ref>). Symbols that support or are supported by the same symbol should be distributed to the same part (see Figure <ref>). All wires that connect to the base symbol of this node should be duplicated, so all passing points will also be duplicated (see the right one of Figure <ref>). The special case of splitting a category should be defined in another function since it has a different return type, but because it is equivalent from the categorical perspective, the criterion is the same.
To determine the splittable parts, a utility function is also defined, which is similar to . The function finds splittable groups of symbols by applying a union-find algorithm to all values of the dictionaries of the edges. The process of splitting a category is defined as a function with a parameter |partition :: [[Symbol]]|, the groups of symbols representing each split category. Split categories are built just by separating the edges of the root via . The process of splitting an object is defined as a function with parameters |tgt :: Symbol|, the symbol of the object to split, and |partition :: [((Symbol, Symbol) -> Symbol, [Symbol])]|. For each incoming morphism of the object to split, say , the pair of symbols |(s1, splitter (s1, s2))| for will indicate the incoming morphism of the split object with the same source object.
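The grouping step alone can be illustrated as follows. This hedged sketch computes the splittable groups as connected components of "must stay together" pairs, using a simple merge-sets fold in place of a genuine union-find; the function name and types are assumptions.

    import Data.List (partition)
    import qualified Data.Set as Set
    import Data.Set (Set)

    type Symbol = Int

    -- Group symbols into splittable parts: two symbols related by a
    -- constraint pair must end up in the same part. In the real library the
    -- pairs would be derived from the dictionaries of the node's edges.
    splittableGroups :: [Symbol] -> [(Symbol, Symbol)] -> [Set Symbol]
    splittableGroups syms pairs = foldl step [Set.singleton s | s <- syms] pairs
      where
        step groups (a, b) =
          let (linked, others) = partition (\g -> Set.member a g || Set.member b g) groups
          in Set.unions linked : others

    main :: IO ()
    main =
      -- symbols 1..4; 1-2 and 2-3 must stay together, 4 stays alone
      print (splittableGroups [1, 2, 3, 4] [(1, 2), (2, 3)])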
§.§ Merge Non-Terminal Morphisms
The reverse process of "disconnect" is "connect". It is usually used to union two disjoint facets. This is needed when, for example, merging two glued squares into one rectangle: before removing the boundary between them, one should union the areas of the two squares. At that moment, the merged geometric object becomes disconnected. The facets to be merged should be covered by the same facets. For example, you cannot merge two glued squares that are faces of two different cubes; you should first merge the two cubes.
From the categorical perspective, this process merges multiple non-terminal morphisms into one. Merging two morphisms ψ_0 and ψ_0' means letting ψ_0 = ψ_0', which implies equivalence relations ψ_n ∘ … ∘ ψ_1 ∘ ψ_0 = ψ_m' ∘ … ∘ ψ_1' ∘ ψ_0' between their morphism chains ⟨ψ_n, …, ψ_2, ψ_1⟩ and ⟨ψ_m', …, ψ_2', ψ_1'⟩. To merge two morphisms ψ : S → F and ψ' : S → F, if there is a morphism ζ : P → S, the equivalence relation ψ ∘ ζ = ψ' ∘ ζ should already exist. Similarly, if there is a morphism ξ : F → V, the equivalence relation ξ ∘ ψ = ξ ∘ ψ' should already exist. When merging two initial morphisms σ : ∅ → F and σ' : ∅ → F', the target objects of the morphisms are not the same. In this case, one should find an upper isomorphism of their upper categories, that is, an isomorphism μ from the upper category of F' to the upper category of F such that their downward functors agree under it.
In the program, this process merges multiple symbols on a node, and keeps all other unrelated wires unchanged. Each incoming edge of this node should map the symbols to be merged to the same point. The nodes referenced by these symbols should be the same, and their dictionaries should be the same except for the base entry, otherwise supportivity will be violated (see Figure <ref>). If their dictionaries map one symbol to two different symbols, the user should first merge those symbols. When merging non-initial morphisms, there is no need to check whether they have the same target node, since that is already implied by another requirement; but for merging initial morphisms, this check is important. To merge initial morphisms, the target nodes should be equivalent as data structures. If they are equivalent as categories but not as data structures, they should be unified via and . Only the target nodes need to be relabeled and rewired, since their descendants are already the same, as implied by another requirement.
The process of merging morphisms is implemented as a function with parameters |(src, tgts) :: (Symbol, [Symbol])|, indicating the morphisms to merge, and the merged symbol , so that will indicate the merged morphism. When merging initial morphisms, it checks whether the structures of the target nodes are the same; users should unify the target nodes before merging.
§.§ Merge Terminal Morphisms
In the previous subsection, we discussed the process of merging non-terminal morphisms, called "connect", which is the reverse of "disconnect". Merging terminal morphisms is called "merge", which is the reverse of "split". It can be used to merge facets: for example, glue two squares by merging their edges, so that the resulting edge is now covered by both squares. The facets to be merged should cover the same facets; in the above case, before merging the edges, their vertices should be merged first.
A categorical perspective can be obtained by reversing the discussion of splitting a terminal morphism. Merging N terminal morphisms (τ_n : F_n → 𝟙)_{n=1∼N} is special, since their source objects are different. This process can be further decomposed into: merge the initial morphisms (σ_n : ∅ → F_n)_{n=1∼N}, the terminal morphisms (τ_n : F_n → 𝟙)_{n=1∼N} and the objects (F_n)_{n=1∼N} simultaneously, then un-duplicate all incoming morphisms of the merged object. To merge the initial and terminal morphisms and the objects simultaneously, the target object of the incoming morphisms and the source object of the outgoing morphisms will change. Let's say the outgoing morphism ξ_m : F_m → G now becomes ξ̃_m : F → G, and the incoming morphism ψ_n' : P → F_n now becomes ψ̃_n' : P → F. The incoming morphisms have been grouped into families (ψ̃_n' : P → F)_{n=1∼N} according to certain properties, so that the second step can be done. For any pair of an incoming morphism ψ̃_n' and an outgoing morphism ξ̃_m with n ≠ m, the composition between them is now possible, so such cases should be added to the multiplication table. The result of ξ̃_m ∘ ψ̃_n' should be set to ξ̃_m ∘ ψ̃_m', where ψ̃_n' and ψ̃_m' are in the same group. To un-duplicate all incoming morphisms, one should start with the maximal ones, so that the shorter ones can be un-duplicated coherently. Each group of incoming morphisms (ψ̃_n' : P → F)_{n=1∼N} will be identified as the same morphism, then denoted ψ̃' : P → F, which is called an un-duplication. They can be "un-duplicated" only if they behave the same up to indices, that is, ξ ∘ ψ̃_n' = ξ ∘ ψ̃_m' for all ξ : F → G, and ψ̃_n' ∘ ζ = ψ̃_n″ if and only if ψ̃_m' ∘ ζ = ψ̃_m″ for all ζ : Q → P, where ψ̃_n' : P → F and ψ̃_m' : P → F are in the same group, and ψ̃_n″ : Q → F and ψ̃_m″ : Q → F are in the same group.
To conclude the above discussion: to merge terminal morphisms (τ_n : F_n → 𝟙)_{n=1∼N}, the objects (F_n)_{n=1∼N} should have the same lower closure on the induced poset, except for the upper bounds. Furthermore, their incoming morphisms should be grouped into families (ψ_n' : P → F_n)_{n=1∼N} such that they behave the same up to indices: τ_n ∘ ψ_n' = τ_m ∘ ψ_m', and ψ_n' ∘ ζ = ψ_n″ if and only if ψ_m' ∘ ζ = ψ_m″, for all ζ : Q → P and all families (ψ_n' : P → F_n)_{n=1∼N} and (ψ_n″ : Q → F_n)_{n=1∼N}. This requirement is equivalent to finding lower isomorphisms between them, which are isomorphisms μ_nm from the lower category of F_m to the lower category of F_n such that their upward functors agree under them.
In the program, this process merges multiple nodes into one node. All symbols (except the base symbols) on the nodes being merged are disjointly unioned. Wires connected to these symbols remain the same, but the wires connected to the base symbols should be modified, since they are now merged into one base symbol (see the left one of Figure <ref>). Consider a common parent of the nodes to be merged, which may have multiple edges to them (see the right one of Figure <ref>). To merge the nodes, these edges should also be merged, and it must be decided which base wire is merged into which; this causes some symbols on the parent node to be merged. All symbols referencing the nodes being merged also need to be merged coherently, where "coherent" means the shape of the base wires should remain the same after merging. Note that incorrectly merging incoming edges will result in inconsistent situations. For example, consider two triples of symbols on a node, say and , and assume there are some incoming wires: . In this case, merging and will lead to merging all four symbols , , , , which changes the configuration of the wires (see Figure <ref>).
It is implemented as a function with parameters |tgts_suffix :: [(Symbol, [(Symbol, Symbol)])]| and |merger :: (Symbol, [Symbol]) -> Symbol|, where contains the nodes to merge and the corresponding nondecomposable incoming edges, and is the function used to merge symbols on all ancestor nodes. All incoming morphisms of these objects, say , will be merged into the morphism indicated by the pair of symbols . The nondecomposable incoming edges of the nodes to merge will be paired up by the function according to the keys.
§ SUMMARY
Above we discussed the fundamental operations at the categorical level, which can be classified as follows:

1. remove: removing a non-terminal morphism, removing a terminal morphism.
2. add: adding a non-terminal morphism, adding a terminal morphism and adding an initial morphism.
3. split: splitting a non-terminal morphism and splitting a terminal morphism/category.
4. merge: merging non-terminal morphisms, merging terminal morphisms and merging categories.
These operations are fundamental because they directly manipulate morphisms, just as we do with graphs. Fundamental does not mean simple: some operations require multiple steps to complete. For example, to add a morphism, one must first analyze the situation and then select the required options, which are not arbitrary and may fail; the user should understand the mechanism behind it. Since initial and terminal morphisms are special, operations on them are more complicated and can be decomposed into multiple operations. For example, removing an initial or terminal morphism can be decomposed into removing all related morphisms and then splitting the category, which makes adding an initial or terminal morphism, as its reverse, very complicated. Splitting initial or terminal morphisms can also be decomposed into splitting all related morphisms and then splitting on duplications. A similar decomposition can be applied to merging initial or terminal morphisms, although one should provide isomorphisms between the objects to be merged. Note that adding/merging initial or terminal morphisms is much more complicated than removing/splitting them, and the complexity comes from the large amount of information that changes during these operations.
From different perspectives, the similarities between operations are different. From the categorical perspective, operations on initial and terminal morphisms are very different from operations on non-initial, non-terminal morphisms. From the programming perspective, however, dealing with initial morphisms and non-terminal morphisms is almost the same, but differs from dealing with terminal morphisms. The difference lies in the way the BAC stores its information. A node in a BAC contains the information of the upper closure of an object, which constitutes an upper category. This directional structure makes it lose the duality of categories; instead, it is more suitable for describing incidence structures, which makes the interpretation of geometric objects more intuitive. These complementary strengths drive the investigation of fundamental operations, and also provide a more stereoscopic approach to analyzing BACs.
There are some functions in the library we didn't mention: and are generalizations of and , and such generalizations are natural from the programming perspective; and are similar to and in the program; and provide a way to validate upper and lower isomorphisms and to traverse BACs in parallel.
There are some operations we didn't implement, such as isomorphism and the section category. Various operations on cell complexes or simplicial complexes, such as the Cartesian product, join and connected sum, can also be generalized. They are not necessary for building incidence structures, so they will not be discussed in this series of articles. In the next article, we will develop a computational geometry system to describe geometric objects consisting of curved facets in any dimension.